GeneralizedIterativeScaling
From Cohen Courses
Revision as of 09:54, 27 September 2011
This is one of the earliest methods used for inference in log-linear models. Though more sophisticated and faster methods have since been developed, this method provides useful insight into log-linear models.
What problem does it address?
The objective of this method is to find a probability function of the form

<math>
(1) \quad \quad p_i = \pi_i \prod_{s} \mu_s^{b_{si}}, \quad i \in I
</math>

satisfying the constraints

<math>
(2) \quad \quad \sum_{i \in I} b_{si}p_i = k_s
</math>

where <math>I</math> is an index set over which the probability distribution has to be determined, <math>p</math> is a probability distribution, <math>\pi</math> is a subprobability function (it sums to at most 1, and <math>\pi_i \neq 0</math> for every <math>i</math>), and each <math>b_{si} \neq 0</math> is a constant.
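The classical iteration for this problem multiplicatively rescales the current estimate by the ratio of target to current expectations. A minimal sketch, assuming the usual normalization that the constants satisfy <math>\sum_s b_{si} = 1</math> for every <math>i</math>; all concrete numbers below (the matrix, targets, and prior) are made-up illustrative values:

```python
from math import prod

def gis(b, k, pi, iters=1000):
    """Sketch of generalized iterative scaling.

    b[s][i]: nonnegative constants with each column summing to 1,
    k[s]: target expectations, pi[i]: subprobability prior.
    """
    p = list(pi)               # start from the prior pi
    S = range(len(b))
    for _ in range(iters):
        # current expectations: k_s^(n) = sum_i b_si * p_i
        cur = [sum(b[s][i] * p[i] for i in range(len(p))) for s in S]
        # multiplicative update: p_i <- p_i * prod_s (k_s / k_s^(n))^{b_si}
        p = [p[i] * prod((k[s] / cur[s]) ** b[s][i] for s in S)
             for i in range(len(p))]
    return p

# made-up example with 3 outcomes and 2 constraints
b  = [[0.6, 0.3, 0.1],
      [0.4, 0.7, 0.9]]   # each column sums to 1
k  = [0.31, 0.69]        # consistent targets (expectations under q = [0.3, 0.3, 0.4])
pi = [0.3, 0.3, 0.3]     # subprobability prior
p  = gis(b, k, pi)       # at convergence, sum_i b_si * p_i ~= k_s
```

The update leaves any <math>p</math> already satisfying (2) unchanged, and each step keeps the exponential form (1) since it only multiplies in factors of the form <math>c_s^{b_{si}}</math>.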
Existence of a solution
If <math>p</math> of form (1) exists satisfying (2), then it minimizes <math>KL[p, \pi]</math> and is unique.
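This characterization can be checked numerically on a small example. Everything below is made-up for the sketch: the direction <code>v</code> is chosen so that <math>\sum_i b_{si} v_i = 0</math> for each <math>s</math>, so the line <code>p0 + t*v</code> stays inside the set of distributions satisfying (2); we then locate the KL minimizer on that line.

```python
from math import log

# hypothetical setup: 3 outcomes, 2 constraints
b  = [[0.6, 0.3, 0.1], [0.4, 0.7, 0.9]]
pi = [0.3, 0.3, 0.3]   # prior
p0 = [0.3, 0.3, 0.4]   # one distribution satisfying the constraints (2)
v  = [1.0, -2.5, 1.5]  # b @ v = 0 and sum(v) = 0, so p0 + t*v stays feasible

def kl(p, q):
    """KL divergence KL[p, q] = sum_i p_i log(p_i / q_i)."""
    return sum(x * log(x / y) for x, y in zip(p, q))

# grid-search the feasible line for the KL minimizer (t keeps p positive)
ts = [t / 1000 for t in range(-249, 115)]
best_t = min(ts, key=lambda t: kl([a + t * c for a, c in zip(p0, v)], pi))
p = [a + best_t * c for a, c in zip(p0, v)]
```

At the minimizer, <math>\log(p_i/\pi_i)</math> is (up to grid resolution) a linear combination of the <math>b_{si}</math>, i.e. <math>\sum_i v_i \log(p_i/\pi_i) \approx 0</math>: the KL minimizer over the feasible set is exactly the distribution of form (1), which is what the iteration converges to.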