Generalized Expectation Criteria

== Summary ==
This can be viewed as a parameter estimation method that can augment or replace traditional parameter estimation methods such as maximum likelihood estimation. It can be combined with models such as [[AddressesProblem::Support Vector Machines]] or [[AddressesProblem::Conditional Random Fields]], and the resulting objective function can still be optimized efficiently, especially in the online setting; stochastic optimization methods are known to be faster when trained with large, redundant data sets.
== Expectation ==

Let <math>X</math> be some set of variables and let <math>\mathbf{x}\in\mathcal{X}</math> denote their assignments. Let <math>\theta</math> be the parameters of a model that defines a probability distribution <math>p_{\theta}(X)</math>. The expectation of a function <math>f(X)</math> according to the model is

<math>
E_{\theta}[f(X)] = \sum_{\mathbf{x}\in\mathcal{X}}{p_{\theta}(\mathbf{x})f(\mathbf{x})}
</math>

We can partition the variables into "input" variables <math>X</math> and "output" variables <math>Y</math> that are conditioned on the input variables. When a set of assignments <math>\tilde{\mathcal{X}}</math> of the input variables is provided, the conditional expectation is
<math>
E_{\theta}[f(X,Y)\vert\tilde{\mathcal{X}}]
=
{1\over\vert\tilde{\mathcal{X}}\vert}\sum_{\mathbf{x}\in\tilde{\mathcal{X}}}{\sum_{\mathbf{y}\in Y}{p_{\theta}(\mathbf{y}\vert\mathbf{x})f(\mathbf{x},\mathbf{y})}}
</math>
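To make the notation concrete, below is a minimal sketch (in Python) of the conditional expectation above for a small discrete model. The functions <code>p_y_given_x</code> and <code>feature</code> are hypothetical toy stand-ins for <math>p_{\theta}(\mathbf{y}\vert\mathbf{x})</math> and <math>f(\mathbf{x},\mathbf{y})</math>, not code from the referenced papers.

<pre>
# Minimal sketch of the conditional expectation above for a small discrete model.
# "p_y_given_x" stands in for p_theta(y|x) and "feature" for f(x, y); both are
# hypothetical toy functions, not code from the referenced papers.

def conditional_expectation(inputs, labels, p_y_given_x, feature):
    """Average over the provided inputs, summing over every output label."""
    total = 0.0
    for x in inputs:
        total += sum(p_y_given_x(y, x) * feature(x, y) for y in labels)
    return total / len(inputs)

# Toy usage: two "documents", binary labels, and a feature that fires
# when the label is 1 and the first input component is 1.
inputs = [(1, 0), (0, 1)]
labels = [0, 1]
p_y_given_x = lambda y, x: 0.7 if y == x[0] else 0.3
feature = lambda x, y: float(y == 1 and x[0] == 1)
print(conditional_expectation(inputs, labels, p_y_given_x, feature))  # 0.35
</pre>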
== Generalized Expectation ==

A generalized expectation (GE) criterion is a function <math>G</math> that takes the model's expectation of <math>f(X)</math> as an argument and returns a scalar. The criterion is then added as a term in the parameter estimation objective function:
<math>
G(E_{\theta}[f(X)])\rightarrow\mathbb{R}
</math>
Alternatively, <math>G</math> can be defined based on the distance to a target value for <math>E_{\theta}[f(X)]</math>. Let <math>\tilde{f}</math> be the target value and <math>\Delta(\cdot,\cdot)</math> be some distance function; then we can define <math>G</math> in the following way:

<math>
G_{\tilde{f}}(E_{\theta}[f(X)])
=
-\Delta(E_{\theta}[f(X)],\tilde{f})
</math>
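As one illustrative instantiation (the squared distance, the weight <math>\lambda</math>, and the labeled examples <math>(\mathbf{x}_i,\mathbf{y}_i)</math> are assumptions here, not choices made in the text above), such a GE term can be added to a conditional log-likelihood objective, written for a scalar <math>f</math>:

<math>
\mathcal{O}(\theta)
=
\sum_{i}{\log p_{\theta}(\mathbf{y}_i\vert\mathbf{x}_i)}
-
\lambda\left(E_{\theta}[f(X,Y)\vert\tilde{\mathcal{X}}]-\tilde{f}\right)^2
</math>

Maximizing this objective fits the labeled data while pushing the model expectation toward the target value <math>\tilde{f}</math>.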
== Stochastic Gradient Descent ==
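A parameter estimation objective that includes GE terms can be optimized with stochastic gradient methods. Below is a minimal, generic sketch; the callables <code>loglik_grad</code> and <code>ge_grad</code> are hypothetical placeholders for model-specific gradient code, not functions from the referenced papers.

<pre>
# Generic stochastic-gradient sketch for maximizing an objective that combines a
# log-likelihood term with a weighted GE term.  "loglik_grad" and "ge_grad" are
# hypothetical placeholders for model-specific gradient code.
import random
import numpy as np

def sgd(theta, data, loglik_grad, ge_grad, lam=1.0, eta=0.1, epochs=10):
    """data: list of (x, y) pairs; theta and the gradients are NumPy arrays."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(epochs):
        random.shuffle(data)                 # visit examples in random order
        for x, y in data:
            grad = loglik_grad(theta, x, y) + lam * ge_grad(theta)
            theta = theta + eta * grad       # one stochastic ascent step
        eta *= 0.9                           # shrink the step size over time
    return theta
</pre>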
== Pros ==

When this method is used on very large data sets that have redundant information among examples, it is much faster than plain gradient descent because each iteration requires less computation. It is also known to handle noisy data better, since it samples examples to compute the gradient.

== Cons ==

The convergence rate is slower than that of second-order gradient methods. However, the speedup coming from the computationally cheaper iterations is usually greater, and the method can converge faster if the learning rate is adjusted as the procedure goes on. It also tends to keep bouncing around the minimum unless the learning rate is reduced in later iterations.
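One illustrative way to adjust the learning rate (an assumed schedule, not one given above) is to decay it with the iteration count <math>t</math> for some constant <math>T</math>, e.g.

<math>
\eta_t = {\eta_0 \over 1 + t/T}
</math>

so that early iterations move quickly while later iterations damp the oscillation around the minimum.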
== Related Papers ==

* [[ RelatedPaper::Bellare_2009_generalized_expectation_criteria_for_bootstrapping_extractors_using_record_text_alignment ]]
* [[ RelatedPaper::Mann and McCallum, ICML 2007 ]]