Globerson et al. ICML 2007. Exponentiated Gradient Algorithms for Log Linear Structured Prediction


Exponentiated gradient algorithms for log-linear structured prediction, by A. Globerson, T. Y. Koo, X. Carreras, and M. Collins. In Proceedings of the 24th International Conference on Machine Learning, 2007.

This paper is available online [1].

Under construction

Summary

This paper describes an exponentiated gradient (EG) algorithm for training conditional log-linear models: a fast and efficient method for optimizing structured models such as CRFs.

The conditional log-likelihood of a CRF is commonly optimized with conjugate-gradient or L-BFGS algorithms (Sha & Pereira, 2003), which typically require at least one pass through the entire dataset before updating the weight vector. The authors' approach here is an online algorithm based on exponentiated gradient updates (Kivinen & Warmuth, 1997).
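
To make the batch-versus-online contrast concrete, here is a minimal Python sketch of the two update schedules. It is not code from the paper: the per-example gradient is a toy stand-in, and the actual EG update is multiplicative on dual variables rather than additive on <math>\mathbf{w}</math> (see the sketch further below).

import numpy as np

rng = np.random.default_rng(0)
# Toy dataset of (feature vector, target) pairs standing in for (x_i, y_i).
data = [(rng.standard_normal(4), rng.standard_normal()) for _ in range(100)]
w = np.zeros(4)
eta = 0.01

def example_gradient(w, x, y):
    # Toy stand-in for the gradient of log p(y|x; w) on one example;
    # a real CRF gradient involves expected feature counts.
    return x * (y - x @ w)

# Batch schedule (conjugate gradient / L-BFGS): the entire dataset is
# visited before the weight vector changes once.
full_grad = sum(example_gradient(w, x, y) for x, y in data)
w = w + eta * full_grad

# Online schedule: an update is applied after every single example.
w = np.zeros(4)  # reset; the two schedules are alternatives, shown side by side
for x, y in data:
    w = w + eta * example_gradient(w, x, y)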

Brief description of the method

Consider a supervised learning setting with objects <math>x_i</math> and corresponding labels <math>y_i</math>, which may be trees, sequences, or other high-dimensional structures. Also, assume we are given a function <math>\phi</math> that maps pairs <math>(x,y)</math> to feature vectors <math>\phi(x,y)\in\mathbb{R}^d</math>. Given a parameter vector <math>\mathbf{w}\in\mathbb{R}^d</math>, a conditional log-linear model defines a distribution over labels as:

<math>p(y|x;\mathbf{w})=\frac{1}{Z(x;\mathbf{w})}\exp\left(\mathbf{w}\cdot\phi(x,y)\right)</math>

where <math>Z(x;\mathbf{w})=\sum_{\hat{y}}\exp\left(\mathbf{w}\cdot\phi(x,\hat{y})\right)</math> is a partition function.
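
When the label set is small enough to enumerate, this distribution can be computed directly. Below is a minimal Python sketch (not from the paper); for structured outputs such as sequences or trees, the sum defining <math>Z(x;\mathbf{w})</math> must instead be computed with dynamic programming.

import numpy as np

def log_linear_probs(w, features):
    """Return p(y|x; w) for every label y of a single input x.

    `features` is a (num_labels, d) array whose row y holds phi(x, y)."""
    scores = features @ w       # w . phi(x, y) for each label y
    scores -= scores.max()      # stabilize the exponentials
    expd = np.exp(scores)
    return expd / expd.sum()    # normalize by the partition function Z(x; w)

# Example: 3 candidate labels with 4-dimensional feature vectors.
rng = np.random.default_rng(1)
phi = rng.standard_normal((3, 4))
w = rng.standard_normal(4)
print(log_linear_probs(w, phi))  # entries sum to 1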

The problem of learning <math>\mathbf{w}</math> from the training data is thus finding the <math>\mathbf{w}</math> which maximizes the regularized log-likelihood:

<math>\mathbf{w}^{*}=\arg\max_{\mathbf{w}}\sum_i\log p(y_i|x_i;\mathbf{w})-\frac{C}{2}\lVert\mathbf{w}\rVert^2</math>

where <math>C</math> is the regularization parameter. The above equation has a convex dual, which is derived in Lebanon and Lafferty (2001). With dual variables <math>\alpha_{i,y}</math> and <math>\boldsymbol{\alpha}=[\boldsymbol{\alpha}_1,\boldsymbol{\alpha}_2,\cdots,\boldsymbol{\alpha}_n]</math>, we define the dual objective <math>Q(\boldsymbol{\alpha})</math>.
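
The update that gives the algorithm its name follows the exponentiated gradient form of Kivinen & Warmuth (1997): each dual variable is multiplied by the exponentiated negative gradient of <math>Q</math> and then renormalized, so every <math>\boldsymbol{\alpha}_i</math> remains a distribution over labels. A minimal Python sketch, with a placeholder gradient standing in for <math>\partial Q/\partial\alpha_{i,y}</math>:

import numpy as np

def eg_step(alpha_i, grad_i, eta):
    """One exponentiated-gradient update of one example's dual distribution:
    multiply by exp(-eta * gradient), then renormalize."""
    unnorm = alpha_i * np.exp(-eta * grad_i)
    return unnorm / unnorm.sum()

# Example: a uniform dual distribution over 3 labels and a placeholder
# gradient (in the paper the gradient is derived from Q(alpha)).
alpha_i = np.ones(3) / 3
grad_i = np.array([0.5, -0.2, 0.1])
alpha_i = eg_step(alpha_i, grad_i, eta=0.5)
print(alpha_i, alpha_i.sum())  # still a probability distribution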

Experimental Result

This approach was fairly successful on a range of review-classification tasks: it achieved accuracies between 65% and 85% in predicting an author-assigned "recommended" flag for Epinions ratings for eight diverse products, ranging from cars to movies. Many later writers used several key ideas from the paper, including: treating polarity prediction as a document-classification problem; classifying documents based on likely-to-be-informative phrases; and using unsupervised or semi-supervised learning methods.

Related papers

The widely cited Pang et al., EMNLP 2002 paper was influenced by this paper, but considers supervised learning techniques. The choice of movie reviews as the domain was suggested by the (relatively) poor performance of Turney's method on movies.

An interesting follow-up paper is Turney and Littman, TOIS 2003, which focuses on evaluating the technique of using PMI to predict the semantic orientation of words.