Tackstrom and McDonald, ECIR 2011. Discovering fine-grained sentiment with latent variable structured prediction models

Citation

O. Tackstrom and R. McDonald. 2011. Discovering fine-grained sentiment with latent variable structured prediction models. In Proceedings of ECIR-2011, pp. 764–773, Dublin, Ireland.

Online Version

Discovering fine-grained sentiment with latent variable structured prediction models

Summary

This paper investigates the use of latent variable structured prediction models for fine-grained sentiment analysis in the common situation where only coarse-grained supervision is available. The authors show how sentence-level sentiment labels can be effectively learned from document-level supervision using hidden conditional random fields (HCRFs), and they demonstrate improvements over both lexicon-based and existing machine learning based approaches. The focus throughout is on sentence-level sentiment analysis.

Method

The authors observe that a large amount of coarse-grained annotation is freely available on the web in the form of consumer reviews of products, movies, and similar items. Fine-grained sentiment labels, by contrast, are difficult to obtain across domains for supervised learning. The authors therefore model the finer-grained information as latent variables, exploiting the freely available coarse-grained annotations with hierarchical graphical models such as HCRFs.

Based on observations about how sentiment is expressed in positive and negative reviews, the authors model sentence-level classifications as:

  • Correlated with the observed document label, and
  • Flexible enough to disagree when contextual evidence suggests otherwise.


Approach

They start with the supervised fine-to-coarse sentiment model described in McDonald et al., 2007.

Let <math>\mathbf{s} = (s_1, \ldots, s_n)</math> be a document consisting of <math>n</math> sentences, and let <math>y^d</math> and <math>\mathbf{y}^s = (y_1^s, \ldots, y_n^s)</math> be the random variables denoting the document-level sentiment and the sequence of sentence-level sentiments, respectively.


All random variables take values in <math>\{\mathrm{POS}, \mathrm{NEG}, \mathrm{NEU}\}</math>, for positive, negative, and neutral sentiment, respectively. The authors hypothesize that there is a sequential relationship between sentence sentiments and that the document sentiment is influenced by all sentences (and vice versa). A first-order Markov property is assumed, according to which each sentence variable <math>y_i^s</math> is independent of all other variables, conditioned on the document variable <math>y^d</math> and the adjacent sentence variables <math>y_{i-1}^s</math> and <math>y_{i+1}^s</math>.
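In the notation above, this Markov assumption can be written as the conditional independence below; keeping the sentence's own input <math>s_i</math> on the right-hand side is consistent with the clique structure shown in the figure that follows:

<math>p(y_i^s \mid y^d, \mathbf{y}^s_{\setminus i}, \mathbf{s}) = p(y_i^s \mid y^d, y_{i-1}^s, y_{i+1}^s, s_i)</math>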

The graphical model for this formulation is shown in the figure below:

[[File:Hcrf_1.jpg]]

In the figure above, a graphical model with latent sentence-level states is shown. Dark grey nodes are observed variables and white nodes are unobserved. Light grey nodes are observed only at training time. Dashed and dotted regions indicate the maximal cliques at position <math>i</math>.

In the HCRF model above, the conditional probability of the observed variables is obtained by marginalizing over the posited hidden variables:

<math>p_\theta(y^d \mid \mathbf{s}) = \sum_{\mathbf{y}^s} p_\theta(y^d, \mathbf{y}^s \mid \mathbf{s})</math>

As indicated in the figure above, there are two maximal cliques at each position <math>i</math>: one involving only the sentence <math>s_i</math> and its corresponding latent variable <math>y_i^s</math>, and one involving the consecutive latent variables <math>y_{i-1}^s</math> and <math>y_i^s</math> together with the document variable <math>y^d</math>.
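Given these cliques, the joint conditional distribution takes a log-linear form that factors over the two clique types. As a sketch, with feature functions <math>\phi</math> and <math>\psi</math> introduced here for illustration (the paper's exact feature definitions are not reproduced):

<math>p_\theta(y^d, \mathbf{y}^s \mid \mathbf{s}) \propto \prod_{i=1}^{n} \exp\left(\theta \cdot \phi(y_i^s, s_i)\right) \exp\left(\theta \cdot \psi(y_{i-1}^s, y_i^s, y^d)\right)</math>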

The assignment of the document variable <math>y^d</math> is thus independent of the input <math>\mathbf{s}</math>, conditioned on the sequence of latent sentence variables <math>\mathbf{y}^s</math>. This distinction is important for learning predictive latent variables, as it creates a bottleneck between the input sentences and the document label.

It was observed that, when training HCRFs, hard estimation gave slightly better performance than MAP estimation of the parameters with respect to the marginal conditional log-likelihood of the observed variables, assuming a Normal prior distribution.

Assuming <math>\mathcal{D} = \{(\mathbf{s}_j, y_j^d)\}_{j=1}^{m}</math> as the training set of document/document-label pairs, the parameter <math>\theta</math> is estimated in the following manner for HCRFs:

<math>\hat{\theta} = \arg\max_{\theta} \sum_{j=1}^{m} \log p_\theta(y_j^d \mid \mathbf{s}_j) - \frac{\|\theta\|^2}{2\sigma^2} \qquad (1)</math>

With hard estimation, the marginal <math>p_\theta(y_j^d \mid \mathbf{s}_j) = \sum_{\mathbf{y}^s} p_\theta(y_j^d, \mathbf{y}^s \mid \mathbf{s}_j)</math> inside the logarithm is replaced by <math>\max_{\mathbf{y}^s} p_\theta(y_j^d, \mathbf{y}^s \mid \mathbf{s}_j)</math>.

In <math>(1)</math>, the parameter <math>\theta</math> can be estimated using the stochastic gradient descent algorithm, run for 75 iterations with a fixed step size <math>\eta</math>.
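As a rough illustration of this training regime, here is a minimal sketch of stochastic gradient ascent on objective <math>(1)</math> with a fixed step size. The function grad_log_likelihood, its arguments, and the per-example scaling of the regularizer are assumptions made for illustration, not the paper's implementation:

<pre>
import random

def sgd_train(theta, data, grad_log_likelihood, eta=0.01, iterations=75, sigma_sq=1.0):
    """Stochastic gradient ascent on the regularized objective (1).

    theta: parameter vector (list of floats)
    data: list of (document, document_label) training pairs
    grad_log_likelihood: hypothetical function returning the gradient of
        log p_theta(y_d | s) for a single training example
    """
    m = len(data)
    for _ in range(iterations):            # fixed number of iterations, as in the paper
        random.shuffle(data)               # visit training examples in random order
        for s, y_d in data:
            g = grad_log_likelihood(theta, s, y_d)
            for k in range(len(theta)):
                # ascend the log-likelihood; the L2 penalty from the Normal
                # prior is split evenly across the m examples
                theta[k] += eta * (g[k] - theta[k] / (m * sigma_sq))
    return theta
</pre>

Because the step size is held fixed rather than decayed, the iterates need not converge exactly; the procedure simply stops after the fixed budget of 75 iterations.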

The Viterbi algorithm is used to compute the optimal assignment of <math>(y^d, \mathbf{y}^s)</math> in equation <math>(2)</math>:

<math>(\hat{y}^d, \hat{\mathbf{y}}^s) = \arg\max_{y^d, \mathbf{y}^s} p_\theta(y^d, \mathbf{y}^s \mid \mathbf{s}) \qquad (2)</math>
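Because the model reduces to a first-order chain once the document variable is fixed, joint decoding can enumerate the candidate document labels and run standard Viterbi over the sentence labels for each. The following is a minimal sketch under that reading; the score tables phi and psi are hypothetical stand-ins for the theta-weighted clique potentials:

<pre>
def viterbi_joint(n_sentences, labels, phi, psi):
    """Joint decoding over (y_d, y_s), in the spirit of equation (2).

    labels: candidate sentiment values, e.g. ["POS", "NEG", "NEU"]
    phi[i][y]: score of labeling sentence i with y (clique (s_i, y_i))
    psi[(y_prev, y, y_d)]: score of the clique (y_{i-1}, y_i, y_d);
        y_prev is None at the first position
    Both tables are placeholders standing in for learned potentials.
    """
    best = (float("-inf"), None, None)
    for y_d in labels:                     # enumerate document labels
        # delta[y] = best score of a prefix ending with sentence label y
        delta = {y: phi[0][y] + psi[(None, y, y_d)] for y in labels}
        back = []                          # back-pointers per position
        for i in range(1, n_sentences):
            new_delta, ptr = {}, {}
            for y in labels:
                prev = max(labels, key=lambda yp: delta[yp] + psi[(yp, y, y_d)])
                new_delta[y] = delta[prev] + psi[(prev, y, y_d)] + phi[i][y]
                ptr[y] = prev
            delta = new_delta
            back.append(ptr)
        y_last = max(labels, key=lambda y: delta[y])
        if delta[y_last] > best[0]:
            # recover the sentence sequence by following back-pointers
            seq = [y_last]
            for ptr in reversed(back):
                seq.append(ptr[seq[-1]])
            best = (delta[y_last], y_d, seq[::-1])
    return best  # (score, document label, sentence label sequence)
</pre>

For this sketch, decoding a document of <math>n</math> sentences over a label set <math>Y</math> costs <math>O(n|Y|^3)</math> time, since the transition potentials are conditioned on the enumerated document label.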

Experiments and Results

Datasets

Evaluation Metric

Results

Related Papers