Tackstrom and McDonald, ECIR 2011. Discovering fine-grained sentiment with latent variable structured prediction models

Citation

O. Tackstrom and R. McDonald. 2011. Discovering fine-grained sentiment with latent variable structured prediction models. In Proceedings of ECIR-2011, pp 764–773, Dublin, Ireland.

Online Version

Discovering fine-grained sentiment with latent variable structured prediction models

Summary

This paper investigates the use of latent variable structured prediction models for fine-grained sentiment analysis in the common situation where only coarse-grained supervision is available. The authors show how sentence-level sentiment labels can be effectively learned from document-level supervision using hidden conditional random fields (HCRFs), demonstrating improvements over both lexicon-based and existing machine-learning approaches. The focus is on sentence-level sentiment analysis.

Method

The authors observe that a large amount of coarse-level annotation is freely available on the web in the form of consumer reviews of products, movies, and so on. Fine-grained sentiment labels, by contrast, are difficult to obtain across domains for supervised learning. The authors therefore model the finer-level information as latent variables in hierarchical graphical models such as HCRFs, making use of the freely available coarse-level annotations.

Based on observations about positive and negative reviews in documents, the authors model sentence-level labels as:

  • Correlated with the observed document label and,
  • Flexible enough to disagree when contextual evidence suggests otherwise.


Approach

They start with the supervised fine-to-coarse sentiment model described in McDonald et al., 2007.

Let <math>d</math> be a document consisting of <math>n</math> sentences, <math>s = (s_i)_{i=1}^{n}</math>. Let <math>y = (y^d, \mathbf{y}^s)</math> denote the random variables that include the document-level sentiment, <math>y^d</math>, and the sequence of sentence-level sentiment, <math>\mathbf{y}^s = (y_i^s)_{i=1}^{n}</math>.


All random variables take values in <math>\{\mathrm{POS}, \mathrm{NEG}, \mathrm{NEU}\}</math> for positive, negative and neutral sentiment, respectively. The authors hypothesize that there is a sequential relationship between sentence sentiments and that the document sentiment is influenced by all sentences (and vice versa). A first-order Markov property is assumed, according to which each sentence variable <math>y_i^s</math> is independent of all other variables, conditioned on the document variable <math>y^d</math> and its adjacent sentence variables, <math>y_{i-1}^s</math> and <math>y_{i+1}^s</math>.

The graphical model for this formulation is shown in the figure below:

[[File:hcrf_1.jpg]]

In the figure above, a graphical model with latent sentence-level states is shown. Dark grey nodes are observed variables and white nodes are unobserved; light grey nodes are observed only at training time. Dashed and dotted regions indicate the maximal cliques at position <math>i</math>.

In the HCRF model above, the conditional probability of the observed variables is obtained by marginalizing over the posited hidden variables:

<math>p_{\theta}(y^d \mid s) = \sum_{\mathbf{y}^s} p_{\theta}(y^d, \mathbf{y}^s \mid s)</math>

As indicated in the figure above, there are two maximal cliques at each position <math>i</math>: one involving only the sentence <math>s_i</math> and its corresponding latent variable <math>y_i^s</math>, and one involving the consecutive latent variables <math>y_{i-1}^s</math> and <math>y_i^s</math> together with the document variable <math>y^d</math>.

The assignment of the document variable <math>y^d</math> is thus independent of the input <math>s</math>, conditioned on the sequence of latent sentence variables <math>\mathbf{y}^s</math>. This distinction is important for learning predictive latent variables, as it creates a bottleneck between the input sentences and the document label.
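To make the factorization concrete, the sketch below spells out the two clique potentials and the marginalization over <math>\mathbf{y}^s</math> at toy scale. It is a minimal illustration under assumed conventions: the feature templates, the dictionary-of-weights representation of <math>\theta</math>, and the brute-force enumeration of label sequences are ours for readability, not the authors' implementation, which would sum with the forward algorithm.

<pre>
import itertools
import math

LABELS = ["POS", "NEG", "NEU"]  # values taken by y^d and each y_i^s

def joint_score(y_doc, y_sents, sents, theta):
    """Unnormalized log-score of (y^d, y^s) for one document.

    Mirrors the two clique types: an emission clique (s_i, y_i^s) and a
    transition clique (y_{i-1}^s, y_i^s, y^d). `theta` maps feature tuples
    to weights; these feature templates are illustrative placeholders.
    """
    score, prev = 0.0, "<START>"
    for s_i, y_i in zip(sents, y_sents):
        score += theta.get(("emit", y_i, s_i), 0.0)           # clique (s_i, y_i^s)
        score += theta.get(("trans", prev, y_i, y_doc), 0.0)  # clique (y_{i-1}^s, y_i^s, y^d)
        prev = y_i
    return score

def doc_conditional(y_doc, sents, theta):
    """p(y^d | s): marginalize out the latent sentence labels y^s.

    Brute force, exponential in len(sents); fine for a toy example.
    """
    def z(doc_label):
        return sum(math.exp(joint_score(doc_label, ys, sents, theta))
                   for ys in itertools.product(LABELS, repeat=len(sents)))
    return z(y_doc) / sum(z(d) for d in LABELS)
</pre>

With an empty weight dictionary, for instance, doc_conditional("POS", ["great sound", "poor battery"], {}) returns the uniform 1/3, since every assignment then scores zero.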

It was observed that, when training HCRFs, hard estimation gave slightly better performance than MAP estimation of the parameters with respect to the marginal conditional log-likelihood of the observed variables, assuming a normal prior distribution.
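In code terms, the hard-estimation step imputes the latent sentence labels as the single best sequence under the current model, with the document label clamped to its observed value. A brute-force sketch, reusing joint_score and LABELS from the snippet above (a real implementation would instead run Viterbi constrained to the gold document label):

<pre>
import itertools

def hard_assign(y_doc_gold, sents, theta):
    """Impute y^s as the argmax sequence with y^d clamped to the
    observed document label (brute force, toy scale only)."""
    return max(itertools.product(LABELS, repeat=len(sents)),
               key=lambda ys: joint_score(y_doc_gold, ys, sents, theta))
</pre>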

Assuming <math>\mathcal{D} = \{(s_j, y_j^d)\}_{j=1}^{|\mathcal{D}|}</math> as the training set of document/document-label pairs, the parameter vector <math>\theta</math> is estimated in the following manner for HCRFs:

[[File:hcrf_obj_function.jpg]]

The parameter <math>\theta</math> in the objective above is estimated using the stochastic gradient descent algorithm, run for 75 iterations with a fixed step size <math>\eta</math>.
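A minimal sketch of that training loop follows, under stated assumptions: the step-size value eta=0.001 and the grad_fn interface are placeholders of ours, with grad_fn(doc, theta) standing in for the per-document gradient of the negative (hard) conditional log-likelihood, which the sketch does not derive.

<pre>
import random

def sgd_train(docs, theta, grad_fn, iters=75, eta=0.001):
    """Stochastic gradient descent with a fixed step size, run for 75
    iterations as described above. grad_fn(doc, theta) must return a dict
    mapping feature tuples to components of the per-document loss gradient;
    eta=0.001 is an assumed value, not one reported above.
    """
    for _ in range(iters):
        random.shuffle(docs)  # one randomized pass over the data per iteration
        for doc in docs:
            for feat, g in grad_fn(doc, theta).items():
                theta[feat] = theta.get(feat, 0.0) - eta * g  # descent step
    return theta
</pre>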

The Viterbi algorithm is then used to predict the optimal joint assignment of <math>(y^d, \mathbf{y}^s)</math>, in the same manner as inference in conditional random fields.
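Since fixing the document label reduces the model to a linear chain, the joint decode can be sketched as one chain-Viterbi pass per candidate document label, keeping the best of the three. The snippet reuses the toy feature keys introduced earlier and is, again, an assumed illustration rather than the paper's implementation.

<pre>
LABELS = ["POS", "NEG", "NEU"]  # same label set as the earlier sketches

def viterbi_decode(sents, theta):
    """Best joint assignment (y^d, y^s) for a non-empty list of sentences."""
    best_score, best_doc, best_sents = float("-inf"), None, None
    for y_doc in LABELS:
        # delta[y]: best score of a prefix whose last sentence label is y
        delta = {y: theta.get(("emit", y, sents[0]), 0.0)
                    + theta.get(("trans", "<START>", y, y_doc), 0.0)
                 for y in LABELS}
        backptrs = []
        for s_i in sents[1:]:
            new_delta, ptr = {}, {}
            for y in LABELS:
                prev = max(LABELS, key=lambda p: delta[p]
                           + theta.get(("trans", p, y, y_doc), 0.0))
                new_delta[y] = (delta[prev]
                                + theta.get(("trans", prev, y, y_doc), 0.0)
                                + theta.get(("emit", y, s_i), 0.0))
                ptr[y] = prev
            delta, backptrs = new_delta, backptrs + [ptr]
        y_last = max(LABELS, key=delta.get)
        y_sents = [y_last]
        for ptr in reversed(backptrs):  # follow back-pointers to recover y^s
            y_sents.append(ptr[y_sents[-1]])
        y_sents.reverse()
        if delta[y_last] > best_score:
            best_score, best_doc, best_sents = delta[y_last], y_doc, y_sents
    return best_doc, best_sents
</pre>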


Experiments and Results

The authors constructed a large balanced corpus of consumer reviews from a range of domains.

Dataset

  • A training set was created by sampling a total of 143,580 positive, negative, and neutral reviews from five different domains: books, DVDs, electronics, music, and video games.
  • Document sentiment labels were obtained by labeling one- and two-star reviews as negative (NEG), three-star reviews as neutral (NEU), and four- and five-star reviews as positive (POS); this mapping is sketched in code after the list.
  • The total number of sentences is about 1.5 million.
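The star-to-label mapping above is simple enough to state directly in code; a minimal sketch (the function name and signature are our own):

<pre>
def star_to_label(stars: int) -> str:
    """Map a review's star rating to its document sentiment label,
    following the scheme described in the list above."""
    if stars <= 2:
        return "NEG"   # one- and two-star reviews
    if stars == 3:
        return "NEU"   # three-star reviews
    return "POS"       # four- and five-star reviews
</pre>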

[[File:hcrf_results1.jpg]]

Tables 1 and 2 above show the distribution of sentence labels per category and the distribution of labels in the documents, respectively.

Results

The authors compare the performance of their approach against the vote-flip algorithm (VoteFlip), which uses a polarity lexicon (Wilson et al., 2005), and against a state-of-the-art statistical baseline, Document as Sentence (DaS), which trains a document classifier on the coarse-labeled training data but applies it to sentences independently at test time.

[[File:hcrf_results2.jpg]]

Table 3 shows results for each model in terms of sentence- and document-level accuracy, as well as <math>F_1</math>-scores for each sentence sentiment category. In terms of sentence-level accuracy, the HCRF performs significantly better than the baselines when a sufficiently large training set is used; adding more training data also improves the document-level accuracy of the HCRF model.

Related Papers

[1] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. 2007. Hidden conditional random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence.

[2] R. McDonald, K. Hannan, T. Neylon, M. Wells, and J. Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Proc. ACL-2007.

[3] T. Wilson, J. Wiebe, and P. Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proc. EMNLP-2005.