Lin and He Joint Sentiment Topic Model for Sentiment Analysis


Citation

author = {Lin, Chenghua and He, Yulan},
title = {Joint sentiment/topic model for sentiment analysis},
booktitle = {Proceedings of the 18th ACM conference on Information and knowledge management},
series = {CIKM '09},
year = {2009},
isbn = {978-1-60558-512-3},
location = {Hong Kong, China},
pages = {375--384},
numpages = {10},
url = {http://doi.acm.org/10.1145/1645953.1646003},
doi = {10.1145/1645953.1646003},
acmid = {1646003},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {joint sentiment/topic model, latent dirichlet allocation, opinion mining, sentiment analysis}

Online Version

Joint Sentiment/Topic Model for Sentiment Analysis

Summary

This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA) which detects sentiment and topics simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed model is fully unsupervised.

In the Joint Sentiment Topic (JST) model, each document is associated with S topic-document distributions, one for each of the S sentiment labels, each defined over the same number of topics T. A word is then generated by first choosing a sentiment label, next choosing a topic conditioned on that sentiment label, and finally drawing the word from the distribution over words defined by that topic and sentiment label. The full generative process is given below, followed by a code sketch.

 1. For each document d, choose a distribution π_d ~ Dir(γ).
 2. For each sentiment label l under document d, choose a distribution θ_{d,l} ~ Dir(α).
 3. For each word w_i in document d:
   a. Choose a sentiment label l_i ~ π_d.
   b. Choose a topic z_i ~ θ_{d,l_i}.
   c. Choose a word w_i from the distribution over words φ_{l_i,z_i} defined by the topic z_i and sentiment label l_i, where φ_{l,z} ~ Dir(β).
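
The following is a minimal Python/NumPy sketch of this generative story, assuming symmetric Dirichlet priors; the dimensions and hyperparameter values (D, S, T, V, gamma, alpha, beta) are toy illustrative choices, not values from the paper.

 import numpy as np
 
 rng = np.random.default_rng(0)
 
 # Toy dimensions (illustrative assumptions, not from the paper)
 D, S, T, V = 5, 3, 4, 100            # documents, sentiment labels, topics, vocabulary size
 doc_lengths = [50] * D
 gamma, alpha, beta = 1.0, 1.0, 0.01  # symmetric Dirichlet hyperparameters
 
 # Corpus-level word distributions: one per (sentiment label, topic) pair
 phi = rng.dirichlet([beta] * V, size=(S, T))        # shape (S, T, V)
 
 corpus = []
 for d in range(D):
     pi_d = rng.dirichlet([gamma] * S)               # 1. per-document sentiment distribution
     theta_d = rng.dirichlet([alpha] * T, size=S)    # 2. one topic distribution per sentiment label
     words = []
     for _ in range(doc_lengths[d]):
         l = rng.choice(S, p=pi_d)                   # 3a. sentiment label
         z = rng.choice(T, p=theta_d[l])             # 3b. topic given sentiment label
         w = rng.choice(V, p=phi[l, z])              # 3c. word given (sentiment label, topic)
         words.append(w)
     corpus.append(words)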

Inference

A Gibbs sampling algorithm is provided for estimating the posterior distribution of the latent variables (sentiment labels and topics) given a document.
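
Below is a rough Python/NumPy sketch of one sweep of a collapsed Gibbs sampler for JST, using the standard form of the conditional (word likelihood × topic-given-sentiment × sentiment-given-document). The count arrays, their names, and the assignment bookkeeping are assumptions made for illustration, and count initialization is omitted; this is not the authors' implementation.

 import numpy as np
 
 def gibbs_sweep(corpus, l_assign, z_assign, counts, S, T, V, alpha, beta, gamma, rng):
     """One sweep of collapsed Gibbs sampling over all word tokens.
 
     counts = (n_wlz, n_lz, n_dlz, n_dl, n_d) with assumed shapes
     (V,S,T), (S,T), (D,S,T), (D,S), (D,); l_assign/z_assign hold the
     current per-token sentiment and topic assignments (lists of lists).
     """
     n_wlz, n_lz, n_dlz, n_dl, n_d = counts
     for d, words in enumerate(corpus):
         for i, w in enumerate(words):
             l, z = l_assign[d][i], z_assign[d][i]
             # Remove the token's current assignment from the counts
             n_wlz[w, l, z] -= 1; n_lz[l, z] -= 1
             n_dlz[d, l, z] -= 1; n_dl[d, l] -= 1; n_d[d] -= 1
             # Conditional over all (sentiment, topic) pairs, shape (S, T)
             p = ((n_wlz[w] + beta) / (n_lz + V * beta)
                  * (n_dlz[d] + alpha) / (n_dl[d][:, None] + T * alpha)
                  * ((n_dl[d] + gamma) / (n_d[d] + S * gamma))[:, None])
             p = p.ravel() / p.sum()
             idx = rng.choice(S * T, p=p)
             l, z = divmod(idx, T)
             # Add the new assignment back into the counts
             n_wlz[w, l, z] += 1; n_lz[l, z] += 1
             n_dlz[d, l, z] += 1; n_dl[d, l] += 1; n_d[d] += 1
             l_assign[d][i], z_assign[d][i] = l, z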

Tying-JST model

Under the JST model a separate topic-document distribution has to be chosen for every document, whereas in tying-JST there is only one topic-document distribution that accounts for all the documents in the corpus.
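
As a shape-level illustration of this difference (the array names and sizes below are assumptions for illustration, not the authors' implementation):

 import numpy as np
 
 D, S, T = 5, 3, 4                  # illustrative sizes
 theta_jst = np.zeros((D, S, T))    # JST: one topic distribution per document and sentiment label
 theta_tying = np.zeros((S, T))     # tying-JST: a single topic distribution shared by all documents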

Experiments

The authors evaluate the performance of the JST model on a corpus of preprocessed movie reviews.

Study Plan