Cross-Lingual Mixture Model for Sentiment Classification, Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, Houfeng Wang, ACL 2012

Citation

Cross-Lingual Mixture Model for Sentiment Classification, Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, Houfeng Wang, ACL 2012

Online version

An online PDF version is available here [1]

Summary

This paper addresses sentiment classification for a target language (Chinese) with little or no labeled data, given labeled data in a source language (English). Rather than relying only on machine-translated labeled data, the proposed Cross-Lingual Mixture Model (CLMM) additionally exploits a large unlabeled bilingual parallel corpus: words in both languages are generated from shared (hidden) polarity classes, and the model parameters are fit with EM to maximize the likelihood of the parallel data. This allows CLMM to learn sentiment-bearing Chinese words that never appear in the translated labeled data, and the model can also incorporate labeled Chinese data when it is available.
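
As a rough illustration only (a simplified sketch with notation introduced here, not the paper's exact objective), the idea can be written as maximizing a mixture likelihood of the unlabeled parallel corpus U over hidden polarity classes:

\log P(U) = \sum_{(s,t) \in U} \log \sum_{c \in \{\mathrm{pos},\,\mathrm{neg}\}} P(c) \prod_{w \in s} P(w \mid c) \prod_{v \in t} P(v \mid c)

where s and t are aligned English and Chinese segments. Under such a sketch, the English distributions P(w | c) are informed by the labeled English data, while EM estimates the Chinese distributions P(v | c), which then act as the target-language sentiment model; the actual CLMM objective differs in its details (for example, in how word projections and labeled data enter the model).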

Evaluation

The authors evaluate CLMM's sentiment classification accuracy on [MPQA] and [NTCIR] in two main settings:

1) No labeled data in the target language (Chinese) is available; only English labeled data and the unlabeled parallel corpus are used.

2) Labeled target-language (Chinese) data is used in addition to the English labeled data.

Discussion

The authors provide an analysis (entropy estimates together with upper-bound numbers observed in the experiments) and suggest that exploiting the contextual information provided by the stimulus more effectively is an interesting direction for future work to further improve the response completion task.

Related papers

Ritter et al., 2010. Data-Driven Response Generation in Social Media.

Regina Barzilay and Mirella Lapata, 2005. Modeling Local Coherence: An Entity-Based Approach.

Study plan

Language Model: [2]

Machine Translation, IBM Model-1 [3] (a toy EM sketch is included after this list)

LDA [4]
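
The items above are bare pointers; as a concrete illustration of the word-alignment background behind IBM Model-1, here is a minimal, self-contained EM sketch in Python. It is a toy implementation for intuition only, not the paper's method or code; the function name ibm_model1 and the toy sentence pairs are invented for this example.

from collections import defaultdict

def ibm_model1(sentence_pairs, iterations=10):
    # Estimate translation probabilities t(f | e) from (e_tokens, f_tokens)
    # pairs with the standard IBM Model-1 EM procedure (no NULL word, for brevity).
    f_vocab = {f for _, f_toks in sentence_pairs for f in f_toks}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform initialisation

    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(e, f)
        total = defaultdict(float)  # expected counts c(e)

        # E-step: fractionally assign each target word to the source words.
        for e_toks, f_toks in sentence_pairs:
            for f in f_toks:
                norm = sum(t[(e, f)] for e in e_toks)
                for e in e_toks:
                    delta = t[(e, f)] / norm
                    count[(e, f)] += delta
                    total[e] += delta

        # M-step: re-normalise to obtain the new translation table.
        for (e, f), c in count.items():
            t[(e, f)] = c / total[e]

    return dict(t)

# Example: after a few iterations, t[("book", "buch")] dominates its competitors.
pairs = [("the house".split(), "das haus".split()),
         ("the book".split(), "das buch".split()),
         ("a book".split(), "ein buch".split())]
print(sorted(ibm_model1(pairs).items(), key=lambda kv: -kv[1])[:5])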


Data Set