Difference between revisions of "DmitryDavidov et al. CoNLL"

 
== Summary ==
 
It's '''not a self-contained''' paper: it '''depends heavily on another paper'''. It's '''not a creative work''', and I strongly suggest '''not recommending it''' to future students.
  
This paper addresses the [[AddressesProblem::Sarcasm Detection]] problem in Twitter and Amazon review posts. The authors propose [[UsesMethod::semi-supervised learning]] methods to automatically generate patterns, which are then fed to a machine learning algorithm, [[UsesMethod::k-Nearest Neighbor]], to detect sarcasm. However, the paper gives no idea of how the unlabeled text was used, and provides no explanation of the classification algorithm.
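Since the paper's own description of the classification step is thin, here is how I understand the pipeline: sentences are mapped to pattern-match feature vectors, and an unseen sentence is scored by a k-nearest-neighbor vote over labeled vectors. A minimal sketch (the patterns, sentences, and function names below are my own illustrative choices, not taken from the paper, which induces its patterns semi-automatically):

```python
import re

# Hypothetical surface patterns; the paper induces its patterns
# semi-automatically from high-frequency words and content slots.
PATTERNS = [r"yay[!.]", r"great[,.!]", r"thank you so much", r"best .* ever"]

def features(sentence):
    """Map a sentence to a binary pattern-match feature vector."""
    s = sentence.lower()
    return [1.0 if re.search(p, s) else 0.0 for p in PATTERNS]

def knn_score(vec, labeled, k=3):
    """Average the labels of the k nearest labeled vectors
    (Euclidean distance); 1 = sarcastic, 0 = literal."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    nearest = sorted(labeled, key=lambda fy: dist(vec, fy[0]))[:k]
    return sum(y for _, y in nearest) / k

# Tiny labeled seed set (toy examples).
seeds = [
    (features("great, thank you so much (not)"), 1),
    (features("great, another delay."), 1),
    (features("the battery lasts two days"), 0),
    (features("shipping was fast"), 0),
]
score = knn_score(features("great, thank you so much for nothing"), seeds)
```

A score above 0.5 would be read as sarcastic; where the unlabeled text enters this pipeline is exactly what the paper leaves unclear.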
They experimented on two datasets: [[UsesDataset::Twitter Dataset for Sarcasm|Twitter Dataset]] and [[UsesDataset::Amazon Dataset for Sarcasm|Amazon Dataset]].
  
 
== Evaluation ==
 
  
The paper proposes several feature extraction methods and a data enrichment method; the evaluation mainly compares the performance of these methods.
Moreover, the authors used two settings to test robustness: traditional in-domain cross-validation and a cross-domain test. Promising results are reported in both settings.
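Concretely, the two settings amount to the following generic evaluation protocols (a sketch under my own naming; `train_eval` stands in for any train-then-score routine and is not a function from the paper):

```python
def cross_validate(data, train_eval, k=5):
    """In-domain setting: k-fold cross-validation within one corpus."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        scores.append(train_eval(train, held_out))
    return sum(scores) / k

def cross_domain(source, target, train_eval):
    """Cross-domain setting: train on one corpus (e.g. Amazon reviews),
    evaluate entirely on the other (e.g. Twitter posts)."""
    return train_eval(source, target)
```

The cross-domain setting is the stricter robustness test, since nothing from the target corpus is seen at training time.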
  
 
== Discussion ==
 
First of all, I have to say it's '''not a self-contained''' paper: it '''depends heavily on another paper''' and it's '''not a creative work'''. This paper did not change much from the [[Tsur_et_al_ICWSM_10|AAAI 2010 paper]]; all it did was change some small settings of that earlier work: the algorithm follows the [[Tsur_et_al_ICWSM_10|AAAI 2010 paper]], and the features follow the ACL 2006 paper [http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCYQFjAA&url=http%3A%2F%2Fleibniz.cs.huji.ac.il%2Ftr%2F884.pdf&ei=f4doUN7hOq-O0QGYv4CQCg&usg=AFQjCNHmMVwq0zPYDEhpaScToMm5iVNO0A&sig2=jp-5-01q-OzlAY3AbIhntQ]
  
The weak points of the paper include:
   1. It makes no significant change to the previous methods.
  2. It depends on another paper so heavily that the algorithm is not complete without that paper.
   3. It does not consider any baseline algorithms; for example, the method could be compared to other semi-supervised methods or to related sarcasm detection methods.
  
Strong point:
   1. It is among the first attempts to solve this problem using semi-supervised learning methods.
  
 
== Related papers ==
 
* Paper:Icwsm - a great catchy name: Semi-supervised recognition of sarcastic sentences in product reviews:[http://www.aaai.org/ocs/index.php/ICWSM/ICWSM10/paper/download/1495/1851]
* Paper:Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words:[http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCYQFjAA&url=http%3A%2F%2Fleibniz.cs.huji.ac.il%2Ftr%2F884.pdf&ei=f4doUN7hOq-O0QGYv4CQCg&usg=AFQjCNHmMVwq0zPYDEhpaScToMm5iVNO0A&sig2=jp-5-01q-OzlAY3AbIhntQ]
* Paper:Automatic satire detection: Are you having a laugh?:[http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCYQFjAA&url=http%3A%2F%2Fwww.aclweb.org%2Fanthology-new%2FP%2FP09%2FP09-2041.pdf&ei=J4hoUKLwDqjq0gHmooHAAw&usg=AFQjCNFcfaQBaoIczy8ACgzt3Mwkl71IvQ&sig2=9BVbppWWro_T8PoED1GBPg]
== Study plan ==
As a typical incremental work, the original works include:
* Paper:Icwsm - a great catchy name: Semi-supervised recognition of sarcastic sentences in product reviews:[http://www.aaai.org/ocs/index.php/ICWSM/ICWSM10/paper/download/1495/1851]
* Paper:Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words:[http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCYQFjAA&url=http%3A%2F%2Fleibniz.cs.huji.ac.il%2Ftr%2F884.pdf&ei=f4doUN7hOq-O0QGYv4CQCg&usg=AFQjCNHmMVwq0zPYDEhpaScToMm5iVNO0A&sig2=jp-5-01q-OzlAY3AbIhntQ]
  
And the classification algorithm used:
* Article: k-Nearest Neighbor:[[UsesMethod::k-Nearest Neighbor]]
 

Latest revision as of 14:34, 2 October 2012

== Citation ==

Semi-supervised recognition of sarcastic sentences in twitter and amazon, Dmitry Davidov, Oren Tsur and Ari Rappoport, CoNLL 2010

== Online version ==

Semi-supervised recognition of sarcastic sentences in twitter and amazon
