Li et al IJCAI 11

This is a paper that appeared at the International Joint Conference on Artificial Intelligence (IJCAI) 2011.

Citation

Li, F., Liu, N., Jin, H., Zhao, K., Yang, Q., and Zhu, X. Incorporating reviewer and product information for review rating prediction. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), Volume Three, pages 1820–1825, 2011.

Online version

Incorporating reviewer and product information for review rating prediction

Summary

This paper uses tensor analysis to perform review rating classification in cases where a word in a review might carry a different sentiment depending on the particular reviewer. To that end, the authors model the problem as a three-dimensional tensor whose dimensions correspond to reviewers, products, and terms respectively, thus incorporating additional information about which specific reviewer wrote what into the traditional bag-of-words model.
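
To make the modeling concrete, here is a minimal sketch (not taken from the paper; the reviews, products, and vocabulary below are made up) of how such a reviewer-by-product-by-term count tensor could be assembled with NumPy:

    import numpy as np

    # Hypothetical toy data: (reviewer, product, review text) triples.
    reviews = [
        ("alice", "camera", "great lens but poor battery"),
        ("bob",   "camera", "poor autofocus"),
        ("alice", "phone",  "great screen"),
    ]

    reviewers = sorted({r for r, _, _ in reviews})
    products  = sorted({p for _, p, _ in reviews})
    vocab     = sorted({w for _, _, text in reviews for w in text.split()})

    r_idx = {r: i for i, r in enumerate(reviewers)}
    p_idx = {p: j for j, p in enumerate(products)}
    t_idx = {w: k for k, w in enumerate(vocab)}

    # Reviewer x product x term tensor of raw term counts.
    X = np.zeros((len(reviewers), len(products), len(vocab)))
    for reviewer, product, text in reviews:
        for word in text.split():
            X[r_idx[reviewer], p_idx[product], t_idx[word]] += 1

    print(X.shape)  # (2, 2, 7) for this toy data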

Another point to note is that the authors do not perform binary classification of the reviews, but instead rate each review on a scale from 1 to 5 (similar to what Amazon does, for example). To derive such a rating scheme, the authors use a Linear Regression function. Based on that function, they derive a decomposition of the reviewer-by-product-by-term tensor into three compact factor matrices, each corresponding to one of the dimensions. This decomposition resembles the PARAFAC decomposition, though it differs slightly in the optimization objective used.
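
For readers unfamiliar with PARAFAC, the following is a minimal NumPy sketch of a plain rank-R CP/PARAFAC fit via alternating least squares on a small dense toy tensor. It only illustrates what the "three compact factor matrices" look like; the paper's actual objective ties the factors to a regression loss over observed ratings rather than the plain squared reconstruction error minimized here, so this is not the authors' algorithm:

    import numpy as np

    def khatri_rao(B, C):
        """Column-wise Khatri-Rao product: row (j*K + k) equals B[j, :] * C[k, :]."""
        (J, R), (K, _) = B.shape, C.shape
        return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

    def cp_als(X, rank, n_iter=200, seed=0):
        """Fit a rank-`rank` CP/PARAFAC model to a 3-way tensor by alternating least squares."""
        I, J, K = X.shape
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        X0 = X.reshape(I, J * K)                      # mode-0 unfolding
        X1 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-1 unfolding
        X2 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-2 unfolding
        for _ in range(n_iter):
            A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    def reconstruct(A, B, C):
        """Rebuild the tensor: X_hat[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    # Toy check: an exactly rank-2 tensor is typically recovered almost perfectly.
    rng = np.random.default_rng(1)
    A0, B0, C0 = rng.random((5, 2)), rng.random((4, 2)), rng.random((6, 2))
    X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(X, rank=2)
    print(np.max(np.abs(X - reconstruct(A, B, C))))   # typically very close to 0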

After obtaining this low-rank tensor decomposition, the authors reconstruct the tensor to fill in its missing entries, thereby performing the review rating prediction.
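
Continuing the CP/PARAFAC toy sketch above, and again glossing over the paper's regression-based objective, filling a missing entry can be illustrated with a simple EM-style imputation loop (alternate between refitting the factors and replacing the missing cell with its low-rank estimate). This is only an illustration of the completion idea, not the authors' procedure:

    # Hypothetical completion step: treat one entry of the toy tensor X as missing.
    missing = (0, 1, 2)
    X_filled = X.copy()
    X_filled[missing] = X.mean()                  # crude initial guess for the missing cell
    for _ in range(10):
        A, B, C = cp_als(X_filled, rank=2, n_iter=50)
        X_filled[missing] = reconstruct(A, B, C)[missing]
    print(X[missing], X_filled[missing])          # true value vs. imputed estimate (typically close)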

Evaluation

The authors evaluate their proposed method along the following dimensions:

  • Classifier performance, where they assess the accuracy of their method in comparison to several baseline approaches (see below).
  • Product popularity, where they assess whether the method works better for popular or unpopular products.
  • Matrix density, where they assess the influence of the tensor's density on the quality of the results.

Datasets:

For evaluation, the authors conduct experiments on two different real-world datasets:

A description of the data is shown in the table below:

[Image: Li IJCAI11 datasets.png (dataset description table)]

Metrics: The authors use the following measures to evaluate the performance of their approach (a small sketch of both follows the figure below):

  • Mean Absolute Error (MAE)
  • Root Mean Squared Error (RMSE)

[Image: Li IJCAI11 metrics.jpg]
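
Both metrics are standard; a minimal NumPy sketch on hypothetical 1-5 ratings:

    import numpy as np

    def mae(y_true, y_pred):
        """Mean Absolute Error: average absolute difference between ratings."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return np.mean(np.abs(y_true - y_pred))

    def rmse(y_true, y_pred):
        """Root Mean Squared Error: square root of the average squared difference."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    # Hypothetical 1-5 ratings.
    true_ratings = [5, 3, 4, 1, 2]
    predicted    = [4, 3, 5, 2, 2]
    print(mae(true_ratings, predicted), rmse(true_ratings, predicted))  # -> 0.6 and ~0.775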

Baselines:

For comparison with current state-of-the-art approaches, the authors use the following baselines (a small sketch of the two simplest ones follows the list):

  • RANDOM: Assign a random rating to each review in the test set.
  • Majority: Assign the majority rating score of the training set to every review in the test set.
  • PSP (Positive Sentence Percentage) model, from Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2:1–135, January 2008.

The authors used two different learners to implement this baseline, namely Linear Regression (abbreviated as "Reg" in the results) and SVM classification.

  • A Matrix Factorization Collaborative Filtering approach introduced in Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42:30–37, August 2009.
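
As a quick illustration of the two trivial baselines (the PSP and collaborative filtering baselines are not reproduced here), a minimal sketch on hypothetical training and test ratings:

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)

    # Hypothetical 1-5 star ratings.
    train_ratings = [5, 4, 4, 3, 5, 4, 2, 4]
    test_ratings  = [4, 5, 3, 4]

    # RANDOM baseline: assign a uniformly random rating to each test review.
    random_preds = rng.integers(1, 6, size=len(test_ratings))

    # Majority baseline: assign the most frequent training rating to every test review.
    majority_rating = Counter(train_ratings).most_common(1)[0][0]
    majority_preds = [majority_rating] * len(test_ratings)

    print("random:", random_preds.tolist())
    print("majority:", majority_preds)   # [4, 4, 4, 4] here, since 4 is the training mode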


Results: The table below shows the accuracy results of the proposed algorithm compared to the baselines:

[Image: Li IJCAI11 accuracy.png (accuracy results compared to the baselines)]

On the aforementioned datasets, the proposed approach beats all of the baselines.

Related Papers

  • Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2:1–135, January 2008.
  • Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42:30–37, August 2009.

Recommendation for whether or not to assign the paper as required/optional reading in later classes.

Even though this paper is one of the few (if any) that use tensor analysis for sentiment classification, I believe it is not detailed enough, and too application-specific (tailored to the reviewer-product-term context), to be assigned as a "must-read" paper in the area.

Study Plan

This paper is fairly straightforward. One might need a basic understanding of what a tensor is, but overall the paper can be read without any special background, except perhaps for the two related papers listed above.

The paper also resembles work on matrix/tensor completion, e.g., for movie recommendation and the Netflix Prize, but no heavy background on that subject is required for reading this work either.