Predicting web searcher satisfaction with existing community-based answers

This is a paper reviewed for Social Media Analysis 10-802 in Fall 2012.

Citation

@inproceedings{DBLP:conf/sigir/LiuADGMPS11,
  author    = {Qiaoling Liu and
               Eugene Agichtein and
               Gideon Dror and
               Evgeniy Gabrilovich and
               Yoelle Maarek and
               Dan Pelleg and
               Idan Szpektor},
  title     = {Predicting web searcher satisfaction with existing community-based
               answers},
  booktitle = {SIGIR},
  year      = {2011},
  pages     = {415--424},
  ee        = {http://doi.acm.org/10.1145/2009916.2009974},
  crossref  = {DBLP:conf/sigir/2011},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}

Online Version

Predicting web searcher satisfaction with existing community-based answers

Summary

The paper addresses the novel problem of predicting whether existing answers on Community-based Question Answering (CQA) sites will satisfy an external web searcher, rather than the original asker within the community. The work decomposes searcher satisfaction into three major subtasks:

1. Query clarity task - whether a query is unambiguous enough to be interpreted as a question.

2. Query-question match task - measures the similarity between the query and a question.

3. Answer quality task - assessing how well the answer satisfies the question within the CQA site, which indirectly relates to whether the searcher's query is satisfied.

The paper approaches the problem by building a logistic regression model over features from all three subtasks. Evaluation is performed using human-labeled data collected via crowdsourcing.

Methodology

Features

The features used for building the regression model are divided according to the three subtasks mentioned above.

  • Query clarity features (9 total)
    • # of characters in the query.
    • # of words in the query.
    • # of clicks following the query.
    • Overall click entropy of the query.
    • User click entropy of the query.
    • Query clarity score.
    • WH-type of the query - what, why, when, where, which, how, is, are, do.
  • Query-question match features (23 total)
    • Match score between the query and the question title/body/answers using similarity metrics.
    • Jaccard/Dice/Tanimoto coefficient between the query and the question title.
    • Ratio between the number of characters/words in the query to that in the question structure.
    • # of clicks on the question following this query.
    • # of users who clicked the question following this/any query.
  • Answer quality features (37 total)
    • # of characters/words in the answer.
    • # of unique words in the answer.
    • # of answers received by the asker in the past.

For a full list of features, please refer to the paper.
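To make a few of these features concrete, below is a minimal Python sketch of the Jaccard/Dice coefficients and a click-entropy feature. The whitespace tokenizer and the log base are illustrative assumptions; the paper does not specify its implementation.

    import math

    def tokens(text):
        # Naive whitespace tokenization; an assumption, since the paper's
        # tokenizer is unspecified.
        return set(text.lower().split())

    def jaccard(query, title):
        q, t = tokens(query), tokens(title)
        return len(q & t) / len(q | t) if q | t else 0.0

    def dice(query, title):
        q, t = tokens(query), tokens(title)
        return 2 * len(q & t) / (len(q) + len(t)) if q or t else 0.0

    def click_entropy(click_counts):
        # click_counts: clicks each distinct result received for one query.
        # Higher entropy suggests a more ambiguous, less clear query.
        total = sum(click_counts)
        return -sum((c / total) * math.log2(c / total)
                    for c in click_counts if c > 0)

For example, jaccard("fix flat bike tire", "How do I fix a flat tire on my bike") returns the overlap ratio of the two token sets.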

Models

  • Direct Logistic Regression

All the features listed above are fed into a single logistic regression model.
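A minimal sketch of the direct model, assuming scikit-learn; the feature matrix and labels below are random placeholders standing in for the real 9 + 23 + 37 = 69 features and the crowdsourced satisfaction labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder data: 614 clicked questions, 69 features each.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(614, 69))     # stand-in for the real feature matrix
    y = rng.integers(0, 2, size=614)   # stand-in binary satisfaction labels

    direct_model = LogisticRegression(max_iter=1000).fit(X, y)
    satisfaction_prob = direct_model.predict_proba(X)[:, 1]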

  • Composite Logistic Regression

A separate model is trained for each of the three subtasks mentioned above. The outputs of these models are then combined to produce the final satisfaction prediction.
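A hedged sketch of one plausible composition, assuming each subtask model sees only its own feature slice and the three predicted probabilities are fused by a second-stage logistic regression; the paper defines the exact combination, so this is illustrative only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Column slices per subtask: 9 clarity, 23 match, 37 quality features.
    SUBTASKS = {"clarity": slice(0, 9),
                "match": slice(9, 32),
                "quality": slice(32, 69)}

    def fit_composite(X, y):
        # Stage 1: one logistic regression per subtask on its own features.
        stage1 = {name: LogisticRegression(max_iter=1000).fit(X[:, cols], y)
                  for name, cols in SUBTASKS.items()}
        # Stage 2: fuse the three subtask probabilities into a final prediction.
        meta = np.column_stack([stage1[n].predict_proba(X[:, SUBTASKS[n]])[:, 1]
                                for n in SUBTASKS])
        combiner = LogisticRegression().fit(meta, y)
        return stage1, combiner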

Dataset

A click dataset covering Aug 24, 2010 to Aug 30, 2010 is collected, consisting of clicks from the Google search engine to Yahoo! Answers question pages following a user query. The dataset contains more than 37M clicks on 6M questions by 20M users following 20M queries. It is sampled down to a smaller dataset of 614 clicked questions following 457 queries, each issued by at least 2 users.
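A sketch of the sampling criterion, under the assumption that the click log is available as (user_id, query, question_id) records; the record layout is hypothetical.

    from collections import defaultdict

    def sample_dataset(click_log):
        # click_log: iterable of (user_id, query, question_id) click records.
        click_log = list(click_log)
        users_per_query = defaultdict(set)
        for user_id, query, _question_id in click_log:
            users_per_query[query].add(user_id)
        # Keep only clicks following queries issued by at least 2 distinct users.
        return [(u, q, qid) for u, q, qid in click_log
                if len(users_per_query[q]) >= 2]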

Evaluation

Evaluation using cross-validation

The sentiment classification is evaluated using 10-fold cross-validation over the training set. The performance of the algorithm was tested under different feature settings.

- Multi-class classification: there are 51 hashtag-based and 16 smiley-based labels. The evaluation metric is the average F-score over 10-fold cross-validation; the F-score for the random baseline is 0.02. The result is shown in the following table.

[Table: multi-class classification results (Multi.png)]

The result is significantly better than the random baseline.
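The cross-validation protocol itself is straightforward; below is a minimal sketch assuming scikit-learn and a macro-averaged F-score, with random placeholders in place of the real data, as in the modeling sketches above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(614, 69))     # placeholder feature matrix
    y = rng.integers(0, 2, size=614)   # placeholder labels

    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=10, scoring="f1_macro")
    print(f"average F-score over 10 folds: {scores.mean():.3f}")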

- Binary classification: the labels are 1 if the sentence contains a particular label or 0 if the sentence does not bear any sentiment. Binary classification is performed for each of the 50 hashtag-based and 15 smiley-based labels. The result is shown in the following table.

[Table: binary classification results (Bin.png)]

The results show that binary classification performs better than multi-class classification, with high precision.

Evaluation with human judges

The Amazon Mechanical Turk (AMT) service was used to evaluate the performance of the classifier on test data. The evaluation was considered correct if one of the tags selected by a human judge for a sentence was among the 5 tags predicted by the algorithm. The correlation score for this task was .
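This correctness criterion reduces to a set-intersection test; a minimal sketch, assuming the predictions arrive as a ranked tag list:

    def is_correct(predicted_tags, human_tags):
        # Correct if any tag a human judge selected for the sentence
        # appears among the algorithm's 5 highest-ranked tags.
        return bool(set(predicted_tags[:5]) & set(human_tags))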

Observations

  • This work presents a supervised classification framework which utilizes Twitter hashtags and smileys as proxy labels for different sentiment types. This avoids the need for labor-intensive manual annotation and allows identification and classification of diverse sentiment types in short texts.
  • Binary classification of sentiments yields better results than multi-class classification.
  • Punctuation, word, and pattern features contribute more to classification performance, while the n-gram features provide only a small marginal boost. Pattern features alone perform better than the combined effect of all the remaining features.
  • The work includes a preliminary exploration of inter-sentiment overlap and dependency using two simple techniques: tag co-occurrence and feature overlap.
  • In addition to the features used in the algorithm, features representing short-term and long-term distances in the tweets could also be added.
  • The evaluation could also be performed on blog data rather than tweets to validate the use of the sentiment labels in other types of text documents.

Study Plan

Related Work

Similar work on extracting sentiment types from blogs was carried out by McDonald et al. (2007).