Bollen 2011 vs Mishne 2006

Papers Under Comparison

  • Bollen 2011: Modeling Public Mood and Emotion: Twitter Sentiment and Socio-Economic Phenomena
  • Mishne 2006: Capturing Global Mood Levels using Blog Posts

Comparative Analysis

Both papers seek to address a similar problem: global mood detection. This is differentiated from the usual task in sentiment analysis, which is to assign a sentiment to each individual document. In this problem, the documents in the corpus are grouped, usually by some temporal unit, and the mood is determined over each aggregate as a whole.

Mishne 2006 attempts to solve the problem in a blog-post setting using the LiveJournal dataset. Bollen 2011, on the other hand, seeks to solve a similar problem over microblogs, i.e. Twitter. Because Mishne 2006 is a full paper while Bollen 2011 is a poster, Mishne has a much stronger evaluation: the mood labels tagged by the authors of the blog posts are used as the gold standard, and a quantitative evaluation is performed. Bollen is limited to a qualitative glance at the results.

In terms of the methods used, Mishne uses a training corpus to identify discriminative terms and trains a linear regression model with additional non-text features such as the hour of the day. Bollen, on the other hand, uses a psychometric instrument called the Profile of Mood States (POMS), which has been used in psychology for many decades. The extended version of POMS they use comes with a set of adjectives already defined for each of the six mood dimensions, and the method is limited to counting occurrences of the matching terms in a given time slice. I believe the Mishne method may give better results, especially for Twitter, given that it is domain-independent and the language on Twitter differs considerably from standard written English. However, Mishne requires a gold-standard labelled training corpus, which is harder to obtain for Twitter, since Twitter has no explicit way to tag "moods" the way LiveJournal does. Hashtags or emoticons could be used here, but their quality may not compare to LiveJournal's mood boxes, which were explicitly created for mood tagging.
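To make the counting approach concrete, here is a minimal sketch of a POMS-style scorer for one time slice. The six dimension names follow the standard POMS factors, but the tiny adjective lists and the example tweets are invented for illustration and are not the actual POMS term sets.

 from collections import Counter
 
 # Toy stand-in for the POMS adjective lists: the real instrument assigns a
 # fixed set of adjectives to each of the six mood dimensions.
 MOOD_TERMS = {
     "tension":    {"tense", "nervous", "anxious"},
     "depression": {"sad", "unhappy", "hopeless"},
     "anger":      {"angry", "annoyed", "furious"},
     "vigour":     {"lively", "energetic", "active"},
     "fatigue":    {"tired", "exhausted", "weary"},
     "confusion":  {"confused", "muddled", "uncertain"},
 }
 
 def mood_vector(posts):
     """Count mood-term occurrences over one time slice of posts and
     return a length-normalized score for each mood dimension."""
     counts = Counter()
     total_tokens = 0
     for post in posts:
         tokens = post.lower().split()
         total_tokens += len(tokens)
         for mood, terms in MOOD_TERMS.items():
             counts[mood] += sum(1 for t in tokens if t in terms)
     # Normalize by slice length so slices of different sizes are comparable.
     return {mood: counts[mood] / max(total_tokens, 1) for mood in MOOD_TERMS}
 
 # One day's worth of (toy) tweets treated as a single time slice.
 print(mood_vector(["feeling tired and a bit sad today", "so energetic and lively!"]))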

Mishne and Bollen also treat moods slightly differently. Bollen, following the POMS method, treats the six moods of POMS as different dimensions and creates "mood vectors" from the per-mood scores of each time slice. Mishne, on the other hand, is not confined to a small set of moods: they instead train a linear model for each mood used in the LiveJournal dataset and evaluate how well each model predicts its mood for a given time series of documents. It could be said that Bollen provides a better summary overview, while Mishne attempts to identify more nuanced moods ("cheerful", "loved", "thoughtful", etc.).
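As a rough sketch of the per-mood regression setup, the snippet below fits one linear model per mood over term counts plus an hour-of-day feature, with each training example standing in for one time slice. The toy data, the feature set, and the use of scikit-learn's LinearRegression are illustrative assumptions rather than the paper's exact pipeline; in particular, no discriminative-term selection is done here.

 import numpy as np
 from sklearn.feature_extraction.text import CountVectorizer
 from sklearn.linear_model import LinearRegression
 
 # Toy data: each example stands in for one time slice (e.g. one hour of posts);
 # the target for a mood is its assumed aggregate intensity in that slice.
 slices = [
     {"text": "wonderful sunny morning great coffee", "hour": 9, "cheerful": 0.7, "tired": 0.1},
     {"text": "exams all week completely drained and sleepy", "hour": 23, "cheerful": 0.1, "tired": 0.8},
     {"text": "great dinner with friends fun evening", "hour": 20, "cheerful": 0.6, "tired": 0.2},
     {"text": "cannot keep my eyes open so sleepy", "hour": 2, "cheerful": 0.1, "tired": 0.9},
 ]
 
 vectorizer = CountVectorizer()
 term_features = vectorizer.fit_transform([s["text"] for s in slices]).toarray()
 hour_feature = np.array([[s["hour"]] for s in slices])
 X = np.hstack([term_features, hour_feature])  # term counts + non-text (hour-of-day) feature
 
 # Train one linear regression model per mood label.
 models = {mood: LinearRegression().fit(X, np.array([s[mood] for s in slices]))
           for mood in ("cheerful", "tired")}
 
 # Predict both moods for a new, unseen time slice.
 new = {"text": "sunny morning but so drained and sleepy", "hour": 8}
 x_new = np.hstack([vectorizer.transform([new["text"]]).toarray(), [[new["hour"]]]])
 for mood, model in models.items():
     print(mood, round(float(model.predict(x_new)[0]), 2))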

Question Answers

  1. How much time did you spend reading the (new, non-wikified) paper you summarized?
    • 1 hour
  2. How much time did you spend reading the old wikified paper?
    • 30 mins
  3. How much time did you spend reading the summary of the old paper?
    • 5 mins
  4. How much time did you spend reading background material?
    • None
  5. Was there a study plan for the old paper?
    • No
  6. Give us any additional feedback you might have about this assignment.
    • The summary that I was given for the "already-done" paper was not of the best quality; it was very scant on details. For example, it did not mention what the paper used as the gold standard and baseline for its evaluation. In my case, the summary was not much better than reading the abstract of the paper. Summarizing papers is definitely useful, I feel, but only if the summary has sufficient detail and, more importantly, an analysis of the paper's strengths and weaknesses.