Ritter et al NAACL 2010. Unsupervised Modeling of Twitter Conversations

Citation

Alan Ritter, Colin Cherry, and Bill Dolan. Unsupervised Modeling of Twitter Conversations. In Proceedings of NAACL 2010.

Online Version

Unsupervised Modeling of Twitter Conversations.

Summary

This paper describes a topic-model-based approach to modeling dialogue acts. Whereas previous work has often required the manual construction of a dialogue act inventory, this paper proposes a series of unsupervised conversation models in which discovering acts amounts to clustering utterances with similar conversational roles. Specifically, the authors address this task using conversations on Twitter.

Brief description of the method

The authors propose two models for discovering dialogue acts in an unsupervised manner.

Conversation Model

The base model, the Conversation model, is inspired by the content model proposed by Barzilay and Lee (2004) for multi-document summarization.

[Figure: Ritter-naacl2010-cmodel.png, plate diagram of the Conversation model]

Here, each conversation is a sequence of dialogue acts, and each act produces a post, represented as a bag of words (the plates in the diagram). The assumption is that each post in a Twitter conversation is generated by a single act.
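The generative story can be sketched as follows. This is a minimal illustration rather than the authors' code; the number of acts, vocabulary size, post length, and variable names are all assumptions.

 import numpy as np
 
 rng = np.random.default_rng(0)
 n_acts, vocab_size = 10, 5000                                      # assumed sizes, not from the paper
 start_probs = rng.dirichlet(np.ones(n_acts))                       # distribution over the initial act
 trans_probs = rng.dirichlet(np.ones(n_acts), size=n_acts)          # act-to-act transition matrix
 act_word_probs = rng.dirichlet(np.ones(vocab_size), size=n_acts)   # one word multinomial per act
 
 def generate_conversation(n_posts, post_len=8):
     """Sample a conversation: a Markov chain of dialogue acts, each emitting a bag of words."""
     act = rng.choice(n_acts, p=start_probs)
     posts = []
     for _ in range(n_posts):
         words = rng.choice(vocab_size, size=post_len, p=act_word_probs[act])
         posts.append((act, words.tolist()))
         act = rng.choice(n_acts, p=trans_probs[act])                # transition to the next act
     return posts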

Conversation + Topic Model

Since Twitter conversations are not restricted to any particular topic, the Conversation Model tends to discover a mixture of dialogue and topic structure. To address this weakness, the authors propose an extended Conversation + Topic model.

[Figure: Ritter-naacl2010-ctmodel.png, plate diagram of the Conversation + Topic model]

In this model, each word in a conversation is generated from one of three sources:

  1. The current post's dialogue act
  2. The conversation's topic
  3. General English

The model includes a conversation-specific word multinomial that represents the topic, as well as a universal general-English multinomial. A new hidden variable determines the source of each word and is drawn from a conversation-specific distribution over sources.
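The word-level generative step can be sketched as below. This is illustrative only; the names (act_word_probs, topic_word_probs, english_word_probs, source_probs) are assumptions, not notation from the paper.

 import numpy as np
 
 rng = np.random.default_rng(0)
 n_acts, vocab_size = 10, 5000                                       # assumed sizes
 act_word_probs = rng.dirichlet(np.ones(vocab_size), size=n_acts)    # per-act word multinomials
 topic_word_probs = rng.dirichlet(np.ones(vocab_size))               # conversation-specific topic words
 english_word_probs = rng.dirichlet(np.ones(vocab_size))             # universal "general English" words
 source_probs = rng.dirichlet(np.ones(3))                            # this conversation's mix of sources
 
 def generate_word(act):
     """Pick a source (act, topic, or general English), then draw a word from that multinomial."""
     source = rng.choice(3, p=source_probs)
     dist = (act_word_probs[act], topic_word_probs, english_word_probs)[source]
     return source, rng.choice(vocab_size, p=dist)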

The authors also propose a Bayesian version of the conversation model.

Experimental Results

Data: The dataset consists of about 1.3 million Twitter conversations collected over a two-month period in the summer of 2009, with each conversation containing between 2 and 243 posts. The dataset was formerly available at http://homes.cs.washington.edu/~aritter/twitter_chat/ (it was taken down at Twitter's request).

The authors evaluate the models with a qualitative visualization and an intrinsic conversation ordering task.

Qualitative Evaluation (Visualization)

The authors provide a visualization of the matrix of transition probabilities between dialogue acts:

[Figure: Ritter-naacl2010-transitions.png, transition probabilities between dialogue acts]

This transition diagram matches our intuition about what comprises a Twitter conversation. A conversation is initiated by one of:

  1. a Status act, where a user broadcasts information about what they are doing.
  2. a Reference Broadcast act, where a user broadcasts an interesting link or quote to their followers.
  3. a Question to Followers act, where a user asks their followers a question.

Word lists summarizing the discovered dialogue acts are shown below:

[Figure: Ritter-naacl2010-wordlist.png, word lists for the discovered dialogue acts]


Quantitative Evaluation

The authors propose the following evaluation scheme: for each conversation in the test set, generate all permutations of its posts. The trained model then calculates the probability of each permutation. Finally, Kendall's τ is used to measure the similarity of the maximum-probability permutation to the original order.
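A sketch of this procedure is given below; conversation_log_prob is a hypothetical stand-in for the trained model's scoring function, and the rest of the code is likewise an assumption, not the authors' implementation.

 from itertools import permutations
 
 def kendall_tau(pred_order):
     """Kendall's tau between a predicted ordering and the original order 0, 1, ..., n-1."""
     n = len(pred_order)
     pairs = n * (n - 1) // 2
     concordant = sum(1 for i in range(n) for j in range(i + 1, n)
                      if pred_order[i] < pred_order[j])
     return (2 * concordant - pairs) / pairs
 
 def evaluate_conversation(posts, conversation_log_prob):
     """Score every permutation of a conversation's posts and compare the best one to the truth."""
     best = max(permutations(range(len(posts))),
                key=lambda order: conversation_log_prob([posts[i] for i in order]))
     return kendall_tau(best)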

[Figure: Ritter-naacl2010-eval.png, conversation-ordering results]

On this ordering task, the Bayesian Conversation model generally outperforms the Conversation+Topic model.


Discussion

Related Papers

The conversation model is inspired by Barzilay and Lee (2004).

Study Plan