Ritter et al NAACL 2010. Unsupervised Modeling of Twitter Conversations


Citation

Alan Ritter, Colin Cherry, and Bill Dolan. Unsupervised Modeling of Twitter Conversations. In Proc of NAACL 2010

Online Version

Unsupervised Modeling of Twitter Conversations.

Summary

This paper describes a topic-model-based approach to modeling dialogue acts. Whereas previous work has often required manually constructing a dialogue-act inventory, this paper proposes a series of unsupervised conversation models. In essence, each model clusters utterances that play similar conversational roles, and each cluster is interpreted as one type of dialogue act. The authors address this task using conversations collected from Twitter.

Brief description of the method

The authors propose two models to discover dialogue acts in an unsupervised manner.

Conversation Model

The base model, the Conversation model, is inspired by the content model that Barzilay and Lee (2004) proposed for text generation and summarization.

Ritter-naacl2010-cmodel.png

Here, each conversation is modeled as a Markov sequence of hidden dialogue acts, and each act generates a post, represented as a bag of words (indicated by the plates in the figure). The assumption is that each post in a Twitter conversation is generated by a single dialogue act.
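To make the generative story concrete, below is a minimal sketch of how such a model would sample a conversation (the sizes, the fixed post length, and the parameter names are illustrative assumptions, and the parameters are drawn at random here, whereas the model learns them from data):

  import numpy as np

  rng = np.random.default_rng(0)

  N_ACTS, VOCAB, POST_LEN = 10, 5000, 12  # illustrative sizes; post length fixed only for simplicity

  # Illustrative parameters; in the model these are estimated from the Twitter data.
  pi = rng.dirichlet(np.ones(N_ACTS))                    # distribution over the initial act
  trans = rng.dirichlet(np.ones(N_ACTS), size=N_ACTS)    # trans[a] = P(next act | current act a)
  act_word = rng.dirichlet(np.ones(VOCAB), size=N_ACTS)  # act_word[a] = P(word | act a)

  def generate_conversation(n_posts):
      # A conversation is a Markov chain of dialogue acts; each act emits one post as a bag of words.
      acts, posts = [], []
      a = rng.choice(N_ACTS, p=pi)
      for _ in range(n_posts):
          acts.append(a)
          posts.append(rng.choice(VOCAB, size=POST_LEN, p=act_word[a]))
          a = rng.choice(N_ACTS, p=trans[a])
      return acts, posts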

Conversation + Topic Model

Since Twitter conversations can be about essentially anything (i.e., of any topic), the Conversation model tends to discover a mixture of dialogue and topic structure, which is undesirable for the task at hand. For example, instead of discovering dialogue acts, the Conversation model tends to discover topics such as food, computers, and music, and then predicts that a post about food will transition to another post about food with high probability. To address this weakness, the authors propose an extended Conversation + Topic model, shown below:

Ritter-naacl2010-ctmodel.png

In this model, each word in a conversation is generated from one of three sources:

  1. The current post's dialogue act
  2. The conversation's topic
  3. General English

The model includes a conversation-specific word multinomial that represents the topic, as well as a universal general-English multinomial. A new hidden variable attached to each word determines which of the three sources generates it, and is drawn from a conversation-specific distribution over sources.
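A rough sketch of how a single word is generated under this extended model is given below (the variable names are illustrative, not the paper's notation; the topic multinomial and the distribution over sources are conversation-specific, while the general-English multinomial is shared across all conversations):

  import numpy as np

  rng = np.random.default_rng(1)

  N_ACTS, VOCAB = 10, 5000  # illustrative sizes

  # Conversation-specific pieces (drawn at random here rather than inferred):
  source_probs = rng.dirichlet(np.ones(3))    # P(source): 0 = dialogue act, 1 = topic, 2 = general English
  topic_word = rng.dirichlet(np.ones(VOCAB))  # this conversation's topic multinomial
  # Shared pieces:
  general_word = rng.dirichlet(np.ones(VOCAB))           # general-English multinomial
  act_word = rng.dirichlet(np.ones(VOCAB), size=N_ACTS)  # one word multinomial per dialogue act

  def generate_word(act):
      # First choose which source the word comes from, then draw it from that source's multinomial.
      source = rng.choice(3, p=source_probs)
      dist = (act_word[act], topic_word, general_word)[source]
      return rng.choice(VOCAB, p=dist)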

The authors also propose a Bayesian version of the Conversation model, which places Dirichlet priors over the model's multinomials and is estimated by Gibbs sampling rather than EM.

Experimental Results

Data: The dataset consists of about 1.3 million Twitter conversations collected over a two-month period in the summer of 2009, with each conversation containing between 2 and 243 posts. The dataset was formerly available at http://homes.cs.washington.edu/~aritter/twitter_chat/ (Twitter asked for it to be taken down).

The authors evaluate the models with a qualitative visualization and an intrinsic conversation ordering task.

Qualitative Evaluation (Visualization)

The authors provide a visualization of the matrix of transition probabilities between dialogue acts:

Ritter-naacl2010-transitions.png

This transition diagram matches our intuition of what comprises a Twitter conversation. A conversation is initiated by:

  1. a Status act where a user broadcasts information about what they are doing (Example: "it's 33C out and macbook air temp keeps 37C, I'm not able to work")
  2. a Reference Broadcast act where a user broadcasts an interesting link or quote to their followers (Example: "60 million parameters and 650,000 neurons woh. Neural Networks officially best at object recognition http://j.mp/SJ0GTG -- news.yc Popular")
  3. a Question to Followers act where a user asks their followers a question (Example: "What is the difference between these two elevator buttons?? pic.twitter.com/TzTaDIMr")

Word lists summarizing the discovered dialogue acts are shown below:

Ritter-naacl2010-wordlist.png


Quantitative Evaluation

The authors propose the following evaluation scheme: for each conversation in the test set, generate all permutations of its posts and compute the probability of each permutation under the model. Kendall's τ is then used to measure the similarity of the maximum-probability permutation to the original post order.
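A minimal sketch of this ordering evaluation is given below; log_prob stands in for the trained conversation model's scoring function (its name and interface are assumptions for illustration), and enumerating all n! permutations is only feasible for fairly short conversations:

  from itertools import permutations
  from scipy.stats import kendalltau

  def ordering_score(posts, log_prob):
      # Score every permutation of the posts under the model and keep the most probable one.
      n = len(posts)
      best_perm = max(permutations(range(n)),
                      key=lambda p: log_prob([posts[i] for i in p]))
      # Kendall's tau is in [-1, 1]; 1 means the model's preferred order is exactly the true order.
      tau, _ = kendalltau(range(n), best_perm)
      return tau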

Ritter-naacl2010-eval.png

In general, the Bayesian Conversation model outperforms the Conversation + Topic model, and the Conversation + Topic model outperforms the Conversation model.

Discussion

The paper proposes an unsupervised approach to dialogue act tagging. Specifically, the authors extend the Conversation model to separate topic words from dialogue words. The extended model discovers an interpretable set of dialogue acts.

The authors also introduce conversation ordering as a measure of conversation model quality.

Related Papers

The conversation model is inspired by the content model that appears in Barzilay and Lee (2004).

Study Plan

This paper assumes prior knowledge of topic models. For the basics about topic models, refer to the Study Plans on Yano et al NAACL 2009.

  • Content model
    • Regina Barzilay and Lillian Lee, "Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization" In Proc of HLT-NAACL 2004 pdf
  • Slice sampling
  • Chib-style estimation
    • Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. "Evaluation Methods for Topic Models" In ICML 2009 pdf
    • Iain Murray and Ruslan Salakhutdinov, "Evaluating Probabilities Under High-Dimensional Latent Variable Models" In NIPS 2008 pdf