Modeling Relational Events via Latent Classes


== Citation ==

Christopher DuBois, Padhraic Smyth. Modeling Relational Events via Latent Classes. KDD 2010

== Online version ==

download here

== Summary ==

Many social network activities can be described as a series of dyadic events. In this paper an event is defined as a triple (sender, receiver, event_type). The authors assume that such events are generated by latent classes, and they propose a graphical model that recovers these latent classes from the observed dyadic events, with inference implemented via Gibbs sampling and Expectation-Maximization.

== Methodology ==

Relational events are assumed to be generated by the following process (a small simulation sketch follows the figure below):

* Draw the class distribution <math>\pi \sim \mathrm{Dirichlet}(\alpha)</math>.
* For each class <math>c \in \{1,\dots,C\}</math>, draw per-class distributions over senders, receivers, and event types:
** <math>\theta^{(s)}_c \sim \mathrm{Dirichlet}(\beta^{(s)})</math>
** <math>\theta^{(r)}_c \sim \mathrm{Dirichlet}(\beta^{(r)})</math>
** <math>\theta^{(a)}_c \sim \mathrm{Dirichlet}(\beta^{(a)})</math>
* For each event <math>i</math>:
** (a) Draw <math>z_i \sim \mathrm{Multinomial}(\pi)</math>, the event's class
** (b) Draw <math>s_i \sim \mathrm{Multinomial}(\theta^{(s)}_{z_i})</math>, the event's sender
** (c) Draw <math>r_i \sim \mathrm{Multinomial}(\theta^{(r)}_{z_i})</math>, the event's receiver
** (d) Draw <math>a_i \sim \mathrm{Multinomial}(\theta^{(a)}_{z_i})</math>, the event's type

[[File:Kdd2010_gm.png]]
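The process above is straightforward to simulate. Below is a minimal Python sketch that draws synthetic events under this kind of model; the dimensions, hyperparameter values, and symmetric Dirichlet priors are illustrative assumptions, not values from the paper.

<pre>
import numpy as np

# Illustrative sizes and hyperparameters (assumptions, not from the paper).
C, S, R, A = 3, 10, 10, 5        # latent classes, senders, receivers, event types
N = 1000                         # number of events to simulate
alpha, beta = 1.0, 0.5           # symmetric Dirichlet concentration parameters

rng = np.random.default_rng(0)

# Class distribution and per-class multinomials over senders, receivers, and types.
pi = rng.dirichlet(alpha * np.ones(C))
theta_s = rng.dirichlet(beta * np.ones(S), size=C)   # shape (C, S)
theta_r = rng.dirichlet(beta * np.ones(R), size=C)   # shape (C, R)
theta_a = rng.dirichlet(beta * np.ones(A), size=C)   # shape (C, A)

# For each event: draw its class, then its sender, receiver, and type.
z = rng.choice(C, size=N, p=pi)
senders   = np.array([rng.choice(S, p=theta_s[c]) for c in z])
receivers = np.array([rng.choice(R, p=theta_r[c]) for c in z])
actions   = np.array([rng.choice(A, p=theta_a[c]) for c in z])

events = np.column_stack([senders, receivers, actions])  # (N, 3) integer triples
</pre>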

The joint likelihood of the data follows directly from this generative process:

[[File:Kdd2010_joint.png]]
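Written out under the notation above, the joint distribution of the observed events and latent variables factorizes as follows (a reconstruction consistent with the generative process, not necessarily the paper's exact notation):

<math>
p(\mathbf{s},\mathbf{r},\mathbf{a},\mathbf{z},\pi,\theta \mid \alpha,\beta)
= p(\pi \mid \alpha)
\prod_{c=1}^{C} p(\theta^{(s)}_c \mid \beta^{(s)})\, p(\theta^{(r)}_c \mid \beta^{(r)})\, p(\theta^{(a)}_c \mid \beta^{(a)})
\prod_{i=1}^{N} \pi_{z_i}\, \theta^{(s)}_{z_i,s_i}\, \theta^{(r)}_{z_i,r_i}\, \theta^{(a)}_{z_i,a_i}
</math>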

Two inference methods are implemented in the paper: Gibbs sampling and Expectation-Maximization (EM).
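As an illustration of the Gibbs sampling side, one sweep of a collapsed Gibbs sampler for this kind of mixture of independent multinomials could look like the sketch below. The shared symmetric prior across senders, receivers, and types and the hyperparameter names are simplifying assumptions for illustration, not the paper's exact update equations.

<pre>
import numpy as np

def gibbs_sweep(events, z, C, alpha, beta, rng):
    """One sweep of collapsed Gibbs sampling over the latent class of each event.

    events: (N, 3) int array of (sender, receiver, type) indices
    z:      (N,)  int array of current class assignments (updated in place)
    """
    N = len(events)
    S, R, A = events.max(axis=0) + 1           # vocabulary sizes (assumes dense indices)

    # Count tables implied by the current assignments.
    n_c = np.bincount(z, minlength=C).astype(float)
    n_cs = np.zeros((C, S)); n_cr = np.zeros((C, R)); n_ca = np.zeros((C, A))
    for (s, r, a), c in zip(events, z):
        n_cs[c, s] += 1; n_cr[c, r] += 1; n_ca[c, a] += 1

    for i in range(N):
        s, r, a = events[i]
        c_old = z[i]
        # Remove event i from the counts.
        n_c[c_old] -= 1; n_cs[c_old, s] -= 1; n_cr[c_old, r] -= 1; n_ca[c_old, a] -= 1
        # Conditional probability of each class given all other assignments.
        p = (n_c + alpha) \
            * (n_cs[:, s] + beta) / (n_c + S * beta) \
            * (n_cr[:, r] + beta) / (n_c + R * beta) \
            * (n_ca[:, a] + beta) / (n_c + A * beta)
        c_new = rng.choice(C, p=p / p.sum())
        # Add event i back under its new class.
        z[i] = c_new
        n_c[c_new] += 1; n_cs[c_new, s] += 1; n_cr[c_new, r] += 1; n_ca[c_new, a] += 1
    return z
</pre>

The EM alternative would instead alternate between computing class responsibilities for each event (E-step) and re-estimating <math>\pi</math> and the per-class multinomials (M-step).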

== Data ==

The data set consists of international events involving entities from 450 countries over the 2000-2005 time period; it has been used by political scientists to explore international relations and policy. The authors used an automated system to code 3,575,897 events from Reuters news reports, each taking the form [entity A] [action] [entity B]. Actions in this data set consist of 247 possible types, such as judicial action, military action, and so forth.
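To make the event representation concrete, a raw coded triple of the form [entity A] [action] [entity B] can be mapped to the integer (sender, receiver, type) indices the model works with. The entity and action strings below are invented placeholders, not records from the actual data set.

<pre>
import numpy as np

# Hypothetical coded events in [entity A] [action] [entity B] form.
raw_events = [
    ("USA", "consult",         "CHN"),
    ("RUS", "military action", "GEO"),
    ("CHN", "judicial action", "USA"),
]

# Index maps for actors (shared by senders and receivers) and action types.
actor_ids  = {name: i for i, name in
              enumerate(sorted({e[0] for e in raw_events} | {e[2] for e in raw_events}))}
action_ids = {name: i for i, name in enumerate(sorted({e[1] for e in raw_events}))}

# Each event becomes an integer (sender, receiver, type) triple.
events = np.array([(actor_ids[a], actor_ids[b], action_ids[act])
                   for a, act, b in raw_events])
</pre>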

A quick example of what the model's output looks like:

[[File:Kdd2010_ex.png]]

== Experimental Result ==

Three prediction schemes are compared:

* Uniform baseline: all events are predicted to be equally likely.
* Multinomial baseline: each event is predicted according to its observed frequency.
* Graphical model: predictions come from the model described above.

A sketch of how the two baselines could be scored is given after the results figure below.

[[File:Kdd2010_result.png]]
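For concreteness, the two baselines could be scored on held-out events roughly as in the sketch below. The add-one style smoothing constant is an assumption so that unseen test triples do not get zero probability, and the model's own predictive performance (shown in the figure) is not reproduced here.

<pre>
import numpy as np

def baseline_log_likelihood(train, test, S, R, A, smoothing=1.0):
    """Mean per-event log-likelihood of held-out events under the two baselines.

    train, test: iterables of integer (sender, receiver, type) triples.
    smoothing:   pseudo-count so unseen test events keep nonzero probability.
    """
    # Uniform baseline: every possible (sender, receiver, type) triple is equally likely.
    uniform = -np.log(S * R * A)

    # Multinomial baseline: probability proportional to observed training frequency.
    counts = np.full((S, R, A), smoothing)
    for s, r, a in train:
        counts[s, r, a] += 1
    probs = counts / counts.sum()
    multinomial = np.mean([np.log(probs[s, r, a]) for s, r, a in test])

    return uniform, multinomial
</pre>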