Modeling Relational Events via Latent Classes
  
  
 

== Citation ==

Christopher DuBois, Padhraic Smyth. Modeling Relational Events via Latent Classes. KDD 2010.

== Online version ==

download here

== Summary ==

Many social network activities can be described as a series of dyadic events. In this paper an event is defined as a triple (sender, receiver, event_type). The authors assume that such events are generated by latent classes, and they propose a graphical model that identifies the latent class of each dyadic event, with inference implemented by both Gibbs sampling and Expectation-Maximization (EM).

== Methodology ==

It's assumed that relational events are generated by the following process:

* Draw the class distribution <math>\bar{\pi}</math> ~ Dirichlet(<math>\alpha</math>)
* Draw distributions:

<math>\bar{\theta_c}</math> ~ Dirichlet(<math>\beta</math>)

<math>\bar{\phi_c}</math> ~ Dirichlet(<math>\beta</math>)

<math>\bar{\psi_c}</math> ~ Dirichlet(<math>\beta</math>)

for all <math>c \in \{1 \ldots C\}</math>

* For each event:

(a) Draw <math>c</math> ~ Multinomial(<math>\bar{\pi}</math>), the event's class

(b) Draw <math>s|c</math> ~ Multinomial(<math>\bar{\theta_c}</math>), the event's sender

(c) Draw <math>r|c</math> ~ Multinomial(<math>\bar{\phi_c}</math>), the event's receiver

(d) Draw <math>a|c</math> ~ Multinomial(<math>\bar{\psi_c}</math>), the event's type

A small simulation of this process is sketched after the figure below.

[[File:Kdd2010_gm.png]]
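As a concrete illustration, here is a minimal simulation of the generative process in Python with NumPy. This is a sketch, not the paper's code; the dimensions and hyperparameter values (C, S, R, A, N, alpha, beta) are illustrative assumptions.

<pre>
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper): C latent classes,
# S senders, R receivers, A event types, N events.
C, S, R, A, N = 3, 10, 10, 5, 100
alpha, beta = 1.0, 0.5  # symmetric Dirichlet hyperparameters (assumed)

pi = rng.dirichlet(alpha * np.ones(C))            # class distribution
theta = rng.dirichlet(beta * np.ones(S), size=C)  # per-class sender distributions
phi = rng.dirichlet(beta * np.ones(R), size=C)    # per-class receiver distributions
psi = rng.dirichlet(beta * np.ones(A), size=C)    # per-class type distributions

events = []
for _ in range(N):
    c = rng.choice(C, p=pi)        # (a) the event's class
    s = rng.choice(S, p=theta[c])  # (b) the event's sender
    r = rng.choice(R, p=phi[c])    # (c) the event's receiver
    a = rng.choice(A, p=psi[c])    # (d) the event's type
    events.append((s, r, a))
</pre>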

It's not hard to work out the likelihood for the data:

[[File:Kdd2010_joint.png]]
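As a reading aid, the complete-data likelihood implied by the generative process factorizes as follows (a reconstruction from the process above, not copied from the figure):

<math>p(\mathbf{s}, \mathbf{r}, \mathbf{a}, \mathbf{c} \mid \bar{\pi}, \bar{\theta}, \bar{\phi}, \bar{\psi}) = \prod_{i=1}^{N} \bar{\pi}_{c_i} \, \bar{\theta}_{c_i s_i} \, \bar{\phi}_{c_i r_i} \, \bar{\psi}_{c_i a_i}</math>

where <math>N</math> is the number of events and <math>c_i, s_i, r_i, a_i</math> are the class, sender, receiver, and type of event <math>i</math>.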

Two inference methods, Gibbs sampling and EM, are implemented in the paper.
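For intuition about the Gibbs sampler, here is a sketch of one collapsed Gibbs sweep over the event class assignments, assuming the symmetric Dirichlet priors used in the simulation above; the paper's exact sampler may differ.

<pre>
import numpy as np

def gibbs_pass(events, z, C, S, R, A, alpha=1.0, beta=0.5, rng=None):
    """One collapsed Gibbs sweep over event class assignments z.

    events: list of (sender, receiver, type) triples; z: current class labels.
    A sketch under symmetric Dirichlet priors, not the paper's exact sampler.
    """
    rng = rng or np.random.default_rng()
    # Count tables: events per class, and per-class sender/receiver/type counts.
    n_c = np.bincount(z, minlength=C).astype(float)
    n_cs, n_cr, n_ca = np.zeros((C, S)), np.zeros((C, R)), np.zeros((C, A))
    for (s, r, a), c in zip(events, z):
        n_cs[c, s] += 1; n_cr[c, r] += 1; n_ca[c, a] += 1

    for i, (s, r, a) in enumerate(events):
        c = z[i]  # remove event i from the counts
        n_c[c] -= 1; n_cs[c, s] -= 1; n_cr[c, r] -= 1; n_ca[c, a] -= 1
        # p(c_i = c | everything else), up to a normalizing constant
        p = ((n_c + alpha)
             * (n_cs[:, s] + beta) / (n_c + S * beta)
             * (n_cr[:, r] + beta) / (n_c + R * beta)
             * (n_ca[:, a] + beta) / (n_c + A * beta))
        c = rng.choice(C, p=p / p.sum())
        z[i] = c  # add event i back under its newly sampled class
        n_c[c] += 1; n_cs[c, s] += 1; n_cr[c, r] += 1; n_ca[c, a] += 1
    return z
</pre>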

== Data ==

ACL Anthology

== Experimental Result ==

* Historical Trends in Computational Linguistics

To visualize trends, [http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation LDA] (fit with Gibbs sampling) is first applied to each year's papers, followed by [[UsesMethod::post hoc]] calculations based on the observed probability of each topic given the current year. Temporal topic models could be used instead: the Dynamic Topic Model (Blei and Lafferty, 2006) represents each year's documents as generated from a normal-distribution centroid over topics, with the following year's centroid generated from the preceding year's, while the Topics over Time model (Wang and McCallum, 2006) assumes that each document chooses its own time stamp based on a topic-specific beta distribution. Both, however, impose constraints on the time periods: the Dynamic Topic Model penalizes large changes from year to year, and the beta distributions in Topics over Time are relatively inflexible. Define <math>\hat{p}(z|y)</math> as the empirical probability that an arbitrary paper <math>d</math> written in year <math>y</math> was about topic <math>z</math>:

<math>\hat{p}(z|y) = \frac{1}{C}\sum_{d:t_d = y} \sum_{z'_i \in d} I(z'_i = z)</math>

where <math>I</math> is the indicator function, <math>t_d</math> is the year document <math>d</math> was written, and <math>1/C</math> is a normalizing constant. The probability mass associated with various topics over time is then plotted as a smoothed version of <math>\hat{p}(z|y)</math>. Topics becoming more prominent include classification, probabilistic models, statistical parsing, statistical machine translation, and lexical semantics, while computational semantics, conceptual semantics, and plan-based dialogue and discourse have declined.
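A minimal sketch of how <math>\hat{p}(z|y)</math> can be computed from sampled token-level topic assignments; the input format (one (year, topic_assignments) pair per document) is an assumption for illustration.

<pre>
from collections import Counter, defaultdict

def empirical_topic_year(docs):
    """Empirical p(z|y) from sampled topic assignments.

    docs: iterable of (year, topic_assignments) pairs, where topic_assignments
    holds one sampled topic id per token (assumed input format).
    Returns {year: {topic: probability}}.
    """
    counts = defaultdict(Counter)
    for year, assignments in docs:
        counts[year].update(assignments)
    p = {}
    for year, counter in counts.items():
        total = sum(counter.values())  # the 1/C normalization
        p[year] = {z: n / total for z, n in counter.items()}
    return p
</pre>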


* Is Computational Linguistics Becoming More Applied?

Looking at trends over time for applications such as machine translation, spelling correction, and dialogue systems, they find a clear trend toward an increase in applications over time.


* Differences and Similarities Among COLING, ACL and EMNLP

Define <math>\hat{p}(z|c)</math> as the empirical distribution of a topic <math>z</math> at a conference <math>c</math>:

<math>\hat{p}(z|c) = \frac{1}{C}\sum_{d:c_d = c} \sum_{z'_i \in d} I(z'_i = z)</math>

and define [[UsesMethod::topic entropy]] to measure the breadth of a conference in a given year:

<math>H(z|c,y) = - \sum_{i=1}^{K} \hat{p}(z_i|c,y) \log \hat{p}(z_i|c,y)</math>

Judging by topic entropy, COLING has historically been the broadest of the three conferences; ACL started with a fairly narrow focus, became nearly as broad as COLING during the 1990s, but has narrowed again in recent years; EMNLP's topic entropy reflects its status as a "special interest" conference.
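Topic entropy is straightforward to compute from such a distribution; a minimal sketch, reusing the dict-of-probabilities format from the snippet above:

<pre>
import math

def topic_entropy(p_topics):
    """Topic entropy H(z|c,y) of a topic distribution (dict topic -> prob).

    Higher entropy means the conference-year covers a broader mix of topics.
    """
    return -sum(p * math.log(p) for p in p_topics.values() if p > 0)
</pre>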

Finally, the [[UsesMethod::Jensen-Shannon divergence]] (JS divergence) is used to investigate whether or not the topic distributions of the conferences are converging:

<math>D_{JS}(P||Q) = \frac{1}{2}D_{KL}(P||R) + \frac{1}{2}D_{KL}(Q||R)</math>

<math>R = \frac{1}{2}(P+Q)</math>

From the JS divergence, they show that the topic distributions of all three conferences are converging.
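A minimal sketch of the JS divergence computation, assuming the two topic distributions are given as probability vectors over the same topic set:

<pre>
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    p, q: probability vectors over the same set of topics.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    r = 0.5 * (p + q)  # the mixture R = (P + Q) / 2
    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, r) + 0.5 * kl(q, r)
</pre>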

== Related papers ==

Blei and Lafferty, ICML2006: David Blei and John D. Lafferty. 2006. Dynamic topic models. In ICML.

Wang and McCallum, KDD2006: Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-Markov continuous-time model of topical trends. In KDD, pages 424–433, New York, NY, USA. ACM.