Rosen-Zvi et al UAI 2004

Revision as of 21:46, 26 September 2012

This is a paper discussed in Social Media Analysis 10-802 in Fall 2012.

Citation

The Author-Topic Model for Authors and Documents. Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, Padhraic Smyth. In Proceedings of UAI 2004, pages 487-494.

Online version

The Author-Topic Model for Authors and Documents

Summary

This paper proposes the generative author-topic model, which extends LDA to jointly model document contents and their authors. Each author's interests are represented by a multinomial distribution over topics, while each topic is a multinomial distribution over words. Model estimation is performed with Gibbs sampling. Experiments present the author-topic and topic-word distributions discovered on the NIPS and CiteSeer datasets.
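The generative process summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy dimensions, hyperparameter values, and variable names are hypothetical, and the theta/phi parameters are drawn directly from their Dirichlet priors rather than estimated by Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
n_authors, n_topics, vocab_size = 4, 3, 10
alpha, beta = 0.5, 0.1  # symmetric Dirichlet hyperparameters

# theta[a]: author a's multinomial over topics (their "interests")
# phi[t]:  topic t's multinomial over words
theta = rng.dirichlet(alpha * np.ones(n_topics), size=n_authors)
phi = rng.dirichlet(beta * np.ones(vocab_size), size=n_topics)

def generate_document(author_ids, n_words):
    """Generate one document under the author-topic model: for each
    word, pick one of the document's authors uniformly, draw a topic
    from that author's topic distribution, then draw the word from
    that topic's word distribution."""
    words = []
    for _ in range(n_words):
        a = rng.choice(author_ids)            # author picked uniformly
        z = rng.choice(n_topics, p=theta[a])  # topic from author's interests
        w = rng.choice(vocab_size, p=phi[z])  # word from topic
        words.append(int(w))
    return words

doc = generate_document([0, 2], n_words=20)
```

In inference the direction is reversed: Gibbs sampling assigns each observed word a latent (author, topic) pair, from which theta and phi are recovered.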

Evaluation

This paper evaluates the model in three ways:

 - qualitatively presenting topics and author interests learned from the two corpora
 - comparing the perplexity (predictive power) of the author-topic model with that of LDA
 - illustrating potential applications of the author-topic model in author similarity computation and reviewer recommendation
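Two of the quantities above are easy to make concrete. The sketch below shows the standard held-out perplexity (exponentiated negative mean per-word log-likelihood, lower is better) and a symmetric KL divergence between two authors' topic distributions as an author-distance; both are generic formulations, and the function names and inputs are illustrative rather than taken from the paper.

```python
import numpy as np

def perplexity(word_log_probs, n_words):
    """Held-out perplexity: exp(-(1/N) * sum of per-word log-likelihoods).
    Lower perplexity means better predictive power."""
    return float(np.exp(-np.sum(word_log_probs) / n_words))

def symmetric_kl(p, q):
    """Symmetrized KL divergence KL(p||q) + KL(q||p) between two
    authors' topic distributions; smaller means more similar authors,
    which is the basis for similarity and reviewer-recommendation use."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Sanity check: a model that assigns every word probability 1/50
# has perplexity exactly 50.
uniform_lp = np.log(np.full(20, 1.0 / 50.0))
```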

Discussion
