Relational topic model

This is a technical [[category::method]] discussed in [[Social Media Analysis 10-802 in Spring 2010]].
 
The relational topic model (RTM) extends the LDA [[ExtendsMethod::Topic model]] by modeling the presence of (observed) links between documents. The probability of a link between two documents depends on the similarity of their topic proportions. A weight vector eta lets the model capture the fact that some topics are more predictive of links than others. For example, if there is one topic for spam and another for general conversation, documents about spam may be far more likely to link to each other than documents about general conversation. Link prediction is thus a [[category::logistic regression]] with weight vector eta over a similarity measure of the two documents, namely the Hadamard product of their topic proportions.
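
Concretely, the sigmoid form of the link function can be written as <math>p(y_{d,d'}=1) = \sigma\left(\eta^\top(\bar z_d \circ \bar z_{d'}) + \nu\right)</math>, where <math>\bar z_d</math> is document <math>d</math>'s mean topic assignment vector and <math>\nu</math> is an intercept. The Python sketch below is illustrative only; the function names and toy numbers are assumptions, not code from the paper:

<pre>
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def link_probability(z_bar_d, z_bar_dp, eta, nu):
    """Probability of a link between documents d and d', given their
    mean topic proportions (length-K vectors summing to 1)."""
    # Hadamard (element-wise) product: large where both documents
    # put mass on the same topic.
    similarity = z_bar_d * z_bar_dp
    return sigmoid(eta @ similarity + nu)

# Toy example with K = 3 topics. Both documents lean on topic 0
# (say, "spam"), and eta[0] is large, so a link is likely.
eta  = np.array([4.0, 0.5, 0.5])
nu   = -2.0
z_d  = np.array([0.8, 0.1, 0.1])
z_dp = np.array([0.7, 0.2, 0.1])
print(link_probability(z_d, z_dp, eta, nu))  # about 0.56
</pre>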
 
The RTM uses variational inference to approximate the posterior over the hidden variables, and variational EM to learn the parameter values. It models only undirected links, and inference complexity scales with the number of observed links (rather than the number of document pairs), enabling efficient inference when the citation graph is sparse.
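
The scaling claim follows from the link term of the objective being a sum over observed links only. A minimal sketch, assuming a (D, K) matrix of per-document mean topic proportions and a list of undirected link pairs (again an illustration, not the authors' implementation):

<pre>
import numpy as np

def link_term(z_bar, links, eta, nu):
    """Link contribution to the objective: a sum over observed
    (undirected) links only, so one pass costs O(|links|) rather
    than O(D^2) over all document pairs.

    z_bar : (D, K) array of per-document mean topic proportions
    links : iterable of (d, d2) index pairs, one per undirected link
    """
    total = 0.0
    for d, d2 in links:
        s = eta @ (z_bar[d] * z_bar[d2]) + nu  # Hadamard similarity
        total += -np.log1p(np.exp(-s))         # log sigmoid
    return total
</pre>

Because only observed links enter this sum, a sparse citation graph keeps each pass over the data cheap.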
 
== Relevant Papers ==