Reisinger et al 2010: Spherical Topic Models

Citation

Joseph Reisinger, Austin Waters, Bryan Silverthorn, and Raymond J. Mooney, "Spherical Topic Models", in Proceedings of the 27th International Conference on Machine Learning (ICML 2010), 2010.

Online version

Reisinger et al 2010

Summary

This paper presents the Spherical Admixture Model (SAM) together with a variational inference method for it. SAM is a Bayesian generative model for topic modeling that, unlike Latent Dirichlet Allocation (LDA), represents documents as points on a high-dimensional spherical manifold. The model treats the data as directional: documents are compared by cosine distance and other similarity measures from directional statistics rather than by raw counts. The authors claim that the spherical topic modeling approach outperforms existing models such as LDA.

Motivations

Traditional topic models such as Latent Dirichlet Allocation (LDA) assume a multinomial distribution for the document likelihood and therefore model only raw word frequencies; they are insensitive to which words are present or absent in a document. To overcome this issue, the authors propose the Spherical Admixture Model, which captures both word frequency and word presence/absence. In addition, by assuming a von Mises-Fisher distribution on the unit hypersphere, they hope to better fit sparse, high-dimensional text data.

Brief Description of the method

The paper first introduces the advantages of the von Mises-Fisher distribution for text, then presents the Spherical Admixture Model and a variational inference method for approximating its posterior. In this section, we first summarize the relevant properties of the von Mises-Fisher distribution, then introduce the proposed model and the variational inference method.

von Mises-Fisher Distribution

In LDA, the multinomial distribution over words assigns probabilities to integer vectors of event counts, i.e., the raw counts of each word in a document, in $\mathbb{Z}_{\ge 0}^d$. In contrast to the multinomial distribution, the von Mises-Fisher (vMF) distribution is a probability distribution on the $(d-1)$-dimensional unit sphere in $\mathbb{R}^d$, with density function

$f(x; \mu, \kappa) = c_d(\kappa)\, \exp(\kappa\, \mu^\top x),$

where $\mu$ is the mean direction with $\|\mu\| = 1$ and $\kappa \ge 0$ is the concentration parameter. In addition,

$c_d(\kappa) = \dfrac{\kappa^{d/2-1}}{(2\pi)^{d/2}\, I_{d/2-1}(\kappa)}$

is the normalization factor, where $I_{d/2-1}$ is the modified Bessel function of the first kind and order $d/2 - 1$.

Intuitively, the vMF distribution can be viewed as a multivariate Gaussian with spherical covariance, parameterized by cosine distance rather than Euclidean distance. Cosine distance is commonly used in directional statistics: it compares the directions of $L_2$-normalized feature vectors and corresponds to the normalized correlation coefficient.
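As a concrete check of the density above, here is a minimal sketch of the vMF log-density in Python (assuming NumPy and SciPy are available; the vectors and $\kappa$ below are illustrative values, not from the paper):

  import numpy as np
  from scipy.special import ive  # exponentially scaled Bessel I, for numerical stability

  def vmf_log_density(x, mu, kappa):
      """log f(x; mu, kappa) for unit vectors x, mu in R^d."""
      d = mu.shape[0]
      # log c_d(kappa) = (d/2 - 1) log kappa - (d/2) log(2 pi) - log I_{d/2-1}(kappa);
      # since ive(v, k) = iv(v, k) * exp(-k), we have log iv = log ive + kappa.
      log_c = ((d / 2 - 1) * np.log(kappa)
               - (d / 2) * np.log(2 * np.pi)
               - (np.log(ive(d / 2 - 1, kappa)) + kappa))
      return log_c + kappa * (mu @ x)

  mu = np.array([1.0, 0.0, 0.0])   # mean direction on the 2-sphere
  x = np.array([0.8, 0.6, 0.0])    # unit vector about 37 degrees from mu
  print(vmf_log_density(x, mu, kappa=10.0))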

The authors also argue that vMF is sensitive to the presence/absence of words, whereas the multinomial distribution is not. They give an example: if document D1 has count vector [1,1,1] and document D2 has count vector [3,0,0], then under a multinomial with a uniform word distribution (e.g., probabilities (1/3, 1/3, 1/3)) the two documents receive the same likelihood. In contrast, vMF distinguishes them through their cosine distances.
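This can be verified in a few lines of Python (a toy check assuming the uniform word distribution above; the counts are the ones from the example):

  import numpy as np

  d1 = np.array([1, 1, 1])   # three distinct words, once each
  d2 = np.array([3, 0, 0])   # one word, three times
  beta = np.ones(3) / 3

  # Token-level likelihood prod_w beta_w^{count_w}, as in LDA's word model:
  print(np.prod(beta ** d1))  # (1/3)^3 ~ 0.037
  print(np.prod(beta ** d2))  # (1/3)^3 ~ 0.037 -- identical

  # Cosine similarity distinguishes the two documents:
  def cos(a, b):
      return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

  print(cos(d1, beta))  # 1.0
  print(cos(d2, beta))  # ~0.577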

The Spherical Admixture Model

[Figure: Lda sam.png, graphical models of LDA and SAM]

The Spherical Admixture Model (SAM) differs from LDA in that it does not model each word as drawn from a topic distribution. Instead, it models the document as a whole and combines topics by a weighted directional average. A simple generative story for SAM is:

  • Draw a set of T topics $\phi_1, \dots, \phi_T$ on the unit hypersphere.
  • For each document d, draw topic weights $\theta_d$ from Dirichlet($\alpha$).
  • Draw a document vector $v_d$ from vMF with mean $\bar{v}_d = \frac{\sum_t \theta_{dt}\phi_t}{\|\sum_t \theta_{dt}\phi_t\|}$.

The complete model can be represented as follows:

  • $\mu \sim \mathrm{vMF}(m, \kappa_0)$ (corpus mean)
  • $\phi_t \sim \mathrm{vMF}(\mu, \xi)$ (topics)
  • $\theta_d \sim \mathrm{Dirichlet}(\alpha)$ (topic proportions)
  • $\bar{v}_d = \frac{\Phi\theta_d}{\|\Phi\theta_d\|}$ (spherical average)
  • $v_d \sim \mathrm{vMF}(\bar{v}_d, \kappa)$ (documents)

Here, $\mu$ is the corpus mean direction, $\xi$ controls the concentration of the topics around $\mu$, the elements of $\theta_d$ are the mixing proportions for document d, $\phi_t$ is the mean of topic t, and $v_d$ is the observed vector for document d.
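The deterministic part of this generative story, the spherical average, is easy to sketch in Python (sampling from vMF itself is omitted, since it needs a rejection sampler such as Wood's; the dimensions and Dirichlet parameter below are made-up illustrative values):

  import numpy as np

  rng = np.random.default_rng(0)
  V, T = 1000, 5                      # vocabulary size, number of topics

  # Stand-ins for unit-norm topics phi_t (columns of Phi):
  Phi = rng.normal(size=(V, T))
  Phi /= np.linalg.norm(Phi, axis=0)

  # Topic proportions theta_d ~ Dirichlet(alpha):
  theta = rng.dirichlet(np.full(T, 0.1))

  # Spherical average bar_v_d = Phi theta_d / ||Phi theta_d||, the vMF
  # mean direction from which the document vector v_d would be drawn:
  bar_v = Phi @ theta
  bar_v /= np.linalg.norm(bar_v)
  print(np.linalg.norm(bar_v))        # 1.0: bar_v lies on the unit hypersphere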

Variational Inference

In order to set the parameters of the above model, we need to infer the posterior distribution over the corpus mean, topics, and per-document topic proportions, $p(\mu, \phi, \theta \mid v)$. Since exact inference is intractable, the authors propose a variational mean-field method for approximate inference. In the mean-field approach, the true posterior is approximated by a fully factored distribution with a simpler parametric form, and EM is used to perform inference under this approximation.

In the E step, the authors use gradient ascent to update the variational topic means and the per-document topic proportions.
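The following toy sketch conveys the flavour of such a gradient-ascent update: it fits a single document's proportions $\theta$ by maximizing the cosine between the spherical average $\Phi\theta$ and the document vector. This is a simplified stand-in for the paper's full variational objective, and the softmax parameterization of $\theta$ is an assumption of this sketch, not the paper's method:

  import numpy as np

  rng = np.random.default_rng(1)
  V, T = 50, 4
  Phi = rng.normal(size=(V, T))
  Phi /= np.linalg.norm(Phi, axis=0)
  v = Phi[:, 0] + 0.3 * Phi[:, 1]    # document lying near topics 0 and 1
  v /= np.linalg.norm(v)

  eta = np.zeros(T)                  # unconstrained parameters; theta = softmax(eta)
  for _ in range(200):
      theta = np.exp(eta); theta /= theta.sum()
      u = Phi @ theta
      n = np.linalg.norm(u)
      grad_u = v / n - (v @ u) * u / n**3           # d cos(u, v) / d u
      grad_theta = Phi.T @ grad_u
      eta += 0.5 * theta * (grad_theta - theta @ grad_theta)  # chain rule through softmax
  print(np.round(theta, 3))          # weight concentrates on topics 0 and 1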

Dataset and Experiment Settings

The authors conduct three experiments on three different datasets. In the first, they use the CMU news-20 collection to classify Usenet posts. In the second, the task is to detect thematic shifts in the Italian text of Niccolò Machiavelli's Il Principe. The last task is to classify natural scenes in the 13-scene database. Four models are compared:

  • LDA
  • movMF - mixtures of vMF model by Banerjee et al., 2005
  • SAM on document vectors that can contain both positive and negative entries.
  • SAM on document vectors with only positive entries.

In addition to the three objective experiments, the authors also perform a subjective evaluation of topic interpretability.

Experimental Results

The authors performed three major experiments: Usenet post classification on the CMU news-20 collection, thematic shift detection in Il Principe, and natural scene classification on the 13-scene database.


Related Papers

This paper is related to prior work along three dimensions: topic modeling (e.g., Latent Dirichlet Allocation), directional statistics (e.g., the mixture-of-vMF model of Banerjee et al., 2005), and variational inference for approximate posterior estimation.