Latent semantic indexing


This is a method discussed in Information Extraction 10-707 in Fall 2010.

Latent Semantic Indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called Singular Value Decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in a collection of text. LSI is based on the principle that words used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between terms that occur in similar contexts.

It is called Latent Semantic Indexing because of its ability to correlate semantically related terms that are latent in a collection of text. Also known as Latent Semantic Analysis (LSA), the method not only uncovers the latent semantic structure underlying the usage of words in a body of text, but has also been applied in computer vision as a dimensionality reduction technique.

Three-Step Process in Latent Semantic Indexing

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves (1) constructing a weighted term-document matrix (or a feature vector matrix in the computer vision case), (2) performing a Singular Value Decomposition on the matrix, and (3) using the resulting matrices to identify the concepts contained in the text.

LSI begins by constructing a term-document matrix, A, to identify the occurrences of the unique terms within a collection of documents. In a term-document matrix, each term is represented by a row and each document by a column, with each matrix cell, a_ij, initially representing the number of times the associated term i appears in the indicated document j. This matrix is usually very large and very sparse.
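As a concrete sketch, a term-document count matrix can be assembled with scikit-learn's CountVectorizer (assumed to be available; the four-document toy corpus below is invented purely for illustration):

    from sklearn.feature_extraction.text import CountVectorizer

    # Toy corpus, invented for illustration; any document collection works.
    docs = [
        "human machine interface for computer applications",
        "a survey of user opinion of computer system response time",
        "the generation of random binary trees",
        "graph minors and tree widths",
    ]

    # CountVectorizer yields a sparse document-term matrix; transposing it
    # gives the m-by-n term-document matrix A described above
    # (rows = unique terms, columns = documents).
    vectorizer = CountVectorizer()
    A = vectorizer.fit_transform(docs).T  # shape: (num_terms, num_docs)

    print(A.shape)                                 # (m, n)
    print(vectorizer.get_feature_names_out()[:5])  # first few term labels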

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_ij of A, into the product of a local term weight, l_ij, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents.
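One common choice, sketched below with numpy, pairs a log local weight with an entropy global weight (log-entropy weighting); this is one option among several, with tf-idf-style schemes being an equally standard alternative:

    import numpy as np

    def log_entropy_weight(A):
        """Apply log-entropy weighting to a dense m-by-n count matrix A.

        Each cell becomes l_ij * g_i: a log local weight times an
        entropy-based global weight in [0, 1].
        """
        A = np.asarray(A, dtype=float)
        n = A.shape[1]
        local = np.log1p(A)                    # l_ij = log(1 + tf_ij)
        gf = A.sum(axis=1, keepdims=True)      # total count of each term
        p = np.divide(A, gf, out=np.zeros_like(A), where=gf > 0)
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(p > 0, p * np.log(p), 0.0)
        g = 1.0 + plogp.sum(axis=1, keepdims=True) / np.log(n)
        return local * g

    # Applied to the count matrix from the previous sketch:
    # W = log_entropy_weight(A.toarray())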

A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by transforming the single term-frequency matrix, A, into three other matrices: a term-concept vector matrix, T, a singular values matrix, S, and a concept-document vector matrix, D, which satisfy the following relations:

A = T S D^T
T^T T = D^T D = I_r
S_1,1 ≥ S_2,2 ≥ … ≥ S_r,r > 0,    S_i,j = 0 for i ≠ j

In these formulas, A is the supplied m by n weighted matrix of term frequencies in a collection of text, where m is the number of unique terms and n is the number of documents. T is a computed m by r matrix of term vectors, where r is the rank of A, a measure of its unique dimensions (r ≤ min(m, n)). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors.
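A minimal numpy sketch (using a small random stand-in rather than a real weighted term-document matrix) exhibits the decomposition and the relations above:

    import numpy as np

    # Stand-in for the weighted m-by-n term-document matrix A.
    A = np.random.default_rng(0).random((12, 9))

    # numpy's thin SVD returns T (m x p), the singular values s (length p),
    # and D^T (p x n), where p = min(m, n); columns beyond rank(A) pair
    # with zero singular values and can be dropped.
    T, s, Dt = np.linalg.svd(A, full_matrices=False)
    S = np.diag(s)

    # The factorization and orthonormality relations quoted above:
    assert np.allclose(A, T @ S @ Dt)              # A = T S D^T
    assert np.allclose(T.T @ T, np.eye(len(s)))    # T^T T = I
    assert np.allclose(Dt @ Dt.T, np.eye(len(s)))  # D^T D = I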

The LSI modification to a standard SVD is to reduce the rank by truncating the singular value matrix S to size k ≪ r, typically with k on the order of 100 to 300 dimensions, effectively reducing the term and document vector matrices to sizes m by k and n by k respectively. The SVD operation, together with this truncation, preserves the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

A ≈ A_k = T_k S_k D_k^T
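Continuing the numpy sketch above (reusing A, T, s, and Dt), the rank-k truncation keeps only the first k singular triplets:

    k = 3                          # toy value; real systems use roughly 100-300
    Tk, Sk, Dtk = T[:, :k], np.diag(s[:k]), Dt[:k, :]

    Ak = Tk @ Sk @ Dtk             # the rank-k approximation A ≈ A_k
    print(np.linalg.norm(A - Ak))  # approximation error shrinks as k grows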

Efficient LSI algorithms compute only the first k singular values and the corresponding term and document vectors, rather than computing a full SVD and then truncating it.
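SciPy's svds is one such routine; the sketch below (reusing the stand-in A from above) computes only the k largest singular triplets, and scikit-learn's TruncatedSVD wraps the same idea for sparse term-document matrices:

    import numpy as np
    from scipy.sparse.linalg import svds

    # svds computes only the k largest singular triplets of a dense or
    # sparse matrix, avoiding the full decomposition. It returns singular
    # values in ascending order, so they are reversed here for consistency
    # with the convention S_1,1 >= S_2,2 >= ... above.
    Tk, sk, Dtk = svds(A, k=3)
    order = np.argsort(sk)[::-1]
    Tk, sk, Dtk = Tk[:, order], sk[order], Dtk[order, :]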
