This is a [[category::method]] discussed in [[Information Extraction 10-707 in Fall 2010]].
 
'''Latent Semantic Indexing''' ('''LSI''') is an indexing and retrieval method that uses a mathematical technique called [[Singular value decomposition|Singular Value Decomposition (SVD)]] to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.<ref>Deerwester, S., et al, Improving Information Retrieval with Latent Semantic Indexing, Proceedings of the 51st Annual Meeting of the American Society for Information Science 25, 1988, pp. 36–40.</ref>
It is called Latent Semantic Indexing because of its ability to correlate semantically related terms that are latent in a collection of text. First applied to text at Bell Laboratories in the late 1980s, the method, also known as [[Latent semantic analysis|Latent Semantic Analysis (LSA)]], uncovers the underlying latent semantic structure in the usage of words in a body of text; the same technique is also used in computer vision for dimensionality reduction. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don't share a specific word or words with the search criteria.

LSI is also an application of [[correspondence analysis|Correspondence Analysis]], a multivariate statistical technique developed by [[Jean-Paul Benzécri]]<ref>{{cite book | author = Benzécri, J.-P. | year = 1973 | title = L'Analyse des Données. Volume II. L'Analyse des Correspondences | publisher = Dunod | location = Paris, France }}</ref> in the early 1970s, to a [[contingency table|Contingency Table]] built from word counts in documents.
 
  
 
__TOC__
 
== Three-Step Process in Latent Semantic Indexing ==

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text.  In general, the process involves (1) constructing a weighted term-document matrix (or a feature-vector matrix, in the computer-vision setting), (2) performing a '''Singular Value Decomposition''' on the matrix, and (3) using the resulting matrices to identify the concepts contained in the text.

=== Term Document Matrix ===
LSI begins by constructing a term-document matrix, <math>A</math>, to identify the occurrences of the <math>m</math> unique terms within a collection of <math>n</math> documents.  In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, <math>a_{ij}</math>, initially representing the number of times the associated term appears in the indicated document, <math>\mathrm{tf_{ij}}</math>.  This matrix is usually very large and very sparse.
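As a concrete illustration, the following minimal Python sketch builds such a matrix (it assumes scikit-learn's <code>CountVectorizer</code> for tokenization, though any tokenizer producing raw counts would do; the three toy documents are invented for illustration):

<pre>
# Minimal sketch: build the m-by-n term-document count matrix A.
# Assumes scikit-learn is installed; any tokenizer producing raw counts works.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "human machine interface for computer applications",
    "a survey of user opinion of computer system response time",
    "relation of user perceived response time to error measurement",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)   # sparse n-by-m (documents-by-terms)
A = X.T.toarray()                    # transpose to the m-by-n convention; dense only for clarity

terms = vectorizer.get_feature_names_out()   # requires scikit-learn >= 1.0
m, n = A.shape                               # m unique terms, n documents
print(m, "terms x", n, "documents")          # each cell holds the raw count tf_ij
</pre>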
 
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data.  The weighting functions transform each cell, <math>a_{ij}</math> of <math>A</math>, to be the product of a local term weight, <math>l_{ij}</math>, which describes the relative frequency of a term in a document, and a global weight, <math>g_i</math>, which describes the relative frequency of the term within the entire collection of documents.
 
Some common local weighting functions<ref>Berry, M. W., and Browne, M., Understanding Search Engines: Mathematical Modeling and Text Retrieval, Society for Industrial and Applied Mathematics, Philadelphia, 2005.</ref> are defined in the following table.
 
 
 
{| class="wikitable" border="1" style="width:60%" cellpadding="25" cellspacing="5" align="center"
|-
| style="width:22%" | '''Binary''' || <math>l_{ij} = 1</math> if the term exists in the document, or else <math>0</math>
|-
| style="width:22%" | '''TermFrequency''' || <math>l_{ij} = \mathrm{tf}_{ij}</math>, the number of occurrences of term <math>i</math> in document <math>j</math>
|-
| style="width:22%" | '''Log''' || <math>l_{ij} = \log(\mathrm{tf}_{ij} + 1)</math>
|-
| style="width:22%" | '''Augnorm''' || <math>l_{ij} = \frac{\Big(\frac{\mathrm{tf}_{ij}}{\max_i(\mathrm{tf}_{ij})}\Big) + 1}{2}</math>
|}
 
 
 
Some common global weighting functions are defined in the following table.
 
 
 
{| class="wikitable" border="1" style="width:60%" cellpadding="25" cellspacing="5" align="center"
|-
| style="width:22%" | '''Binary''' || <math>g_i = 1</math>
|-
| style="width:22%" | '''Normal''' || <math>g_i = \frac{1}{\sqrt{\sum_j \mathrm{tf}_{ij}^2}}</math>
|-
| style="width:22%" | '''GfIdf''' || <math>g_i = \mathrm{gf}_i / \mathrm{df}_i</math>, where <math>\mathrm{gf}_i</math> is the total number of times term <math>i</math> occurs in the whole collection, and <math>\mathrm{df}_i</math> is the number of documents in which term <math>i</math> occurs
|-
| style="width:22%" | '''Idf''' || <math>g_i = 1+ \log_2 \frac{n}{\mathrm{df}_i}</math>
|-
| style="width:22%" | '''Entropy''' || <math>g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}</math>, where <math>p_{ij} = \frac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}</math>
|}
 
 
 
Empirical studies of LSI report that log-entropy weighting, the Entropy global weight combined with the Log local weight, works well in practice with many data sets.  In other words, each entry <math>a_{ij}</math> of <math>A</math> is computed as:
 
 
 
<math>g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}</math>
 
 
 
<math>a_{ij} = g_i \ \log (\mathrm{tf}_{ij} + 1)</math>
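Continuing the sketch above, here is a hedged implementation of this log-entropy weighting (the function name <code>log_entropy_weight</code> is chosen here for illustration, not a standard API; <code>A</code> is the count matrix from the previous sketch):

<pre>
import numpy as np

def log_entropy_weight(tf):
    """Return (weighted matrix, global weights g), with a_ij = g_i * log(tf_ij + 1)."""
    m, n = tf.shape
    gf = tf.sum(axis=1, keepdims=True)                 # gf_i: total occurrences of term i
    p = tf / np.maximum(gf, 1)                         # p_ij = tf_ij / gf_i
    logp = np.log(p, out=np.zeros_like(p), where=p > 0)  # convention: 0 * log(0) = 0
    g = 1.0 + (p * logp).sum(axis=1, keepdims=True) / np.log(n)  # entropy global weight
    return g * np.log(tf + 1.0), g                     # log local weight times global weight

A_weighted, g = log_entropy_weight(A.astype(float))
</pre>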
 
 
 
=== Rank-Reduced Singular Value Decomposition ===
 
 
 
A rank-reduced Singular Value Decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text.  The SVD forms the foundation for LSI.<ref>Berry, Michael W., Dumais, Susan T., O'Brien, Gavin W., Using Linear Algebra for Intelligent Information Retrieval, December 1994, SIAM Review 37:4 (1995), pp. 573–595.</ref>  It computes the term and document vector spaces by transforming the single term-frequency matrix, '''A''', into three other matrices: a term-concept vector matrix, '''T''', a singular values matrix, '''S''', and a concept-document vector matrix, '''D''', which satisfy the following relations:
 
  
 
:::::::'''A''' = '''TSD<sup>T</sup>'''

:::::::<math>T^T T = D^T D = I_r</math> (in the full SVD, where '''T''' and '''D''' are square, <math>T T^T = I_m</math> and <math>D D^T = I_n</math> as well)

:::::::<math>S_{1,1} \ge S_{2,2} \ge \ldots \ge S_{r,r} > 0, \quad S_{i,j} = 0 \text{ where } i \ne j</math>

In the formula, '''A''' is the supplied ''m'' by ''n'' weighted matrix of term frequencies in a collection of text, where ''m'' is the number of unique terms and ''n'' is the number of documents.  '''T''' is a computed ''m'' by ''r'' matrix of term vectors, where ''r'' is the rank of '''A''', a measure of its unique dimensions <math>\le \min(m,n)</math>.  '''S''' is a computed ''r'' by ''r'' diagonal matrix of decreasing singular values, and '''D''' is a computed ''n'' by ''r'' matrix of document vectors.

The LSI modification to a standard SVD is to reduce the rank, truncating the singular value matrix '''S''' to size <math>k \ll r</math>, typically with ''k'' in the range of 100 to 300 dimensions, which effectively reduces the term and document vector matrix sizes to ''m'' by ''k'' and ''n'' by ''k'' respectively.  The SVD operation, along with this rank reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of '''A'''.  This reduced set of matrices is often denoted with a modified formula such as:

:::::::'''A''' ≈ '''A'''<sub>''k''</sub> = '''T'''<sub>''k''</sub> '''S'''<sub>''k''</sub> '''D'''<sub>''k''</sub><sup>T</sup>
 
Efficient LSI algorithms only compute the first '''''k''''' singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
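For example, SciPy's <code>svds</code> computes only the <math>k</math> largest singular triplets; the following sketch continues the running example (<math>k = 2</math> is a toy value for the tiny collection, far below the typical 100 to 300):

<pre>
import numpy as np
from scipy.sparse.linalg import svds   # truncated SVD; never forms the full decomposition

k = 2                                  # toy value for the tiny example collection
U, s, Vt = svds(A_weighted, k=k)       # only the k largest singular triplets
order = np.argsort(s)[::-1]            # svds returns singular values in ascending order
T_k = U[:, order]                      # m-by-k term vectors
S_k = np.diag(s[order])                # k-by-k diagonal of decreasing singular values
D_k = Vt[order, :].T                   # n-by-k document vectors

A_k = T_k @ S_k @ D_k.T                # rank-k approximation of the weighted matrix
</pre>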
 
== Querying and Augmenting LSI Vector Spaces ==
 
 
The computed '''T'''<sub>''k''</sub> and '''D'''<sub>''k''</sub> matrices define the term and document vector spaces, which, with the computed singular values, '''S'''<sub>''k''</sub>, embody the conceptual information derived from the document collection.  The similarity of terms or documents within these spaces is determined by how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.
 
 
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index.  By a simple transformation of the '''A = T S D<sup>T</sup>''' equation into the equivalent '''D = A<sup>T</sup> T S<sup>−1</sup>''' equation, a new vector, '''''d''''', for a query or for a new document can be created by computing a new column in '''A''' and then multiplying the new column by '''T S<sup>−1</sup>'''.  The new column in '''A''' is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
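A sketch of this query transformation and the usual cosine-similarity ranking, continuing the running example (<code>query_tf</code>, the query's raw term counts over the original vocabulary, is assumed given; <code>g</code>, <code>T_k</code>, <code>S_k</code>, and <code>D_k</code> come from the earlier sketches):

<pre>
import numpy as np

def fold_in(q, T_k, S_k):
    """Project a weighted m-vector into the k-dimensional space: d = q^T T_k S_k^-1."""
    return q @ T_k @ np.linalg.inv(S_k)   # S_k is diagonal, so the inverse is cheap

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Weight the query with the same local function and the *original* global weights g.
q = g.ravel() * np.log(query_tf + 1.0)    # query_tf: length-m raw counts (assumed given)
d_query = fold_in(q, T_k, S_k)

scores = [cosine(d_query, d_j) for d_j in D_k]   # conceptual similarity to each document
ranking = np.argsort(scores)[::-1]               # best matches first
</pre>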
 
 
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored.  These terms will have no impact on the global weights and learned correlations derived from the original collection of text.  However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
 
 
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called ''folding-in''.  Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added.  When the terms and concepts of a new set of documents need to be included in an LSI index, the term-document matrix and the SVD must either be recomputed or updated with an incremental method such as the one described by Brand.<ref name="brand2006">{{cite journal | url=http://www.merl.com/reports/docs/TR2006-059.pdf | format=PDF | title=Fast Low-Rank Modifications of the Thin Singular Value Decomposition | author=Matthew Brand | journal=Linear Algebra and Its Applications | volume=415 | pages=20–30 | year=2006 | doi=10.1016/j.laa.2005.07.021 }}</ref>
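A minimal sketch of folding-in a batch of new documents under these caveats (<code>new_tf</code> is a hypothetical ''m''-by-''p'' count matrix for the ''p'' new documents, restricted to the original vocabulary; out-of-vocabulary terms are simply dropped):

<pre>
import numpy as np

def fold_in_documents(new_tf, g, T_k, S_k):
    """Fold p new documents (m-by-p counts over the ORIGINAL vocabulary) into the index.
    Terms unseen during the original SVD are ignored, as described above."""
    new_cols = g * np.log(new_tf + 1.0)            # original global weights, same local weight
    return new_cols.T @ T_k @ np.linalg.inv(S_k)   # p-by-k new document vectors

# Augment the document space; global weights and learned correlations are NOT updated.
D_k = np.vstack([D_k, fold_in_documents(new_tf, g, T_k, S_k)])
</pre>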
 
  
 
== Relevant Papers ==
 