Overview
Occurrence matrix
LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the term appears in the corresponding document, and rare terms are upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
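As a concrete illustration, the following minimal sketch builds a tf-idf weighted term-document matrix with scikit-learn (an assumed dependency; the toy corpus is purely illustrative):

```python
# A minimal sketch: construct a tf-idf weighted term-document matrix.
# Assumes scikit-learn is installed; the corpus below is hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs can be pets",
]

vectorizer = TfidfVectorizer()
# TfidfVectorizer yields a sparse document-term matrix (rows = documents);
# transpose it to obtain the term-document orientation described above.
X = vectorizer.fit_transform(docs).T          # shape: (n_terms, n_documents)

print(X.shape)
print(vectorizer.get_feature_names_out())
```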
Rank lowering
After constructing the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There can be various reasons for this approximation:
* The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix is interpreted as an ''approximation'' (a "least and necessary evil").
* The original term-document matrix is presumed ''noisy'': for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a ''de-noisified matrix'' (a better matrix than the original).
* The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually ''in'' each document, whereas we might be interested in all words ''related to'' each document, generally a much larger set due to synonymy.
The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
:: {(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}
This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem of polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out or, at worst, be smaller than components in the directions corresponding to the intended sense.
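The rank lowering is typically carried out with a truncated singular value decomposition. A minimal sketch, assuming NumPy and a small hypothetical count matrix:

```python
# A minimal sketch of rank lowering via a truncated SVD (NumPy assumed).
import numpy as np

X = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 2, 3],
    [1, 0, 2],
], dtype=float)                # rows = terms, columns = documents (toy data)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2                          # keep only the k largest singular values
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximation of X

print(np.round(X_k, 2))
```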
Derivation
Let X be a matrix where element (i, j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:

X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \cdots & x_{m,n} \end{bmatrix}

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

\mathbf{t}_i^{\mathsf T} = \begin{bmatrix} x_{i,1} & \cdots & x_{i,n} \end{bmatrix}

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

\mathbf{d}_j = \begin{bmatrix} x_{1,j} & \cdots & x_{m,j} \end{bmatrix}^{\mathsf T}

Now the dot product \mathbf{t}_i^{\mathsf T} \mathbf{t}_p between two term vectors gives the correlation between the terms over the set of documents.
Applications
The new low-dimensional space typically can be used to:
* Compare the documents in the low-dimensional space (data clustering, document classification).
* Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
* Find relations between terms (synonymy and polysemy).
* Given a query of terms, translate it into the low-dimensional space and find matching documents (information retrieval); a sketch of this query mapping follows below.
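A minimal sketch of the last item, assuming NumPy and a hypothetical toy term-document matrix: the query's raw term counts are projected into the k-dimensional space and compared with the documents by cosine similarity.

```python
# A minimal, self-contained sketch: map a query's term counts into the
# low-dimensional LSA space and rank documents by cosine similarity.
# NumPy is assumed; the matrix and query are hypothetical toy data.
import numpy as np

X = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 2, 3],
    [1, 0, 2],
], dtype=float)                               # rows = terms, columns = documents

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

q = np.array([1, 0, 0, 1], dtype=float)       # raw term counts of the query

# Project the query into the concept space: q_hat = S_k^{-1} U_k^T q
q_hat = np.linalg.inv(S_k) @ U_k.T @ q

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Document coordinates in the concept space are the columns of Vt_k.
scores = [cosine(q_hat, Vt_k[:, j]) for j in range(Vt_k.shape[1])]
print(np.argsort(scores)[::-1])               # document indices, best match first
```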
Commercial applications
LSA has been used to assist in performing prior art searches for patents.
Applications in human memory
The use of latent semantic analysis has been prevalent in the study of human memory, especially in the areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words will be recalled one after another in free recall tasks using study lists of random common nouns. Researchers have also noted that in these situations the inter-response time between similar words is much shorter than between dissimilar words. These findings are referred to as the Semantic Proximity Effect. When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall. Another model, termed Word Association Spaces (WAS), is also used in memory studies; it is built by collecting free association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.
Implementation
The SVD is typically computed using large-matrix methods (for example, Lanczos methods), but it may also be computed incrementally and with greatly reduced resources via a neural-network-like approach, which does not require the large, full-rank matrix to be held in memory. A fast, incremental, low-memory, large-matrix SVD algorithm has been developed. MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition, it is possible to speed up the SVD computation while providing comparable prediction quality.
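For illustration, a Lanczos-style truncated SVD of a sparse term-document matrix can be obtained with SciPy's ARPACK wrapper (an assumed dependency; the random sparse matrix stands in for real data):

```python
# A minimal sketch: compute only the top-k singular triplets of a sparse
# term-document matrix with an ARPACK/Lanczos-style solver (SciPy assumed).
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

X = sparse_random(10000, 2000, density=0.001, format="csr", random_state=0)

k = 100
U, s, Vt = svds(X, k=k)        # never forms a dense or full-rank factorization

# svds returns singular values in ascending order; reorder to descending.
order = np.argsort(s)[::-1]
U, s, Vt = U[:, order], s[order], Vt[order, :]
print(s[:5])
```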
Limitations
Some of LSA's drawbacks include:
* The resulting dimensions might be difficult to interpret. For instance, in
:: {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
: the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
:: {(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)}
: will occur. This leads to results which can be justified on the mathematical level but have no immediately obvious meaning in natural language. Still, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man-made, and with high probability contain logos/words on their surface; thus, in many ways these two concepts "share semantics." That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task as opposed to a simple word/class/concept assignment task.
* LSA can only partially capture polysemy (i.e., multiple meanings of a word) because each occurrence of a word is treated as having the same meaning, since the word is represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" are considered the same. The result is that the vector representation is an ''average'' of all the word's different meanings in the corpus, which can make comparisons difficult. However, the effect is often lessened because words tend to have a predominant sense throughout a corpus (i.e., not all meanings are equally likely).
* Limitations of the bag-of-words model (BOW), in which a text is represented as an unordered collection of words. To address some of the limitations of the bag-of-words model, a multi-gram dictionary can be used to find direct and indirect associations as well as higher-order co-occurrences among terms.
* The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.
Alternative methods
Semantic hashing
In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality-sensitive hashing, which is the fastest current method.
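The address-lookup step can be illustrated with a short sketch: given binary codes (which would come from a trained network, not shown here), retrieve every stored document whose code lies within a small Hamming radius of the query's code. All names and codes below are hypothetical.

```python
# A minimal sketch of the lookup idea behind semantic hashing: enumerate all
# codes within a small Hamming radius of the query code and collect the
# documents stored at those addresses. Codes here are hypothetical 16-bit ints.
from itertools import combinations

def codes_within_radius(code: int, n_bits: int, radius: int):
    """Yield every n_bits-wide code whose Hamming distance from code is <= radius."""
    yield code
    for r in range(1, radius + 1):
        for bits in combinations(range(n_bits), r):
            flipped = code
            for b in bits:
                flipped ^= 1 << b          # flip bit b
            yield flipped

# Hypothetical index: binary address -> documents stored at that address.
index = {
    0b1010101010101010: ["doc_a"],
    0b1010101010101000: ["doc_b"],
}

query_code = 0b1010101010101010
hits = []
for candidate in codes_within_radius(query_code, n_bits=16, radius=2):
    hits.extend(index.get(candidate, []))
print(hits)                                # ['doc_a', 'doc_b']
```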
Latent semantic indexing
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text.
Benefits of LSI
LSI helps overcome synonymy, one of the most problematic constraints of Boolean keyword queries and vector space models, by increasing recall. Synonymy is often the cause of mismatches between the vocabulary used by the authors of documents and the vocabulary used by the users of information retrieval systems.
LSI timeline
* Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick)
* 1988 – Seminal paper on LSI technique published
* 1989 – Original patent granted
* 1992 – First use of LSI to assign articles to reviewers
* 1994 – Patent granted for the cross-lingual application of LSI (Landauer et al.)
* 1995 – First use of LSI for grading essays (Foltz et al., Landauer et al.)
* 1999 – First implementation of LSI technology for the intelligence community for analyzing unstructured text (SAIC)
* 2002 – LSI-based product offering to intelligence-based government agencies (SAIC)
Mathematics of LSI
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a singular value decomposition on the matrix, and using the matrix to identify the concepts contained in the text.
Term-document matrix
LSI begins by constructing a term-document matrix, A, to identify the occurrences of the ''m'' unique terms within a collection of ''n'' documents. In a term-document matrix, each term is represented by a row and each document is represented by a column, with each matrix cell, a_{ij}, initially representing the number of times the associated term appears in the indicated document, tf_{ij}. This matrix is usually very large and very sparse.

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_{ij}, of A into the product of a local term weight, l_{ij}, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents. Common local weighting functions include binary, term frequency, and log weighting; common global weighting functions include normal, GfIdf, Idf (inverse document frequency), and entropy weighting. Empirical studies with LSI report that the log and entropy weighting functions work well, in practice, with many data sets. In other words, each entry of A is computed as:

a_{ij} = g_i \, l_{ij}
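As an illustration, the widely used log-entropy weighting can be computed as follows (a minimal sketch assuming NumPy; the count matrix is hypothetical toy data):

```python
# A minimal sketch of log-entropy weighting of a raw count matrix (NumPy assumed).
import numpy as np

tf = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 2, 3],
], dtype=float)                       # rows = terms, columns = documents (toy data)
n_docs = tf.shape[1]

# Local weight: dampened term frequency, l_ij = log(1 + tf_ij).
local = np.log1p(tf)

# Global weight: g_i = 1 + sum_j p_ij * log(p_ij) / log(n), with p_ij = tf_ij / gf_i,
# where gf_i is the total count of term i over the whole collection.
gf = tf.sum(axis=1, keepdims=True)
p = tf / gf                           # every term occurs at least once in this toy data
plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
entropy = 1.0 + plogp.sum(axis=1) / np.log(n_docs)

A = local * entropy[:, np.newaxis]    # the weighted term-document matrix
print(np.round(A, 3))
```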
Rank-reduced singular value decomposition
A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix, A, with three other matrices: an ''m'' by ''r'' term-concept vector matrix T, an ''r'' by ''r'' singular values matrix S, and an ''n'' by ''r'' concept-document vector matrix D, which satisfy the following relations:

A ≈ T S D^T
T^T T = I_r and D^T D = I_r
S_{1,1} ≥ S_{2,2} ≥ … ≥ S_{r,r} > 0, with S_{i,j} = 0 for i ≠ j

In the formula, A is the supplied ''m'' by ''n'' weighted matrix of term frequencies in a collection of text, where ''m'' is the number of unique terms and ''n'' is the number of documents. T is a computed ''m'' by ''r'' matrix of term vectors, where ''r'' is the rank of A (a measure of its unique dimensions, at most min(''m'', ''n'')). S is a computed ''r'' by ''r'' diagonal matrix of decreasing singular values, and D is a computed ''n'' by ''r'' matrix of document vectors.

The SVD is then truncated to reduce the rank by keeping only the largest ''k'' ≪ ''r'' diagonal entries in the singular value matrix S, where ''k'' is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to ''m'' by ''k'' and ''n'' by ''k'' respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

A ≈ A_k = T_k S_k D_k^T

Efficient LSI algorithms compute only the first ''k'' singular values and the corresponding term and document vectors, as opposed to computing a full SVD and then truncating it.

Note that this rank reduction is essentially the same as performing principal component analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.
Querying and augmenting LSI vector spaces
The computed T_k and D_k matrices define the term and document vector spaces, which, together with the computed singular values S_k, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a function of how close they are to each other, typically computed as a function of the angle between the corresponding vectors.

The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, ''d'', for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.

A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.

The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called ''folding in''. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method is needed.
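A minimal sketch of folding in, assuming NumPy and a hypothetical toy weighted matrix: the new document's weighted term vector is mapped into the existing k-dimensional space as d_new = a_new^T T_k S_k^{-1}.

```python
# A minimal sketch of folding a new document into an existing LSI space
# via d_new = a_new^T T_k S_k^{-1}. NumPy assumed; all data is hypothetical.
import numpy as np

A = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 2, 3],
    [1, 0, 2],
], dtype=float)                        # existing weighted term-document matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
T_k, S_k = U[:, :k], np.diag(s[:k])    # term vectors and singular values

# New document expressed over the *original* vocabulary (same weighting scheme);
# any term unseen during the original SVD has no row here and is simply ignored.
a_new = np.array([1, 2, 0, 1], dtype=float)

d_new = a_new @ T_k @ np.linalg.inv(S_k)   # coordinates in the k-dimensional space
print(np.round(d_new, 3))
```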
Additional uses of LSI
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization. Below are some other ways in which LSI is being used:
* Information discovery (eDiscovery, Government/Intelligence community, Publishing)
* Automated document classification (eDiscovery, Government/Intelligence community, Publishing)
* Text summarization (eDiscovery, Publishing)
* Relationship discovery (Government, Intelligence community, Social Networking)
* Automatic generation of link charts of individuals and organizations (Government, Intelligence community)
* Matching technical papers and grants with reviewers (Government)
* Online customer support (Customer Management)
* Determining document authorship (Education)
* Automatic keyword annotation of images
* Understanding software source code (Software Engineering)
* Filtering
Challenges to LSI
Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package; see the sketch at the end of this section.

Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enables more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents), and perhaps 400 dimensions for larger document collections (millions of documents). However, recent studies indicate that 50–1000 dimensions are suitable depending on the size and nature of the document collection (Landauer, Thomas K., and Dumais, Susan T., Latent Semantic Analysis, Scholarpedia, 3(11):4356, 2008). Checking the proportion of variance retained, as in PCA or factor analysis, is not a suitable way to determine the optimal dimensionality for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality (Landauer, T. K., Foltz, P. W., & Laham, D., 1998).
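As an illustration of the scalable, online approach mentioned above, a minimal gensim sketch might look like the following (the toy corpus and the choice of num_topics are purely illustrative):

```python
# A minimal sketch of a scalable LSI pipeline with gensim (assumed installed);
# the toy corpus and num_topics value are purely illustrative.
from gensim import corpora, models

texts = [
    ["human", "computer", "interaction"],
    ["survey", "user", "computer", "system", "response", "time"],
    ["graph", "minors", "trees", "survey"],
]

dictionary = corpora.Dictionary(texts)                 # term <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # sparse bag-of-words vectors

lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# Project an unseen document into the LSI space.
new_doc = dictionary.doc2bow(["user", "computer", "survey"])
print(lsi[new_doc])

# Online training: fold additional documents into the model incrementally.
lsi.add_documents([dictionary.doc2bow(["graph", "trees"])])
```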
See also
* Coh-Metrix
* Compound term processing
* Distributional semantics
References
Further reading
* Original article where the model was first presented.
External links
Articles on LSA
Talks and demonstrations
Implementations
Due to its cross-domain applications in information retrieval, natural language processing, cognitive science, and computational linguistics, LSA has been implemented in a variety of software packages and programming languages.