Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns: values close to 1 represent very similar documents, while values close to 0 represent very dissimilar documents. An information retrieval technique using latent semantic structure was patented in 1988 (US Patent 4,839,853, now expired) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).


Overview


Occurrence matrix

LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the term appears in each document, with rare terms upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
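As an illustration, a tf-idf-weighted term-document matrix can be built in a few lines. The following is a minimal sketch in Python using scikit-learn; the toy corpus and all parameter choices are hypothetical, and other toolkits would work equally well.

    from sklearn.feature_extraction.text import TfidfVectorizer

    # A toy corpus; any collection of documents would do.
    docs = [
        "the cat sat on the mat",
        "the dog chased the cat",
        "dogs and cats are pets",
    ]

    # scikit-learn builds a document-term matrix (rows = documents);
    # LSA as described here uses its transpose (rows = terms).
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)   # sparse, documents x terms
    term_doc = X.T                       # terms x documents
    print(term_doc.shape)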


Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There can be various reasons for these approximations:
* The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix is interpreted as an ''approximation'' (a "least and necessary evil").
* The original term-document matrix is presumed ''noisy'': for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a ''de-noisified matrix'' (a better matrix than the original).
* The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually ''in'' each document, whereas we might be interested in all words ''related to'' each document, generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term, for example {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}. This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out or, at worst, to be smaller than components in the directions corresponding to the intended sense.


Derivation

Let X be a matrix where element (i,j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this, where row i is the term vector \textbf{t}_i^T and column j is the document vector \textbf{d}_j:

: X = \begin{bmatrix} x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n} \end{bmatrix}

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

: \textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \end{bmatrix}

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

: \textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{i,j} \\ \vdots \\ x_{m,j} \end{bmatrix}

Now the dot product \textbf{t}_i^T \textbf{t}_p between two term vectors gives the correlation between the terms over the set of documents. The matrix product X X^T contains all these dot products. Element (i,p) (which is equal to element (p,i)) contains the dot product \textbf{t}_i^T \textbf{t}_p ( = \textbf{t}_p^T \textbf{t}_i). Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: \textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j.

Now, from the theory of linear algebra, there exists a decomposition of X such that U and V are orthogonal matrices and \Sigma is a diagonal matrix. This is called a singular value decomposition (SVD):

: X = U \Sigma V^T

The matrix products giving us the term and document correlations then become

: X X^T = (U \Sigma V^T) (U \Sigma V^T)^T = (U \Sigma V^T) (V \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T
: X^T X = (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma^T U^T) (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T

Since \Sigma \Sigma^T and \Sigma^T \Sigma are diagonal we see that U must contain the eigenvectors of X X^T, while V must contain the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of \Sigma \Sigma^T, or equally, by the non-zero entries of \Sigma^T \Sigma. Written out, the decomposition looks like this:

: X = U \Sigma V^T = \begin{bmatrix} \textbf{u}_1 & \dots & \textbf{u}_l \end{bmatrix} \begin{bmatrix} \sigma_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \sigma_l \end{bmatrix} \begin{bmatrix} \textbf{v}_1^T \\ \vdots \\ \textbf{v}_l^T \end{bmatrix}

The values \sigma_1, \dots, \sigma_l are called the singular values, and \textbf{u}_1, \dots, \textbf{u}_l and \textbf{v}_1, \dots, \textbf{v}_l the left and right singular vectors. Notice that the only part of U that contributes to \textbf{t}_i is the i-th row. Let this row vector be called \hat{\textbf{t}}_i^T. Likewise, the only part of V^T that contributes to \textbf{d}_j is the j-th column, \hat{\textbf{d}}_j. These are ''not'' the eigenvectors, but ''depend'' on ''all'' the eigenvectors.

It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank-k approximation to X with the smallest error in the Frobenius norm. But more importantly, we can now treat the term and document vectors as a "semantic space". The row "term" vector \hat{\textbf{t}}_i^T then has k entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts; they are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector \hat{\textbf{d}}_j is an approximation in this lower-dimensional space. We write this approximation as

: X_k = U_k \Sigma_k V_k^T

You can now do the following:
* See how related documents j and q are in the low-dimensional space by comparing the vectors \Sigma_k \hat{\textbf{d}}_j and \Sigma_k \hat{\textbf{d}}_q (typically by cosine similarity).
* Compare terms i and p by comparing the vectors \Sigma_k \hat{\textbf{t}}_i and \Sigma_k \hat{\textbf{t}}_p. Note that \hat{\textbf{t}} is now a column vector.
* Cluster documents and term vector representations using traditional clustering algorithms like k-means, using similarity measures like cosine.
* Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.

To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:

: \hat{\textbf{d}}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j

Note here that the inverse of the diagonal matrix \Sigma_k may be found by inverting each nonzero value within the matrix. This means that if you have a query vector \textbf{q}, you must do the translation \hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q} before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:

: \textbf{t}_i^T = \hat{\textbf{t}}_i^T \Sigma_k V_k^T
: \hat{\textbf{t}}_i^T = \textbf{t}_i^T V_k \Sigma_k^{-1}
: \hat{\textbf{t}}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i
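To make the derivation concrete, here is a minimal numerical sketch in Python with NumPy. The toy matrix and the choice k = 2 are made up purely for illustration; the fold-in step follows the \hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q} formula above.

    import numpy as np

    # Toy term-document matrix X (rows = terms, columns = documents);
    # the counts are made up purely for illustration.
    X = np.array([
        [2., 0., 1.],
        [0., 1., 0.],
        [1., 1., 0.],
        [0., 2., 3.],
    ])

    # Full SVD, then keep the k largest singular values/vectors.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    # Rank-k approximation X_k = U_k Sigma_k V_k^T (best in Frobenius norm).
    X_k = U_k @ S_k @ Vt_k

    # Low-dimensional document vectors Sigma_k d^_j: columns of Sigma_k V_k^T.
    doc_vecs = S_k @ Vt_k

    # Fold a query (a "mini document" of term counts) into the same space:
    # q^ = Sigma_k^{-1} U_k^T q, so the Sigma_k-scaled query is U_k^T q.
    q = np.array([1., 0., 1., 0.])
    q_low = U_k.T @ q

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    sims = [cosine(q_low, doc_vecs[:, j]) for j in range(doc_vecs.shape[1])]
    print(sims)   # similarity of the query to each document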


Applications

The new low-dimensional space typically can be used to:
* Compare the documents in the low-dimensional space (data clustering, document classification).
* Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
* Find relations between terms (synonymy and polysemy).
* Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
* Find the best similarity between small groups of terms, in a semantic way (i.e. in the context of a knowledge corpus), as for example in multiple-choice question (MCQ) answering models.
* Expand the feature space of machine learning / text mining systems.
* Analyze word association in text corpora.

Synonymy and polysemy are fundamental problems in natural language processing:
* Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
* Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.


Commercial applications

LSA has been used to assist in performing prior art searches for patents.


Applications in human memory

The use of latent semantic analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words will be recalled one after another in free recall tasks using study lists of random common nouns. Researchers have also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect. When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall. Another model, termed Word Association Spaces (WAS), is also used in memory studies; it collects free association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.


Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory. A fast, incremental, low-memory, large-matrix SVD algorithm has recently been developed; MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.
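For example, the gensim package discussed later in this article implements incremental, out-of-core LSI. A minimal usage sketch (with a made-up toy corpus) might look like the following; consult the gensim documentation for the authoritative API.

    from gensim import corpora, models

    # Hypothetical toy corpus; in practice documents would stream from disk.
    texts = [
        ["human", "computer", "interaction"],
        ["graph", "trees", "minors"],
        ["graph", "minors", "survey"],
    ]

    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]

    # Incremental, out-of-core LSI; num_topics is the retained rank k.
    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

    # Fold a new document into the existing space.
    print(lsi[dictionary.doc2bow(["human", "graph"])])

    # The model can also be updated incrementally with further documents.
    lsi.add_documents([dictionary.doc2bow(["trees", "survey"])])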


Limitations

Some of LSA's drawbacks include:
* The resulting dimensions might be difficult to interpret. For instance, in {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}, the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to {(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)} will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. Though, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man-made and, with high probability, contain logos/words on their surface; thus, in many ways these two concepts "share semantics". That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task as opposed to a simple word/class/concept assignment task.
* LSA can only partially capture polysemy (i.e., multiple meanings of a word) because each occurrence of a word is treated as having the same meaning due to the word being represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" are considered the same. This behavior results in the vector representation being an ''average'' of all the word's different meanings in the corpus, which can make it difficult for comparison. However, the effect is often lessened due to words having a predominant sense throughout a corpus (i.e. not all meanings are equally likely).
* Limitations of the bag-of-words model (BOW), where a text is represented as an unordered collection of words. To address some of the limitations of the bag-of-words model, a multi-gram dictionary can be used to find direct and indirect associations as well as higher-order co-occurrences among terms.
* The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.


Alternative methods


Semantic hashing

In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. A deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality-sensitive hashing, which is the fastest current method.
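The lookup step can be illustrated independently of how the binary codes are learned. In this sketch the document codes are random placeholders standing in for the output of a trained network; only the address arithmetic (enumerating addresses within a few bit flips) is meant to be informative.

    import itertools
    import numpy as np

    # Placeholder binary codes; a real system would learn these with a
    # deep autoencoder so that similar documents get similar codes.
    n_bits = 8
    rng = np.random.default_rng(0)
    doc_codes = {doc_id: rng.integers(0, 2, n_bits) for doc_id in range(20)}

    # Index: memory address (as an integer) -> documents stored there.
    index = {}
    for doc_id, code in doc_codes.items():
        addr = int("".join(map(str, code)), 2)
        index.setdefault(addr, []).append(doc_id)

    def neighbors(addr, radius=1):
        """All addresses within `radius` bit flips of `addr`."""
        yield addr
        for r in range(1, radius + 1):
            for bits in itertools.combinations(range(n_bits), r):
                flipped = addr
                for b in bits:
                    flipped ^= 1 << b
                yield flipped

    query_addr = int("".join(map(str, doc_codes[0])), 2)
    hits = [d for a in neighbors(query_addr) for d in index.get(a, [])]
    print(hits)   # documents whose codes differ from the query by <= 1 bit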


Latent semantic indexing

Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts (Deerwester, S., et al., Improving Information Retrieval with Latent Semantic Indexing, Proceedings of the 51st Annual Meeting of the American Society for Information Science 25, 1988, pp. 36–40).

LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents. Called "latent semantic indexing" because of its ability to correlate related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don't share a specific word or words with the search criteria.


Benefits of LSI

LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.

LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text (Landauer, T., et al., Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report, in M. I. Jordan, M. J. Kearns & S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, Cambridge: MIT Press, 1998, pp. 45–51). Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories. LSI uses ''example'' documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.

Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.

Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.

LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.

LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.). This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.

Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.

LSI has proven to be a useful solution to a number of conceptual matching problems. The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.


LSI timeline

*Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick)
*1988 – Seminal paper on the LSI technique published
*1989 – Original patent granted
*1992 – First use of LSI to assign articles to reviewers
*1994 – Patent granted for the cross-lingual application of LSI (Landauer et al.)
*1995 – First use of LSI for grading essays (Foltz et al., Landauer et al.)
*1999 – First implementation of LSI technology for the intelligence community for analyzing unstructured text (SAIC)
*2002 – LSI-based product offering to intelligence-based government agencies (SAIC)


Mathematics of LSI

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a singular value decomposition on the matrix, and using the resulting matrices to identify the concepts contained in the text.


Term-document matrix

LSI begins by constructing a term-document matrix, A, to identify the occurrences of the m unique terms within a collection of n documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, a_{ij}, initially representing the number of times the associated term appears in the indicated document, \mathrm{tf}_{ij}. This matrix is usually very large and very sparse.

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_{ij}, of A to be the product of a local term weight, l_{ij}, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents. Empirical studies with LSI report that the log (local) and entropy (global) weighting functions work well, in practice, with many data sets. With this log-entropy weighting, each entry a_{ij} of A is computed as:

: p_{ij} = \frac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}, \qquad g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}
: a_{ij} = g_i \, \log (\mathrm{tf}_{ij} + 1)

where \mathrm{gf}_i is the total number of times term i occurs in the whole collection.
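A direct translation of the log-entropy weighting into code, sketched with a made-up count matrix (the standard convention that p log p is taken as 0 when tf is 0 is assumed):

    import numpy as np

    # Raw term-frequency matrix (terms x documents); values are illustrative.
    tf = np.array([
        [2., 0., 1.],
        [0., 3., 0.],
        [1., 1., 1.],
    ])
    m, n = tf.shape

    gf = tf.sum(axis=1, keepdims=True)          # global frequency of each term
    p = np.where(tf > 0, tf / gf, 1.0)          # p_ij; dummy 1 where tf = 0
    # Entropy global weight: g_i = 1 + sum_j p_ij log p_ij / log n
    plogp = np.where(tf > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n)

    # Log local weight combined with the global weight: a_ij = g_i log(tf_ij + 1)
    A = g[:, None] * np.log(tf + 1.0)
    print(A)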


Rank-reduced singular value decomposition

A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix, A, through three other matrices: an ''m'' by ''r'' term-concept vector matrix T, an ''r'' by ''r'' singular values matrix S, and an ''n'' by ''r'' concept-document vector matrix D, which satisfy the following relations:

: A \approx T S D^T
: T^T T = I_r \quad D^T D = I_r
: S_{1,1} \geq S_{2,2} \geq \ldots \geq S_{r,r} > 0 \quad S_{i,j} = 0 \text{ where } i \neq j

In the formula, A is the supplied ''m'' by ''n'' weighted matrix of term frequencies in a collection of text where ''m'' is the number of unique terms, and ''n'' is the number of documents. T is a computed ''m'' by ''r'' matrix of term vectors where ''r'' is the rank of A (a measure of its unique dimensions, ≤ min(''m'',''n'')). S is a computed ''r'' by ''r'' diagonal matrix of decreasing singular values, and D is a computed ''n'' by ''r'' matrix of document vectors.

The SVD is then truncated to reduce the rank by keeping only the largest ''k'' ≪ ''r'' diagonal entries in the singular value matrix S, where ''k'' is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to ''m'' by ''k'' and ''n'' by ''k'' respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

: A ≈ A_k = T_k S_k D_k^T

Efficient LSI algorithms only compute the first ''k'' singular values and term and document vectors as opposed to computing a full SVD and then truncating it. Note that this rank reduction is essentially the same as doing principal component analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.
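Efficient truncated SVD of a sparse term-document matrix is available in standard libraries; for example, SciPy's svds computes only the ''k'' largest singular triplets rather than a full decomposition. The matrix below is illustrative only.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import svds

    # A (terms x documents), sparse; the values are made up for illustration.
    A = csr_matrix(np.array([
        [2., 0., 1., 0.],
        [0., 3., 0., 1.],
        [1., 1., 1., 0.],
        [0., 0., 2., 2.],
    ]))

    # Compute only the k largest singular triplets instead of a full SVD.
    k = 2
    T, s, Dt = svds(A, k=k)          # T: m x k, s: k, Dt: k x n
    # svds returns singular values in ascending order; flip to descending.
    idx = np.argsort(s)[::-1]
    T, s, Dt = T[:, idx], s[idx], Dt[idx, :]

    A_k = T @ np.diag(s) @ Dt        # rank-k approximation A_k = T_k S_k D_k^T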


Querying and augmenting LSI vector spaces

The computed T_k and D_k matrices define the term and document vector spaces, which with the computed singular values, S_k, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other, typically computed as a function of the angle between the corresponding vectors.

The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.

A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.

The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called ''folding in''. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed or an incremental update method is needed.
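The fold-in transformation d = A^T T S^{-1} applied to a single new column is a one-liner. In this sketch the factor matrices are hypothetical stand-ins for a previously computed truncated SVD (see the SciPy example above).

    import numpy as np

    def fold_in(new_doc_weights, T_k, s_k):
        """Project a weighted term vector into the LSI document space:
        d = q^T T_k S_k^{-1}."""
        return new_doc_weights @ T_k @ np.diag(1.0 / s_k)

    # Hypothetical factors from a prior truncated SVD (m = 3 terms, k = 2).
    T_k = np.array([[0.6, 0.2], [0.1, 0.9], [0.7, -0.3]])
    s_k = np.array([3.1, 1.4])          # top-k singular values

    # New document as a weighted term vector, built with the ORIGINAL
    # global weights; unseen terms are necessarily ignored.
    q = np.array([1.0, 0.0, 2.0])

    d_new = fold_in(q, T_k, s_k)        # k-dimensional document vector
    print(d_new)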


Additional uses of LSI

It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization. Below are some other ways in which LSI is being used:

* Information discovery (eDiscovery, Government/Intelligence community, Publishing)
* Automated document classification (eDiscovery, Government/Intelligence community, Publishing)
* Text summarization (eDiscovery, Publishing)
* Relationship discovery (Government, Intelligence community, Social Networking)
* Automatic generation of link charts of individuals and organizations (Government, Intelligence community)
* Matching technical papers and grants with reviewers (Government)
* Online customer support (Customer Management)
* Determining document authorship (Education)
* Automatic keyword annotation of images
* Understanding software source code (Software Engineering)
* Filtering spam (System Administration)
* Information visualization (Landauer, T., Laham, D., and Derr, M., From Paragraph to Graph: Latent Semantic Analysis for Information Visualization, Proceedings of the National Academy of Sciences, 101, 2004, pp. 5214–5219)
* Essay scoring (Education)
* Literature-based discovery
* Stock returns prediction
* Dream content analysis (Psychology)

LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.


Challenges to LSI

Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.

Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enables more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents). However, recent studies indicate that 50–1000 dimensions are suitable depending on the size and nature of the document collection (Landauer, Thomas K., and Dumais, Susan T., Latent Semantic Analysis, Scholarpedia, 3(11):4356, 2008). Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality (Landauer, T. K., Foltz, P. W., & Laham, D. (1998), Introduction to Latent Semantic Analysis, Discourse Processes, 25, 259–284).
When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
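One possible realization of this supervised approach, sketched with scikit-learn and a made-up labeled corpus: treat the number of retained dimensions ''k'' as a hyperparameter and keep the value with the best cross-validated score.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Made-up labeled corpus; any supervised task over LSI features would do.
    docs = [
        "cats purr and meow", "kittens purr softly", "felines meow loudly",
        "dogs bark and growl", "puppies bark loudly", "hounds growl fiercely",
    ]
    labels = [0, 0, 0, 1, 1, 1]

    # Treat the retained rank k as a hyperparameter; keep the best CV score.
    for k in (2, 3):
        model = make_pipeline(
            TfidfVectorizer(),
            TruncatedSVD(n_components=k),   # LSI topics as features
            LogisticRegression(),
        )
        print(k, cross_val_score(model, docs, labels, cv=3).mean())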


See also

* Coh-Metrix
* Compound term processing
* Distributional semantics
* Explicit semantic analysis
* Latent semantic mapping
* Latent semantic structure indexing
* Principal components analysis
* Probabilistic latent semantic analysis
* Spamdexing
* Word vector
* Topic model
** Latent Dirichlet allocation




Further reading

* Original article where the model was first exposed.
* Illustration of the application of LSA to document retrieval (PDF).


External links


Articles on LSA


* Latent Semantic Analysis, a Scholarpedia article on LSA written by Tom Landauer, one of the creators of LSA.


Talks and demonstrations


* LSA Overview: a talk describing LSA, its applications in information retrieval, and its connections to probabilistic latent semantic analysis.
* Complete LSA sample code in C# for Windows: the demo code includes enumeration of text files, filtering stop words, stemming, making a document-term matrix and SVD.


Implementations

Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.

* Sense Clusters, an Information Retrieval-oriented Perl implementation of LSA
* S-Space Package, a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA
* Semantic Vectors, which applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices
* Infomap Project, an NLP-oriented C implementation of LSA (superseded by the Semantic Vectors project)
* Text to Matrix Generator, a MATLAB toolbox for generating term-document matrices from text collections, with support for LSA
* Gensim, which contains a Python implementation of LSA for matrices larger than RAM