In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.

Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH), or data-dependent methods, such as locality-preserving hashing (LPH). Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.


Definitions

A finite family \mathcal F of functions h\colon M \to S is defined to be an ''LSH family'' for
* a metric space \mathcal M = (M, d),
* a threshold r > 0,
* an approximation factor c > 1,
* and probabilities p_1 > p_2,
if it satisfies the following condition. For any two points a, b \in M and a hash function h chosen uniformly at random from \mathcal F:
* If d(a,b) \le r, then h(a) = h(b) (i.e., a and b collide) with probability at least p_1.
* If d(a,b) \ge cr, then h(a) = h(b) with probability at most p_2.
Such a family \mathcal F is called (r, cr, p_1, p_2)-sensitive.
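For example, for the Hamming distance over d-dimensional bit vectors, the bit-sampling family described under Methods below is (R, cR, 1 - R/d, 1 - cR/d)-sensitive; with d = 100, R = 10 and c = 2 it is (10, 20, 0.9, 0.8)-sensitive.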


LSH with respect to a similarity measure

Alternatively, it is possible to define an LSH family on a universe of items U endowed with a similarity function \phi\colon U \times U \to [0,1]. In this setting, an LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function h \in H chosen according to D satisfies \Pr[h(a) = h(b)] = \phi(a,b) for each a, b \in U.


Amplification

Given a (d_1, d_2, p_1, p_2)-sensitive family \mathcal F, we can construct new families \mathcal G by either the AND-construction or the OR-construction of \mathcal F.

To create an AND-construction, we define a new family \mathcal G of hash functions g, where each function g is constructed from k random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if h_i(x) = h_i(y) for all i = 1, 2, \ldots, k. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, p_1^k, p_2^k)-sensitive family.

To create an OR-construction, we define a new family \mathcal G of hash functions g, where each function g is constructed from k random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if h_i(x) = h_i(y) for one or more values of i. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, 1 - (1 - p_1)^k, 1 - (1 - p_2)^k)-sensitive family.
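As a minimal illustrative sketch of both constructions (the base family sampler `sample_h` is an assumption; here the bit-sampling family for Hamming distance, described under Methods below, stands in for \mathcal F):

<syntaxhighlight lang="python">
import random

d = 16  # dimension of the toy space {0,1}^d

def sample_h():
    """Assumed base family F: bit sampling (see Methods below)."""
    i = random.randrange(d)
    return lambda x: x[i]

def and_construction(k):
    """AND: g(x) = (h_1(x), ..., h_k(x)); x and y collide iff all k
    functions agree, so a collision probability p becomes p**k."""
    hs = [sample_h() for _ in range(k)]
    return lambda x: tuple(h(x) for h in hs)

def or_collides(k, x, y):
    """OR: x and y collide iff at least one of the k functions agrees,
    so a collision probability p becomes 1 - (1 - p)**k."""
    hs = [sample_h() for _ in range(k)]
    return any(h(x) == h(y) for h in hs)
</syntaxhighlight>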


Applications

LSH has been applied to several problem domains, including:
* Near-duplicate detection
* Hierarchical clustering
* Genome-wide association study
* Image similarity identification
** VisualRank
* Gene expression similarity identification
* Audio similarity identification
* Nearest neighbor search
* Audio fingerprinting
* Digital video fingerprinting
* Shared memory organization in parallel computing
* Physical data organization in database management systems
* Training fully connected neural networks
* Computer security
* Machine learning


Methods


Bit sampling for Hamming distance

One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over d-dimensional vectors \{0,1\}^d. Here, the family \mathcal F of hash functions is simply the family of all the projections of points onto one of the d coordinates, i.e., \mathcal F = \{h \colon h(x) = x_i \text{ for some } i \in \{1, \ldots, d\}\}, where x_i is the ith coordinate of x. A random function h from \mathcal F simply selects a random bit from the input point. This family has the following parameters: P_1 = 1 - R/d, P_2 = 1 - cR/d. That is, any two vectors x, y with Hamming distance at most R collide under a random h with probability at least P_1, and any x, y with Hamming distance at least cR collide with probability at most P_2.
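A minimal sketch of this family over tuple-encoded bit vectors (the helper names are illustrative, not from the original description):

<syntaxhighlight lang="python">
import random

def bit_sampling_family(d):
    """The family F of coordinate projections on {0,1}^d: a random h
    returns one uniformly chosen coordinate of its input."""
    def sample_h():
        i = random.randrange(d)
        return lambda x: x[i]
    return sample_h

# Vectors at Hamming distance R collide with probability exactly 1 - R/d.
sample_h = bit_sampling_family(8)
x = (0, 0, 0, 0, 0, 0, 0, 0)
y = (1, 1, 0, 0, 0, 0, 0, 0)  # R = 2, so collision probability 0.75
agree = sum((h := sample_h())(x) == h(y) for _ in range(100000))
print(agree / 100000)  # ~0.75
</syntaxhighlight>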


Min-wise independent permutations

Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If \pi is a permutation on the indices of S, for A \subseteq S let h(A) = \min_{a \in A} \{ \pi(a) \}. Each possible choice of \pi defines a single hash function h mapping input sets to elements of S.

Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets A, B \subseteq S, the event that h(A) = h(B) corresponds exactly to the event that the minimizer of \pi over A \cup B lies inside A \cap B. As \pi was chosen uniformly at random, \Pr[h(A) = h(B)] = J(A,B) and (H, D) defines an LSH scheme for the Jaccard index.

Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen \pi. It has been established that a min-wise independent family of permutations is at least of size \operatorname{lcm}\{1, 2, \ldots, n\} \ge e^{n - o(n)}, and that this bound is tight.

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k. Approximate min-wise independence differs from the property by at most a fixed \varepsilon.
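A minimal sketch using explicit random permutations (practical implementations substitute the approximate families discussed above; the helper names are illustrative):

<syntaxhighlight lang="python">
import random

def sample_minhash(ground_set):
    """Draw a uniformly random permutation pi of the ground set S and
    return h(A) = min over a in A of pi(a)."""
    items = list(ground_set)
    ranks = list(range(len(items)))
    random.shuffle(ranks)
    pi = dict(zip(items, ranks))
    return lambda A: min(pi[a] for a in A)

def estimate_jaccard(A, B, ground_set, m=1000):
    """Fraction of m independent draws with h(A) == h(B); converges to
    the Jaccard index J(A, B)."""
    hs = [sample_minhash(ground_set) for _ in range(m)]
    return sum(h(A) == h(B) for h in hs) / m

S = range(100)
A, B = set(range(0, 60)), set(range(30, 90))
print(estimate_jaccard(A, B, S))  # ~ J(A, B) = 30/90 = 0.33
</syntaxhighlight>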


Open source methods


Nilsimsa Hash

Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts. The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:
# The digest identifying each message should not vary significantly for changes that can be produced automatically.
# The encoding must be robust against intentional attacks.
# The encoding should support an extremely low risk of false positives.
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.


TLSH

TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar. An implementation of TLSH is available as open-source software.
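As a brief usage sketch, assuming the open-source py-tlsh Python binding of that implementation (exact scores may vary by library version):

<syntaxhighlight lang="python">
import tlsh  # the py-tlsh binding (pip install py-tlsh)

# TLSH requires inputs of at least ~50 bytes with sufficient variation.
data1 = b"The quick brown fox jumps over the lazy dog. " * 8
data2 = data1.replace(b"jumps", b"leaps")

h1 = tlsh.hash(data1)
h2 = tlsh.hash(data2)

# Lower distance scores indicate more similar inputs (0 = near-identical).
print(h1, h2, tlsh.diff(h1, h2))
</syntaxhighlight>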


Random projection

The random projection method of LSH due to Moses Charikar, called SimHash (also sometimes called arccos), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.

The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors. Given an input vector v and a hyperplane defined by r, we let h(v) = \sgn(v \cdot r). That is, h(v) = \pm 1 depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function h(v).

For two vectors u, v with angle \theta(u,v) between them, it can be shown that

:\Pr[h(u) = h(v)] = 1 - \frac{\theta(u,v)}{\pi}.

Since the ratio between \frac{\theta(u,v)}{\pi} and 1 - \cos(\theta(u,v)) is at least 0.439 when \theta(u,v) \in [0, \pi], the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the cosine distance between them.
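A minimal sketch of this scheme (illustrative helper names only):

<syntaxhighlight lang="python">
import random

def sample_simhash(d):
    """Draw a random hyperplane normal r (i.i.d. Gaussian entries) and
    return h(v) = sign(v . r) in {-1, +1}."""
    r = [random.gauss(0.0, 1.0) for _ in range(d)]
    def h(v):
        return 1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1
    return h

# The fraction of agreeing hash functions estimates 1 - theta(u, v)/pi.
u, v = [1.0, 0.0], [1.0, 1.0]  # angle pi/4, so agreement ~ 1 - 1/4
hs = [sample_simhash(2) for _ in range(100000)]
print(sum(h(u) == h(v) for h in hs) / len(hs))  # ~0.75
</syntaxhighlight>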


Stable distributions

The hash function h_{\mathbf{a},b}(\boldsymbol{\upsilon}) \colon \mathcal{R}^d \to \mathcal{Z} maps a d-dimensional vector \boldsymbol{\upsilon} onto the set of integers. Each hash function in the family is indexed by a choice of random \mathbf{a} and b, where \mathbf{a} is a d-dimensional vector with entries chosen independently from a stable distribution and b is a real number chosen uniformly from the range [0, r]. For a fixed \mathbf{a}, b the hash function h_{\mathbf{a},b} is given by

:h_{\mathbf{a},b}(\boldsymbol{\upsilon}) = \left\lfloor \frac{\mathbf{a} \cdot \boldsymbol{\upsilon} + b}{r} \right\rfloor.

Other construction methods for hash functions have been proposed to better fit the data. In particular, k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
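A minimal sketch for the Euclidean case (p = 2, using the Gaussian distribution, which is 2-stable; names are illustrative):

<syntaxhighlight lang="python">
import math
import random

def sample_pstable_hash(d, r):
    """h(v) = floor((a . v + b) / r), with the entries of a drawn i.i.d.
    from a 2-stable (Gaussian) distribution and b uniform on [0, r]."""
    a = [random.gauss(0.0, 1.0) for _ in range(d)]
    b = random.uniform(0.0, r)
    def h(v):
        return math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)
    return h
</syntaxhighlight>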


Semantic hashing

Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity. The hashcodes are found via training of an artificial neural network or graphical model.


Algorithm for nearest neighbor search

One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family \mathcal F. The algorithm has two main parameters: the width parameter k and the number of hash tables L.

In the first step, we define a new family \mathcal G of hash functions g, where each function g is obtained by concatenating k functions h_1, \ldots, h_k from \mathcal F, i.e., g(p) = [h_1(p), \ldots, h_k(p)]. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from \mathcal F. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.

In the preprocessing step we hash all n d-dimensional points from the data set into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to O(n) using standard hash functions.

Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found.

Given the parameters k and L, the algorithm has the following performance guarantees:
* preprocessing time: O(nLkt), where t is the time to evaluate a function h \in \mathcal F on an input point;
* space: O(nL), plus the space for storing data points;
* query time: O(L(kt + dnP_2^k));
* the algorithm succeeds in finding a point within distance cR from q (if there exists a point within distance R) with probability at least 1 - (1 - P_1^k)^L.

For a fixed approximation ratio c = 1 + \epsilon and probabilities P_1 and P_2, one can set k = \left\lceil \tfrac{\log n}{\log 1/P_2} \right\rceil and L = \lceil P_1^{-k} \rceil = O(n^{\rho} P_1^{-1}), where \rho = \tfrac{\log 1/P_1}{\log 1/P_2}. Then one obtains the following performance guarantees:
* preprocessing time: O(n^{1+\rho} P_1^{-1} kt);
* space: O(n^{1+\rho} P_1^{-1}), plus the space for storing data points;
* query time: O(n^{\rho} P_1^{-1} (kt + d)).
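A minimal, illustrative sketch of this k-and-L index (not a reference implementation; `sample_h` is any sampler for a base family \mathcal F, e.g. the bit-sampling sketch above, and `dist` is the corresponding metric, both assumed helpers):

<syntaxhighlight lang="python">
from collections import defaultdict

class LSHIndex:
    def __init__(self, sample_h, k, L):
        # L hash functions g, each the concatenation of k draws from F.
        self.gs = [[sample_h() for _ in range(k)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, g, p):
        return tuple(h(p) for h in g)  # g(p) = [h_1(p), ..., h_k(p)]

    def insert(self, p):
        # Preprocessing: hash every data point into each of the L tables.
        for g, table in zip(self.gs, self.tables):
            table[self._key(g, p)].append(p)

    def query(self, q, cR, dist):
        # Scan q's bucket in each table; stop at a point within cR of q.
        for g, table in zip(self.gs, self.tables):
            for p in table.get(self._key(g, q), []):
                if dist(p, q) <= cR:
                    return p
        return None

# Example usage (hypothetical helpers from the earlier sketches):
# idx = LSHIndex(bit_sampling_family(8), k=4, L=8)
# idx.insert((0, 1, 0, 1, 0, 1, 0, 1))
# idx.query(q, cR=2, dist=hamming)
</syntaxhighlight>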


Finding the nearest neighbor without a fixed radius

To generalize the above algorithm without the radius R being fixed, we can take the algorithm and do a sort of binary search over R. It has been shown that there is a data structure for the approximate nearest neighbor with the following performance guarantees:
* space: O(n^{1+\rho} P_1^{-1} d \log^2 n);
* query time: O(n^{\rho} P_1^{-1} (kt + d) \log n);
* the algorithm succeeds in finding the nearest neighbor with probability at least 1 - (1 - P_1^k)^L \log n.
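A hedged sketch of this variant, reusing the LSHIndex above: fixed-radius structures are built for geometrically increasing radii, and the smallest radius that yields a hit is reported (a simple scan stands in for the binary search over R described in the text):

<syntaxhighlight lang="python">
def nearest_neighbor(indices_by_radius, q, c, dist):
    """indices_by_radius: (R, LSHIndex) pairs sorted by increasing R."""
    for R, index in indices_by_radius:
        p = index.query(q, c * R, dist)
        if p is not None:
            return p  # a c-approximate nearest neighbor of q
    return None
</syntaxhighlight>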


Improvements

When t is large, it is possible to reduce the hashing time from O(n^{\rho}). It has been shown how to achieve:
* query time: O(t \log^2(1/P_2)/P_1 + n^{\rho}(d + 1/P_1));
* space: O(n^{1+\rho}/P_1 + \log^2(1/P_2)/P_1).
It is also sometimes the case that the factor 1/P_1 can be very large. This happens, for example, with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query. It has also been shown how to reduce the query time to O(n^{\rho}/P_1^{1-\rho}) (not including hashing costs) and similarly the space usage.


See also

* Random indexing (see Gorman, James, and James R. Curran, "Scaling distributional similarity to large corpora", Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 2006)


References


Further reading

* Samet, H. (2006). ''Foundations of Multidimensional and Metric Data Structures''. Morgan Kaufmann.


External links




* LSHKIT: A C++ Locality Sensitive Hashing Library
* A Python Locality Sensitive Hashing library that optionally supports persistence via redis
* Caltech Large Scale Image Search Toolbox: a Matlab toolbox implementing several LSH hash functions, in addition to Kd-Trees, Hierarchical K-Means, and Inverted File search algorithms
* Slash: A C++ LSH library, implementing Spherical LSH by Terasawa, K., Tanaka, Y.
* LSHBOX: An open source C++ toolbox of locality-sensitive hashing for large scale image retrieval, with support for Python and MATLAB
* SRS: A C++ implementation of an in-memory, space-efficient approximate nearest neighbor query processing algorithm based on p-stable random projection
* TLSH open source on GitHub
* JavaScript port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as a node.js module
* Java port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as a Maven package