Locality-sensitive hashing

In computer science, locality-sensitive hashing (LSH) is an algorithmic technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items. Hashing-based approximate nearest neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).


Definitions

An ''LSH family'' \mathcal F is defined for
* a metric space \mathcal M = (M, d),
* a threshold R > 0,
* an approximation factor c > 1,
* and probabilities P_1 and P_2.

This family \mathcal F is a set of functions h\colon M \to S that map elements of the metric space to buckets s \in S. An LSH family must satisfy the following conditions for any two points p, q \in M and any hash function h chosen uniformly at random from \mathcal F:
* if d(p,q) \le R, then h(p) = h(q) (i.e., p and q collide) with probability at least P_1,
* if d(p,q) \ge cR, then h(p) = h(q) with probability at most P_2.

A family is interesting when P_1 > P_2. Such a family \mathcal F is called ''(R, cR, P_1, P_2)-sensitive''.

Alternatively, an LSH family can be defined with respect to a universe of items U that have a similarity function \phi\colon U \times U \to [0,1]. An LSH scheme is a family of hash functions H coupled with a probability distribution D over the functions such that a function h \in H chosen according to D satisfies Pr_{h \in H}[h(a) = h(b)] = \phi(a,b) for any a, b \in U.


Locality-preserving hashing

A locality-preserving hash is a hash function f that maps points in a metric space \mathcal M = (M, d) to a scalar value such that

:d(p,q) < d(q,r) \Rightarrow |f(p) - f(q)| < |f(q) - f(r)|

for any three points p, q, r \in M. In other words, these are hash functions where the relative distance between the input values is preserved in the relative distance between the output hash values; input values that are closer to each other will produce output hash values that are closer to each other. This is in contrast to cryptographic hash functions and checksums, which are designed to have random output differences between adjacent inputs.

The first family of locality-preserving hash functions was devised as a way to facilitate data pipelining in implementations of parallel random-access machine (PRAM) algorithms that use universal hashing to reduce memory contention and network congestion. Locality-preserving hashes are related to space-filling curves.
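As a loose illustration of the space-filling-curve connection, the sketch below computes a Z-order (Morton) code by interleaving the bits of two integer coordinates. Nearby points often receive nearby codes, although the strict inequality above is not guaranteed for every triple of points; the function name is purely illustrative.

    def morton_code(x, y, bits=16):
        """Interleave the bits of two non-negative integer coordinates to
        produce a Z-order (Morton) value along a space-filling curve."""
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
            code |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
        return code

    print(morton_code(3, 5))  # 39; points close in the plane often share high-order bits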


Amplification

Given a (d_1, d_2, p_1, p_2)-sensitive family \mathcal F, we can construct new families \mathcal G by either the AND-construction or the OR-construction of \mathcal F.

To create an AND-construction, we define a new family \mathcal G of hash functions g, where each function g is constructed from k random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if h_i(x) = h_i(y) for all i = 1, 2, \ldots, k. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, p_1^k, p_2^k)-sensitive family.

To create an OR-construction, we define a new family \mathcal G of hash functions g, where each function g is constructed from k random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if h_i(x) = h_i(y) for one or more values of i. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, 1 - (1 - p_1)^k, 1 - (1 - p_2)^k)-sensitive family.
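The following sketch illustrates both constructions in Python. Here `base_family` is a hypothetical zero-argument factory that returns an independent random hash function from \mathcal F (for instance, a draw from the bit-sampling family described under Methods below); it is not part of any particular library.

    def and_construction(base_family, k):
        """g(x) = (h_1(x), ..., h_k(x)): two points collide under g iff
        they collide under all k base functions (p becomes p^k)."""
        hs = [base_family() for _ in range(k)]
        return lambda x: tuple(h(x) for h in hs)

    def or_construction(base_family, k):
        """Collision predicate: x and y collide iff they collide under at
        least one of the k base functions (p becomes 1 - (1 - p)^k)."""
        hs = [base_family() for _ in range(k)]
        return lambda x, y: any(h(x) == h(y) for h in hs)

The AND-construction returns an ordinary hash function (a tuple of bucket ids), whereas the OR-construction is expressed here as a collision predicate, since "equality under at least one h_i" does not correspond to a single bucket value.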


Applications

LSH has been applied to several problem domains, including:
* Near-duplicate detection
* Hierarchical clustering
* Genome-wide association study
* Image similarity identification
** VisualRank
* Gene expression similarity identification
* Audio similarity identification
* Nearest neighbor search
* Audio fingerprinting
* Digital video fingerprinting
* Physical data organization in database management systems
* Training fully connected neural networks
* Computer security


Methods


Bit sampling for Hamming distance

One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over d-dimensional vectors \{0,1\}^d. Here, the family \mathcal F of hash functions is simply the family of all the projections of points onto one of the d coordinates, i.e., \mathcal F = \{h \mid h(x) = x_i \text{ for some } i \in \{1, \ldots, d\}\}, where x_i is the ith coordinate of x. A random function h from \mathcal F simply selects a random bit from the input point. This family has the following parameters: P_1 = 1 - R/d, P_2 = 1 - cR/d.
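A minimal sketch of this family, assuming points are represented as 0/1 tuples; the helper name is illustrative rather than a library function.

    import random

    def bit_sampling_family(d):
        """Draw a random h from the bit-sampling family over {0,1}^d:
        h(x) = x[i] for a uniformly random coordinate i."""
        i = random.randrange(d)
        return lambda x: x[i]

    # For x, y at Hamming distance r, Pr[h(x) == h(y)] = 1 - r/d,
    # matching P1 = 1 - R/d and P2 = 1 - cR/d above.
    x = (0, 1, 1, 0, 1)
    y = (0, 1, 0, 0, 1)                  # Hamming distance 1, so collision probability 4/5
    trials, hits = 10000, 0
    for _ in range(trials):
        h = bit_sampling_family(5)       # the same h is applied to both points
        hits += h(x) == h(y)
    print(hits / trials)                 # approximately 0.8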


Min-wise independent permutations

Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If \pi is a permutation on the indices of S, for A \subseteq S let h(A) = \min_{a \in A} \{\pi(a)\}. Each possible choice of \pi defines a single hash function h mapping input sets to elements of S.

Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets A, B \subseteq S, the event that h(A) = h(B) corresponds exactly to the event that the minimizer of \pi over A \cup B lies inside A \cap B. As h was chosen uniformly at random, Pr[h(A) = h(B)] = J(A,B) and (H, D) define an LSH scheme for the Jaccard index.

Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen \pi. It has been established that a min-wise independent family of permutations must have size at least \operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)}, and that this bound is tight.

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k. Approximate min-wise independence differs from the property by at most a fixed \epsilon.
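The sketch below illustrates the basic scheme with an explicit random permutation of the ground set; as just noted, this is infeasible for large ground sets and is shown only for illustration (the helper name is hypothetical).

    import random

    def minhash_family(ground_set):
        """Draw h(A) = min over a in A of pi(a), for a uniformly random
        permutation pi of the ground set (here, a random rank per item)."""
        items = list(ground_set)
        random.shuffle(items)
        pi = {item: rank for rank, item in enumerate(items)}
        return lambda A: min(pi[a] for a in A)

    # Pr[h(A) == h(B)] equals the Jaccard index |A & B| / |A | B|, so the
    # collision frequency over many independent h estimates J(A, B).
    S = set(range(100))
    A, B = set(range(0, 60)), set(range(30, 90))
    hs = [minhash_family(S) for _ in range(2000)]
    estimate = sum(h(A) == h(B) for h in hs) / len(hs)
    print(estimate, len(A & B) / len(A | B))  # both close to 1/3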


Open source methods


Nilsimsa Hash

Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts. The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:
# The digest identifying each message should not vary significantly for changes that can be produced automatically.
# The encoding must be robust against intentional attacks.
# The encoding should support an extremely low risk of false positives.


TLSH

TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar. Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash. An implementation of TLSH is available as open-source software.


Random projection

The random projection method of LSH due to Moses Charikar, called SimHash (also sometimes called arccos), is designed to approximate the cosine distance between vectors. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.

Given an input vector v and a hyperplane defined by r, we let h(v) = \operatorname{sgn}(v \cdot r). That is, h(v) = \pm 1 depending on which side of the hyperplane v lies on. Each possible choice of r defines a single function. Let H be the set of all such functions and let D be the uniform distribution once again. It is not difficult to prove that, for two vectors u, v, Pr[h(u) = h(v)] = 1 - \frac{\theta(u,v)}{\pi}, where \theta(u,v) is the angle between u and v. 1 - \frac{\theta(u,v)}{\pi} is closely related to \cos(\theta(u,v)).

In this instance hashing produces only a single bit. Two vectors' bits match with probability proportional to the cosine of the angle between them.
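A minimal sketch of this scheme; drawing each coordinate of r from a standard Gaussian makes its direction uniformly distributed, so the sign of v \cdot r acts as a random-hyperplane hash (the helper name is illustrative).

    import random

    def simhash_family(d):
        """Draw h(v) = sign(v . r) for a random Gaussian direction r in R^d
        (a uniformly random hyperplane through the origin)."""
        r = [random.gauss(0.0, 1.0) for _ in range(d)]
        return lambda v: 1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1

    # Pr[h(u) == h(v)] = 1 - theta(u, v) / pi.
    u, v = (1.0, 0.0), (1.0, 1.0)   # angle pi/4, so collision probability 0.75
    hs = [simhash_family(2) for _ in range(10000)]
    print(sum(h(u) == h(v) for h in hs) / len(hs))  # approximately 0.75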


Stable distributions

The hash function h_{\mathbf{a},b}(\boldsymbol{\upsilon})\colon \mathcal{R}^d \to \mathcal{Z} maps a d-dimensional vector \boldsymbol{\upsilon} onto the set of integers. Each hash function in the family is indexed by a choice of random \mathbf{a} and b, where \mathbf{a} is a d-dimensional vector with entries chosen independently from a stable distribution and b is a real number chosen uniformly from the range [0, r]. For a fixed \mathbf{a}, b, the hash function h_{\mathbf{a},b} is given by h_{\mathbf{a},b}(\boldsymbol{\upsilon}) = \left\lfloor \frac{\mathbf{a} \cdot \boldsymbol{\upsilon} + b}{r} \right\rfloor.

Other construction methods for hash functions have been proposed to better fit the data. In particular, k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
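A minimal sketch of the projection-based construction, using the Gaussian distribution (which is 2-stable, so \mathbf{a} \cdot \boldsymbol{\upsilon} is Gaussian with scale proportional to the Euclidean norm of \boldsymbol{\upsilon}); the helper name is illustrative.

    import math
    import random

    def pstable_family(d, r):
        """Draw h(v) = floor((a . v + b) / r), with the entries of a i.i.d.
        standard Gaussian (2-stable) and b uniform in [0, r]."""
        a = [random.gauss(0.0, 1.0) for _ in range(d)]
        b = random.uniform(0.0, r)
        return lambda v: math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)

    h = pstable_family(d=3, r=4.0)
    print(h((1.0, 2.0, 0.5)), h((1.1, 2.0, 0.5)))  # nearby points usually share a bucket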


Semantic hashing

Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity. The hashcodes are found via training of an artificial neural network or graphical model.


LSH algorithm for nearest neighbor search

One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family \mathcal F. The algorithm has two main parameters: the width parameter k and the number of hash tables L.

In the first step, we define a new family \mathcal G of hash functions g, where each function g is obtained by concatenating k functions h_1, \ldots, h_k from \mathcal F, i.e., g(p) = [h_1(p), \ldots, h_k(p)]. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from \mathcal F. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.

In the preprocessing step we hash all n d-dimensional points from the data set into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to O(n) using standard hash functions.

Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found. (A sketch of this index appears after the guarantees below.)

Given the parameters k and L, the algorithm has the following performance guarantees:
* preprocessing time: O(nLkt), where t is the time to evaluate a function h \in \mathcal F on an input point p;
* space: O(nL), plus the space for storing data points;
* query time: O(L(kt + dnP_2^k));
* the algorithm succeeds in finding a point within distance cR from q (if there exists a point within distance R) with probability at least 1 - (1 - P_1^k)^L.

For a fixed approximation ratio c = 1 + \epsilon and probabilities P_1 and P_2, one can set k = \left\lceil \frac{\log n}{\log(1/P_2)} \right\rceil and L = \lceil P_1^{-k} \rceil = O(n^{\rho} P_1^{-1}), where \rho = \frac{\log P_1}{\log P_2}. Then one obtains the following performance guarantees:
* preprocessing time: O(n^{1+\rho} P_1^{-1} kt);
* space: O(n^{1+\rho} P_1^{-1}), plus the space for storing data points;
* query time: O(n^{\rho} P_1^{-1}(kt + d)).
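The following sketch implements the two-step scheme just described. Here `family` is a hypothetical zero-argument factory returning independent draws from \mathcal F (for example, the bit-sampling or SimHash helpers sketched above), and the class name is illustrative.

    from collections import defaultdict

    class LSHIndex:
        def __init__(self, family, k, L):
            # L tables, each keyed by the concatenation g = (h_1, ..., h_k).
            self.gs = [[family() for _ in range(k)] for _ in range(L)]
            self.tables = [defaultdict(list) for _ in range(L)]

        def _key(self, g, p):
            return tuple(h(p) for h in g)

        def insert(self, p):
            # Preprocessing: hash the point into each of the L tables.
            for g, table in zip(self.gs, self.tables):
                table[self._key(g, p)].append(p)

        def query(self, q, dist, cR):
            # Scan the L buckets that q falls into; return the first stored
            # point within distance cR of q (an approximate near neighbor).
            for g, table in zip(self.gs, self.tables):
                for p in table[self._key(g, q)]:
                    if dist(p, q) <= cR:
                        return p
            return None

For instance, LSHIndex(lambda: bit_sampling_family(8), k=4, L=10) indexes 8-bit points under Hamming distance, with points stored as tuples so they are hashable.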


Improvements

When t is large, it is possible to reduce the hashing time from O(n^{\rho}). This was shown in later work, which gave:
* query time: O(t \log^2(1/P_2)/P_1 + n^{\rho}(d + 1/P_1));
* space: O(n^{1+\rho}/P_1 + \log^2(1/P_2)/P_1).

It is also sometimes the case that the factor 1/P_1 can be very large. This happens for example with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query. It has also been shown how to reduce the dependence of the query time on the factor 1/P_1 (not including hashing costs), and similarly the space usage.


See also

* Bloom filter
* Curse of dimensionality
* Feature hashing
* Fourier-related transforms
* Geohash
* Multilinear subspace learning
* Principal component analysis
* Random indexing (Gorman, James, and James R. Curran. "Scaling distributional similarity to large corpora." Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 2006.)
* Rolling hash
* Singular value decomposition
* Sparse distributed memory
* Wavelet compression


References


Further reading

* Samet, H. (2006). ''Foundations of Multidimensional and Metric Data Structures''. Morgan Kaufmann.


External links


* Alex Andoni's LSH homepage
* LSHKIT: A C++ Locality Sensitive Hashing Library
* A Python Locality Sensitive Hashing library that optionally supports persistence via redis
* Caltech Large Scale Image Search Toolbox: a Matlab toolbox implementing several LSH hash functions, in addition to Kd-Trees, Hierarchical K-Means, and Inverted File search algorithms
* Slash: A C++ LSH library, implementing Spherical LSH by Terasawa, K., Tanaka, Y.
* LSHBOX: An Open Source C++ Toolbox of Locality-Sensitive Hashing for Large Scale Image Retrieval, Also Support Python and MATLAB
* SRS: A C++ Implementation of An In-memory, Space-efficient Approximate Nearest Neighbor Query Processing Algorithm based on p-stable Random Projection
* TLSH open source on GitHub
* JavaScript port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as node.js module
* Java port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as maven package