Hashing-Trick

In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly (after a modulo operation), rather than looking the indices up in an associative array. In addition to its use for encoding non-numeric values, feature hashing can also be used for dimensionality reduction. This trick is often attributed to Weinberger et al. (2009), but there exists a much earlier description of this method published by John Moody in 1989.


Motivation


Motivating example

In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag-of-words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.

Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document, and each column is a single feature/word; the entry in row i and column j of such a matrix captures the frequency (or weight) of the j-th term of the ''vocabulary'' in document i. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse, in accordance with Zipf's law.

The common approach is to construct, at learning time or prior to that, a ''dictionary'' representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. E.g., the three documents

* ''John likes to watch movies.''
* ''Mary likes movies too.''
* ''John also likes football.''

can be converted, using the dictionary, to the term-document matrix

: \begin{pmatrix} \textrm{John} & \textrm{likes} & \textrm{to} & \textrm{watch} & \textrm{movies} & \textrm{Mary} & \textrm{too} & \textrm{also} & \textrm{football} \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}

(Punctuation was removed, as is usual in document classification and clustering.)

The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows. Conversely, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine learned filter. To address this challenge, Yahoo! Research attempted to use feature hashing for their spam filters.

Note that the hashing trick isn't limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
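For concreteness, the following is a minimal Python sketch (illustrative, not taken from the original article) of this dictionary approach applied to the three documents above; note that the dictionary itself must be stored and grows with the vocabulary, which is exactly the drawback discussed here.

 documents = [
     "John likes to watch movies",
     "Mary likes movies too",
     "John also likes football",
 ]

 # Build the dictionary: each distinct word gets the next free column index.
 vocabulary = {}
 for doc in documents:
     for token in doc.split():
         vocabulary.setdefault(token, len(vocabulary))

 # Build the term-document matrix: one row per document, one column per word.
 matrix = [[0] * len(vocabulary) for _ in documents]
 for row, doc in enumerate(documents):
     for token in doc.split():
         matrix[row][vocabulary[token]] += 1

 # matrix == [[1, 1, 1, 1, 1, 0, 0, 0, 0],
 #            [0, 1, 0, 0, 1, 1, 1, 0, 0],
 #            [1, 1, 0, 0, 0, 0, 0, 1, 1]]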


Mathematical motivation

Mathematically, a token is an element t in a finite (or countably infinite) set T. Suppose we only need to process a finite corpus; then we can put all tokens appearing in the corpus into T, meaning that T is finite. However, if we want to process all possible words made of the English letters, then T is countably infinite.

Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" function \phi: T \to \R^n. When T is finite, of size |T| = m \leq n, we can use one-hot encoding to map it into \R^n. First, ''arbitrarily'' enumerate T = \{t_1, ..., t_m\}, then define \phi(t_i) = e_i. In other words, we assign a unique index i to each token, then map the token with index i to the unit basis vector e_i.

One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration of T. Given a token t \in T, to compute \phi(t), we must find out the index i of the token t. Thus, to implement \phi efficiently, we need a fast-to-compute bijection h: T \to \{1, ..., m\}; then we have \phi(t) = e_{h(t)}. In fact, we can relax the requirement slightly: it suffices to have a fast-to-compute ''injection'' h: T \to \{1, ..., n\}, then use \phi(t) = e_{h(t)}.

In practice, there is no simple way to construct an efficient injection h: T \to \{1, ..., n\}. However, we do not need a strict injection, but only an ''approximate'' injection. That is, when t \neq t', we should ''probably'' have h(t) \neq h(t'), so that ''probably'' \phi(t) \neq \phi(t'). At this point, we have just specified that h should be a hashing function. Thus we reach the idea of feature hashing.
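The following minimal Python sketch (illustrative; the names and the use of Python's built-in hash as a stand-in for h are assumptions) contrasts the maintained enumeration of one-hot encoding with a hash-based ''approximate injection'':

 import numpy as np

 n = 2**10  # output dimensionality (illustrative choice)

 # One-hot encoding: requires storing the enumeration of T, which grows with the vocabulary.
 enumeration = {}  # token -> index
 def phi_one_hot(token):
     i = enumeration.setdefault(token, len(enumeration))
     e = np.zeros(n)
     e[i] = 1.0
     return e

 # Hashed encoding: no stored enumeration; distinct tokens *probably* get distinct indices.
 def phi_hashed(token):
     i = hash(token) % n  # built-in hash is only a stand-in for a real hash function h
     e = np.zeros(n)
     e[i] = 1.0
     return e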


Algorithms


Feature hashing (Weinberger et al. 2009)

The basic feature hashing algorithm presented in (Weinberger et al. 2009) is defined as follows. First, one specifies two hash functions: the kernel hash h: T \to \{1, ..., n\}, and the sign hash \zeta: T \to \{+1, -1\}. Next, one defines the feature hashing function:

: \phi: T \to \R^n, \quad \phi(t) = \zeta(t) e_{h(t)}

Finally, extend this feature hashing function to strings of tokens by

: \phi: T^* \to \R^n, \quad \phi(t_1, ..., t_k) = \sum_{j=1}^k \phi(t_j)

where T^* is the set of all finite strings consisting of tokens in T. Equivalently,

: \phi(t_1, ..., t_k) = \sum_{j=1}^k \zeta(t_j) e_{h(t_j)} = \sum_{i=1}^n \left(\sum_{j:\, h(t_j) = i} \zeta(t_j)\right) e_i
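A minimal Python sketch of this definition follows (illustrative, not the authors' code; the choice of MD5-derived hashes for h and \zeta, the salts, and all names are assumptions made here for reproducibility):

 import hashlib
 import numpy as np

 n = 2**8  # output dimensionality (illustrative choice)

 def _digest(token, salt):
     # Stable stand-in for a hash function; MD5 is used purely for illustration.
     return int.from_bytes(hashlib.md5((salt + token).encode()).digest(), "big")

 def kernel_hash(token):
     # h : T -> {0, ..., n-1} (0-based rather than 1-based indexing)
     return _digest(token, "kernel:") % n

 def sign_hash(token):
     # zeta : T -> {+1, -1}
     return 1 if _digest(token, "sign:") % 2 == 0 else -1

 def feature_hash(tokens):
     # phi(t_1, ..., t_k) = sum_j zeta(t_j) * e_{h(t_j)}
     x = np.zeros(n)
     for t in tokens:
         x[kernel_hash(t)] += sign_hash(t)
     return x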


Geometric properties

We want to say something about the geometric properties of \phi, but T, by itself, is just a set of tokens; we cannot impose a geometric structure on it except the discrete topology, which is generated by the discrete metric. To make it nicer, we lift T to \R^T (identifying each token t with the basis vector e_t), and lift \phi from \phi: T \to \R^n to \phi: \R^T \to \R^n by linear extension:

: \phi\left((x_t)_{t\in T}\right) = \sum_{t\in T} x_t \zeta(t) e_{h(t)} = \sum_{i=1}^n \left(\sum_{t:\, h(t) = i} x_t \zeta(t)\right) e_i

There is an infinite sum there, which must be handled at once. There are essentially only two ways to handle infinities. One may impose a metric, then take its completion, to allow well-behaved infinite sums, or one may demand that nothing is actually infinite, only potentially so. Here, we go for the potential-infinity way, by restricting \R^T to contain only vectors with finite support: for every (x_t)_{t\in T} \in \R^T, only finitely many entries of (x_t)_{t\in T} are nonzero.

Define an inner product on \R^T in the obvious way:

: \langle e_t, e_{t'} \rangle = \begin{cases} 1, & \text{if } t = t', \\ 0, & \text{otherwise,} \end{cases} \qquad \langle x, x' \rangle = \sum_{t, t' \in T} x_t x'_{t'} \langle e_t, e_{t'} \rangle

As a side note, if T is infinite, then the inner product space \R^T is not complete. Taking its completion would get us to a Hilbert space, which allows well-behaved infinite sums.

Now we have an inner product space, with enough structure to describe the geometry of the feature hashing function \phi: \R^T \to \R^n. First, we can see why h is called a "kernel hash": it allows us to define a kernel K: T \times T \to \R by

: K(t, t') = \langle e_{h(t)}, e_{h(t')} \rangle

In the language of the "kernel trick", K is the kernel generated by the "feature map" \varphi: T \to \R^n, \quad \varphi(t) = e_{h(t)}. Note that this is not the feature map we were using, which is \phi(t) = \zeta(t) e_{h(t)}. In fact, we have been using ''another kernel'' K_\zeta: T \times T \to \R, defined by

: K_\zeta(t, t') = \langle \zeta(t) e_{h(t)}, \zeta(t') e_{h(t')} \rangle

The benefit of augmenting the kernel hash h with the binary hash \zeta is that the resulting \phi is an isometry "on average", in the sense that \operatorname{E}[\langle \phi(x), \phi(x') \rangle] = \langle x, x' \rangle for all x, x' \in \R^T. This statement interprets the binary hash function \zeta not as a deterministic function of type T \to \{-1, +1\}, but as a random binary vector \{-1, +1\}^T with unbiased entries, meaning that \Pr(\zeta(t) = +1) = \Pr(\zeta(t) = -1) = \frac{1}{2} for any t \in T. This is a good intuitive picture, though not rigorous; for a rigorous statement and proof, see Weinberger et al. (2009).
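The "isometry on average" property can be checked numerically. The following Python sketch (illustrative; all names and parameter choices are assumptions) averages \langle \phi(x), \phi(x') \rangle over many random draws of the sign hash \zeta, with h held fixed, and compares the average to \langle x, x' \rangle:

 import numpy as np

 rng = np.random.default_rng(0)
 m, n = 50, 8                     # vocabulary size and number of hash buckets (illustrative)
 x = rng.normal(size=m)           # two vectors in R^T with finite support
 x_prime = rng.normal(size=m)
 h = rng.integers(0, n, size=m)   # a fixed kernel hash h : T -> {0, ..., n-1}

 def phi(v, zeta):
     out = np.zeros(n)
     for t in range(m):
         out[h[t]] += zeta[t] * v[t]
     return out

 trials = 20000
 total = 0.0
 for _ in range(trials):
     zeta = rng.choice([-1.0, 1.0], size=m)     # random unbiased sign hash
     total += phi(x, zeta) @ phi(x_prime, zeta)

 print(total / trials, x @ x_prime)             # the average approaches <x, x'>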


Pseudocode implementation

Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that ''feature'' actually means ''feature vector''.

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]
     for f in features:
         h := hash(f)
         x[h mod N] += 1
     return x

Thus, if our feature vector is ["cat","dog","cat"], the hash function is h(x_f) = 1 if x_f is "cat" and 2 if x_f is "dog", and we take the output feature vector dimension (N) to be 4, then the output will be [0,2,1,0].

It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]
     for f in features:
         h := hash(f)
         idx := h mod N
         if ξ(f) == 1:
             x[idx] += 1
         else:
             x[idx] -= 1
     return x
The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (h, \zeta) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
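A minimal Python sketch of this streaming formulation follows (illustrative only: the function names, the use of Python's built-in hash as a stand-in, and the squared-error SGD update are assumptions). The vectorizer yields (index, sign) pairs, and the linear model keeps its coefficients in a single hash table, so no dense feature vector is ever materialized.

 N = 2**20  # number of hash buckets (illustrative)

 def hashed_pairs(features):
     # Yield a stream of (index, sign) pairs for one sample.
     for f in features:
         idx = hash(f) % N
         sign = 1.0 if hash("sign:" + f) % 2 == 0 else -1.0
         yield idx, sign

 weights = {}  # the coefficient vector, stored as a single hash table

 def predict(features):
     return sum(weights.get(idx, 0.0) * sign for idx, sign in hashed_pairs(features))

 def sgd_update(features, label, lr=0.1):
     # One stochastic-gradient step for squared error (an assumed, illustrative loss).
     error = predict(features) - label
     for idx, sign in hashed_pairs(features):
         weights[idx] = weights.get(idx, 0.0) - lr * error * sign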


Extensions and variations


Learned feature hashing

Feature hashing generally suffers from hash collisions, which means that there exist pairs of different tokens with the same hash: t \neq t', \phi(t) = \phi(t') = v. A machine learning model trained on feature-hashed words would then have difficulty distinguishing t and t', essentially because v is polysemic. If t' is rare, then performance degradation is small, as the model could always just ignore the rare case and treat every occurrence of v as meaning t. However, if both are common, then the degradation can be serious. To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors.


Applications and practical performance

Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function. Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, and in particular, spam filtering, where the input features are pairs (user, feature), so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that the accuracy of the filter went up.

Chen et al. (2015) combined the idea of feature hashing and sparse matrices to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix M \in \R^{n \times n} as a dictionary, with keys in \{1, \dots, n\} \times \{1, \dots, n\} and values in \R. Then, as usual in hashed dictionaries, one can use a hash function h: \N \times \N \to \{1, \dots, m\}, and thus represent the matrix as a vector in \R^m, no matter how big n is. With virtual matrices, they constructed ''HashedNets'', which are large neural networks taking only small amounts of storage.


Implementations

Implementations of the hashing trick are present in:
* Apache Mahout
* Gensim
* scikit-learn (see the usage sketch below)
* sofia-ml
* Vowpal Wabbit
* Apache Spark
* R
* TensorFlow
* Dask-ML
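As a concrete illustration of library usage (an illustrative sketch, not taken from the article; the parameter choice is an assumption, and the library documentation should be consulted for details), scikit-learn's HashingVectorizer applies the hashing trick to raw text without storing a vocabulary:

 from sklearn.feature_extraction.text import HashingVectorizer

 docs = ["John likes to watch movies", "Mary likes movies too"]
 vectorizer = HashingVectorizer(n_features=2**10)  # fixed output width, no dictionary kept
 X = vectorizer.transform(docs)                    # sparse matrix of shape (2, 1024)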




References

{{Reflist|30em}}


External links



* on John Langford's website
* What is the "hashing trick"? - MetaOptimize Q+A