Locality-sensitive hashing

Locality-sensitive hashing (LSH) is a method of performing probabilistic dimension reduction of high-dimensional data. The basic idea is to hash the input items so that similar items are mapped to the same buckets with high probability (the number of buckets being much smaller than the universe of possible input items). The hashing used in LSH differs from conventional hash functions, such as those used in cryptography: in the LSH case the goal is to maximize the probability of "collision" for similar items rather than to avoid collisions.[1] In this respect, locality-sensitive hashing is closely related to data clustering and nearest neighbor search.

Definition

An LSH family[1][2][3] is defined for a metric space $\mathcal{M} = (M, d)$, a threshold $R > 0$ and an approximation factor $c > 1$. This family $\mathcal{F}$ is a family of functions $h : M \to S$ which map elements from the metric space to a bucket $s \in S$. The LSH family satisfies the following conditions for any two points $p, q \in M$, using a function $h \in \mathcal{F}$ which is chosen uniformly at random:

  • if $d(p, q) \le R$, then $h(p) = h(q)$ (i.e., $p$ and $q$ collide) with probability at least $P_1$;
  • if $d(p, q) \ge cR$, then $h(p) = h(q)$ with probability at most $P_2$.

A family is interesting when $P_1 > P_2$. Such a family $\mathcal{F}$ is called $(R, cR, P_1, P_2)$-sensitive.

Alternatively[4] it is defined with respect to a universe of items $U$ that have a similarity function $\phi : U \times U \to [0, 1]$. An LSH scheme is a family of hash functions $H$ coupled with a probability distribution $D$ over the functions such that a function $h \in H$ chosen according to $D$ satisfies the property that $\Pr_{h \in H}[h(a) = h(b)] = \phi(a, b)$ for any $a, b \in U$.

Amplification

Given an $(R, cR, P_1, P_2)$-sensitive family $\mathcal{F}$, we can construct new families $\mathcal{G}$ by either the AND-construction or OR-construction of $\mathcal{F}$.[1]

To create an AND-construction, we define a new family $\mathcal{G}$ of hash functions $g$, where each function $g$ is constructed from $k$ random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that for a hash function $g \in \mathcal{G}$, $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for all $1 \le i \le k$. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is an $(R, cR, P_1^k, P_2^k)$-sensitive family.

To create an OR-construction, we define a new family $\mathcal{G}$ of hash functions $g$, where each function $g$ is constructed from $k$ random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that for a hash function $g \in \mathcal{G}$, $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for one or more values of $i$. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is an $(R, cR, 1 - (1 - P_1)^k, 1 - (1 - P_2)^k)$-sensitive family.
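
As a rough illustration of how these constructions trade off the two collision probabilities, the following sketch evaluates the AND- and OR-constructions, and their composition, for an assumed base family with $P_1 = 0.8$ and $P_2 = 0.4$ (the numbers and function names are illustrative, not from the cited sources):

    def and_construction(p, k):
        """Collision probability when all k independent functions must agree."""
        return p ** k

    def or_construction(p, k):
        """Collision probability when at least one of k independent functions agrees."""
        return 1.0 - (1.0 - p) ** k

    def and_then_or(p, k, L):
        """L independent AND-constructions of size k, combined with an OR."""
        return 1.0 - (1.0 - p ** k) ** L

    p1, p2 = 0.8, 0.4   # assumed base collision probabilities, for illustration only
    k, L = 5, 20
    print(and_construction(p1, k), and_construction(p2, k))   # both shrink, their ratio grows
    print(or_construction(p1, k), or_construction(p2, k))     # both grow toward 1
    print(and_then_or(p1, k, L), and_then_or(p2, k, L))       # ~0.9996 vs ~0.186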

Applications

LSH has been applied to several problem domains, including approximate nearest neighbor search (described below) and near-duplicate detection.[citation needed]

Methods

Bit sampling for Hamming distance

One of the easiest ways to construct an LSH family is by bit sampling.[3] This approach works for the Hamming distance over $d$-dimensional vectors $\{0, 1\}^d$. Here, the family $\mathcal{F}$ of hash functions is simply the family of all the projections of points on one of the $d$ coordinates, i.e., $\mathcal{F} = \{h : \{0, 1\}^d \to \{0, 1\} \mid h(x) = x_i \text{ for some } i \in \{1, \ldots, d\}\}$, where $x_i$ is the $i$th coordinate of $x$. A random function $h$ from $\mathcal{F}$ simply selects a random bit from the input point. This family has the following parameters: $P_1 = 1 - R/d$, $P_2 = 1 - cR/d$.
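
A minimal sketch of bit sampling, assuming binary vectors are represented as Python sequences of 0s and 1s (the function and variable names are illustrative, not from the cited paper):

    import random

    def make_bit_sampling_hash(d):
        """Return a hash function that projects a d-dimensional binary
        vector onto one coordinate chosen uniformly at random."""
        i = random.randrange(d)
        return lambda x: x[i]

    # Two vectors at Hamming distance 2 out of d = 8 coordinates:
    p = [0, 1, 1, 0, 1, 0, 0, 1]
    q = [0, 1, 0, 0, 1, 0, 1, 1]

    h = make_bit_sampling_hash(len(p))
    print(h(p) == h(q))   # True with probability 1 - 2/8 = 0.75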

Min-wise independent permutations

Suppose $U$ is composed of subsets of some ground set of enumerable items $S$ and the similarity function of interest is the Jaccard index $J$. If $\pi$ is a permutation on the indices of $S$, for $A \subseteq S$ let $h(A) = \min_{a \in A} \{\pi(a)\}$. Each possible choice of $\pi$ defines a single hash function $h$ mapping input sets to elements of $S$.

Define the function family $H$ to be the set of all such functions and let $D$ be the uniform distribution. Given two sets $A, B \subseteq S$, the event that $h(A) = h(B)$ corresponds exactly to the event that the minimizer of $\pi$ over $A \cup B$ lies inside $A \cap B$. As $h$ was chosen uniformly at random, $\Pr[h(A) = h(B)] = J(A, B)$ and $(H, D)$ define an LSH scheme for the Jaccard index.
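
As a concrete illustration, the sketch below draws random permutations of a small ground set and checks empirically that the collision probability matches the Jaccard index; the ground set, the example sets and all names are illustrative:

    import random

    def make_minhash(ground_set):
        """Return a hash function h(A) = argmin over a in A of pi(a),
        where pi is a random permutation (ordering) of the ground set."""
        items = list(ground_set)
        random.shuffle(items)                        # pi as a random ordering
        rank = {item: r for r, item in enumerate(items)}
        return lambda A: min(A, key=lambda a: rank[a])

    ground = set(range(100))
    A = {1, 2, 3, 4, 5}
    B = {3, 4, 5, 6, 7}                              # Jaccard index J(A, B) = 3/7

    # Estimate Pr[h(A) = h(B)] over many independently chosen hash functions.
    trials, hits = 10000, 0
    for _ in range(trials):
        h = make_minhash(ground)
        if h(A) == h(B):
            hits += 1
    print(hits / trials)                             # close to 3/7 ~ 0.43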

Because the symmetric group on $n$ elements has size $n!$, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized $n$. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" - a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen $\pi$. It has been established that a min-wise independent family of permutations must have size at least $\operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)}$,[10] and that this bound is tight.[11]

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most $k$.[12] Approximate min-wise independence relaxes the property so that it holds up to at most a fixed additive error $\varepsilon$.[13]

Nilsimsa Hash

Nilsimsa is an anti-spam-focused locality-sensitive hashing algorithm.[14] The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. Nilsimsa satisfies three requirements outlined by the paper's authors:

  1. The digest identifying each message should not vary significantly for changes that can be produced automatically.
  2. The encoding must be robust against intentional attacks.
  3. The encoding should support an extremely low risk of false positives.

Random projection

The random projection method of LSH[4] (termed arccos by Andoni and Indyk[15]) is designed to approximate the cosine distance between vectors. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector $r$) at the outset and use the hyperplane to hash input vectors.

Given an input vector $v$ and a hyperplane defined by $r$, we let $h(v) = \operatorname{sgn}(v \cdot r)$. That is, $h(v) = \pm 1$ depending on which side of the hyperplane $v$ lies.

Each possible choice of $r$ defines a single function. Let $H$ be the set of all such functions and let $D$ be the uniform distribution once again. It is not difficult to prove that, for two vectors $u, v$, $\Pr[h(u) = h(v)] = 1 - \frac{\theta(u, v)}{\pi}$, where $\theta(u, v)$ is the angle between $u$ and $v$. $1 - \frac{\theta(u, v)}{\pi}$ is closely related to $\cos(\theta(u, v))$.

In this instance hashing produces only a single bit. Two vectors' bits match with probability $1 - \theta(u, v)/\pi$, a quantity that closely tracks (though is not exactly proportional to) the cosine of the angle between them.
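
A minimal sketch of this one-bit hash using NumPy (assumed available); the example vectors and the number of sampled functions are illustrative:

    import numpy as np

    def make_random_projection_hash(dim, rng):
        """Return a one-bit hash h(v) = sign(v . r) for a random unit normal r."""
        r = rng.normal(size=dim)
        r /= np.linalg.norm(r)
        return lambda v: 1 if np.dot(v, r) >= 0 else -1

    rng = np.random.default_rng(0)
    u = np.array([1.0, 0.0, 0.0])
    v = np.array([1.0, 1.0, 0.0])                  # angle(u, v) = 45 degrees

    hashes = [make_random_projection_hash(3, rng) for _ in range(10000)]
    agree = sum(h(u) == h(v) for h in hashes) / len(hashes)
    print(agree)                                   # close to 1 - (pi/4)/pi = 0.75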

Stable distributions

The hash function[16] $h_{\mathbf{a}, b}(\vec{v}) = \left\lfloor \frac{\mathbf{a} \cdot \vec{v} + b}{r} \right\rfloor$ maps a $d$-dimensional vector $\vec{v}$ onto the set of integers. Each hash function in the family is indexed by a choice of random $\mathbf{a}$ and $b$, where $\mathbf{a}$ is a $d$-dimensional vector with entries chosen independently from a stable distribution and $b$ is a real number chosen uniformly from the range $[0, r]$. For a fixed $\mathbf{a}, b$ the hash function $h_{\mathbf{a}, b}$ is given by $h_{\mathbf{a}, b}(\vec{v}) = \left\lfloor \frac{\mathbf{a} \cdot \vec{v} + b}{r} \right\rfloor$.
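
A sketch of one such hash function for Euclidean distance, using the Gaussian distribution (which is 2-stable) for the entries of $\mathbf{a}$; NumPy is assumed, and the bucket width $r$ and example points are arbitrary illustrative choices:

    import numpy as np

    def make_stable_hash(d, r, rng):
        """Return h(v) = floor((a . v + b) / r) with a ~ N(0, I) and b ~ U[0, r]."""
        a = rng.normal(size=d)        # Gaussian entries: 2-stable, suits L2 distance
        b = rng.uniform(0.0, r)
        return lambda v: int(np.floor((np.dot(a, v) + b) / r))

    rng = np.random.default_rng(1)
    h = make_stable_hash(d=4, r=4.0, rng=rng)
    print(h(np.array([0.0, 1.0, 2.0, 3.0])),
          h(np.array([0.1, 1.1, 2.0, 2.9])))      # nearby points often share a bucket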

Other construction methods for hash functions have been proposed to better fit the data.[17] In particular, k-means hash functions perform better in practice than projection-based hash functions, but without any theoretical guarantee.

LSH algorithm for nearest neighbor search

One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter $k$ and the number of hash tables $L$.

In the first step, we define a new family $\mathcal{G}$ of hash functions $g$, where each function $g$ is obtained by concatenating $k$ functions $h_1, \ldots, h_k$ from $\mathcal{F}$, i.e., $g(p) = [h_1(p), \ldots, h_k(p)]$. In other words, a random hash function $g$ is obtained by concatenating $k$ randomly chosen hash functions from $\mathcal{F}$. The algorithm then constructs $L$ hash tables, each corresponding to a different randomly chosen hash function $g$.

In the preprocessing step we hash all $n$ points from the data set $S$ into each of the $L$ hash tables. Given that the resulting hash tables have only $n$ non-zero entries, one can reduce the amount of memory used per each hash table to $O(n)$ using standard hash functions.

Given a query point $q$, the algorithm iterates over the $L$ hash functions $g$. For each $g$ considered, it retrieves the data points that are hashed into the same bucket as $q$. The process is stopped as soon as a point within distance $cR$ from $q$ is found.
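
To make the preprocessing and query steps concrete, the following sketch builds $L$ tables keyed by concatenations of $k$ base hash functions and answers queries by probing the matching buckets. It uses bit sampling over binary vectors as the base family (as in the sketch above), and all names and parameter values are illustrative:

    import random
    from collections import defaultdict

    def make_bit_sampling_hash(d):
        i = random.randrange(d)
        return lambda x: x[i]

    class LSHIndex:
        """L hash tables, each keyed by the concatenation of k base hash values."""

        def __init__(self, d, k, L):
            self.gs = [[make_bit_sampling_hash(d) for _ in range(k)]
                       for _ in range(L)]
            self.tables = [defaultdict(list) for _ in range(L)]

        def _key(self, g, p):
            return tuple(h(p) for h in g)

        def index(self, points):
            for p in points:
                for g, table in zip(self.gs, self.tables):
                    table[self._key(g, p)].append(p)

        def query(self, q, max_dist):
            """Return the first indexed point within Hamming distance max_dist of q."""
            for g, table in zip(self.gs, self.tables):
                for p in table.get(self._key(g, q), []):
                    if sum(a != b for a, b in zip(p, q)) <= max_dist:
                        return p
            return None

    # Toy usage with random 32-bit vectors.
    d, k, L = 32, 8, 10
    data = [tuple(random.getrandbits(1) for _ in range(d)) for _ in range(1000)]
    index = LSHIndex(d, k, L)
    index.index(data)
    print(index.query(data[0], max_dist=2))   # finds data[0] itself (distance 0)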

Given the parameters $k$ and $L$, the algorithm has the following performance guarantees:

  • preprocessing time: $O(nLkt)$, where $t$ is the time to evaluate a function $h \in \mathcal{F}$ on an input point;
  • space: $O(nL)$, plus the space for storing the data points;
  • query time: $O(L(kt + dnP_2^k))$;
  • the algorithm succeeds in finding a point within distance $cR$ from $q$ (if there exists a point within distance $R$) with probability at least $1 - (1 - P_1^k)^L$.

For a fixed approximation ratio $c = 1 + \epsilon$ and probabilities $P_1$ and $P_2$, one can set $k = \left\lceil \frac{\log n}{\log 1/P_2} \right\rceil$ and $L = \lceil n^{\rho} \rceil$, where $\rho = \frac{\log P_1}{\log P_2}$. Then one obtains the following performance guarantees:

  • preprocessing time: $O(n^{1+\rho} kt)$;
  • space: $O(n^{1+\rho})$, plus the space for storing the data points;
  • query time: $O(n^{\rho}(kt + d))$.
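
A small sketch of this parameter choice, with illustrative values of $n$, $P_1$ and $P_2$:

    import math

    def lsh_parameters(n, p1, p2):
        """Choose k and L for the nearest-neighbor data structure."""
        rho = math.log(1 / p1) / math.log(1 / p2)
        k = math.ceil(math.log(n) / math.log(1 / p2))
        L = math.ceil(n ** rho)
        return k, L, rho

    print(lsh_parameters(n=1_000_000, p1=0.8, p2=0.4))   # e.g. k = 16, L = 29, rho ~ 0.24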

References

  1. Template:Cite web
  2. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  3. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  5. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  6. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  7. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  8. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  9. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  10. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  11. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  12. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  13. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  14. Template:Cite web
  15. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  16. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  17. {{#invoke:Citation/CS1|citation |CitationClass=journal }}

Further reading

  • Samet, H. (2006) Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann. ISBN 0-12-369446-9
