In computer science, a fingerprinting algorithm is a procedure that maps an arbitrarily large data item (such as a computer file) to a much shorter bit string, its ''fingerprint'', that uniquely identifies the original data for all practical purposes, just as human fingerprints uniquely identify people for practical purposes. This fingerprint may be used for data deduplication purposes. This is also referred to as ''file fingerprinting'', ''data fingerprinting'', or ''structured data fingerprinting''.
Fingerprints are typically used to avoid the comparison and transmission of bulky data. For instance, a web browser or proxy server can efficiently check whether a remote file has been modified, by fetching only its fingerprint and comparing it with that of the previously fetched copy.
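As an illustrative sketch of this pattern, the fragment below uses SHA-256 from Python's standard <code>hashlib</code> module as the fingerprint function; <code>REMOTE</code>, <code>fetch_fingerprint</code>, and <code>fetch_content</code> are hypothetical stand-ins for whatever protocol actually supplies the fingerprint and the file, not part of any standard API.
<syntaxhighlight lang="python">
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 stands in here for any fingerprint function.
    return hashlib.sha256(data).hexdigest()

# Hypothetical remote side; a real proxy would issue network requests.
REMOTE = {"/logo.png": b"...image bytes..."}

def fetch_fingerprint(path: str) -> str:
    return fingerprint(REMOTE[path])  # cheap: a few dozen bytes on the wire

def fetch_content(path: str) -> bytes:
    return REMOTE[path]               # expensive: the whole file

cache: dict[str, tuple[str, bytes]] = {}  # path -> (fingerprint, content)

def get(path: str) -> bytes:
    """Fetch a file, transferring the body only when its fingerprint changed."""
    remote_fp = fetch_fingerprint(path)
    if path in cache and cache[path][0] == remote_fp:
        return cache[path][1]         # unchanged: serve the cached copy
    content = fetch_content(path)
    cache[path] = (remote_fp, content)
    return content
</syntaxhighlight>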
Fingerprint functions may be seen as high-performance hash functions used to uniquely identify substantial blocks of data where cryptographic hash functions may be unnecessary.
Special algorithms exist for audio and video fingerprinting.
Properties
Virtual uniqueness
To serve its intended purposes, a fingerprinting algorithm must be able to capture the identity of a file with virtual certainty. In other words, the probability of a collision (two files yielding the same fingerprint) must be negligible compared to the probability of other unavoidable causes of fatal errors (such as the system being destroyed by war or by a meteorite): say, 10<sup>−20</sup> or less.
This requirement is somewhat similar to that of a checksum function, but is much more stringent. To detect accidental data corruption or transmission errors, it is sufficient that the checksums of the original file and any corrupted version will differ with near certainty, given some statistical model for the errors. In typical situations, this goal is easily achieved with 16- or 32-bit checksums. In contrast, file fingerprints need to be at least 64 bits long to guarantee virtual uniqueness in large file systems (see birthday attack).
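To put numbers on this, the standard birthday-bound approximation estimates the probability that at least two of ''n'' uniformly random ''w''-bit fingerprints collide as roughly ''n''(''n''−1)/2<sup>''w''+1</sup>, valid while the result is small; a back-of-the-envelope check:
<syntaxhighlight lang="python">
def collision_probability(n: int, w: int) -> float:
    # Birthday bound: P(some two of n random w-bit values collide)
    # is approximately n*(n-1) / 2**(w+1) while that quantity is << 1.
    return n * (n - 1) / 2 ** (w + 1)

n = 10**9  # a billion files, e.g. a large file system
print(collision_probability(n, 32))   # ~1.2e+08: estimate saturates, collisions certain
print(collision_probability(n, 64))   # ~2.7e-02: a few percent, marginal
print(collision_probability(n, 128))  # ~1.5e-21: negligible, within a 10^-20 error budget
</syntaxhighlight>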
When proving the above requirement, one must take into account that files are generated by highly non-random processes that create complicated dependencies among files. For instance, in a typical business network, one usually finds many pairs or clusters of documents that differ only by minor edits or other slight modifications. A good fingerprinting algorithm must ensure that such "natural" processes generate distinct fingerprints, with the desired level of certainty.
Compounding
Computer files are often combined in various ways, such as concatenation (as in archive files) or symbolic inclusion (as with the C preprocessor's #include directive). Some fingerprinting algorithms allow the fingerprint of a composite file to be computed from the fingerprints of its constituent parts. This "compounding" property may be useful in some applications, such as detecting when a program needs to be recompiled; the identity sketched below makes the property concrete.
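For Rabin fingerprints, described in the next section, the compounding rule can be stated explicitly. With ''f''(''m'') the message polynomial reduced modulo the key polynomial ''p''(''x'') over GF(2), and |''b''| the length of ''b'' in bits, a sketch of the identity for concatenation is:

<math>f(a \,\|\, b) = \bigl( f(a) \cdot x^{|b|} + f(b) \bigr) \bmod p(x)</math>

Since addition over GF(2) is bitwise XOR, the fingerprints of the parts combine without ever touching the underlying data.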
Algorithms
Rabin's algorithm
Rabin's fingerprinting algorithm is the prototype of the class.
It is fast and easy to implement, allows compounding, and comes with a mathematically precise analysis of the probability of collision. Namely, the probability of two strings ''r'' and ''s'' yielding the same ''w''-bit fingerprint does not exceed max(|''r''|, |''s''|)/2<sup>''w''−1</sup>, where |''r''| denotes the length of ''r'' in bits. The algorithm requires the previous choice of a ''w''-bit internal "key", and this guarantee holds as long as the strings ''r'' and ''s'' are chosen without knowledge of the key.
Rabin's method is not secure against malicious attacks. An adversarial agent can easily discover the key and use it to modify files without changing their fingerprint.
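The following is a minimal sketch of the scheme, treating the message as a polynomial over GF(2) and reducing it modulo a key polynomial. The degree-8 key ''x''<sup>8</sup> + ''x''<sup>4</sup> + ''x''<sup>3</sup> + ''x'' + 1 (irreducible over GF(2), familiar from AES) is used only to keep the example small; as noted above, practical fingerprints need degree 64 or more, and the key must be drawn at random and kept from adversaries.
<syntaxhighlight lang="python">
W = 8                        # degree of the key polynomial (illustrative; use >= 64)
P = (1 << W) | 0b0001_1011   # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def fingerprint(data: bytes) -> int:
    """Return the message polynomial of `data` reduced mod P."""
    fp = 0
    for byte in data:
        for i in range(7, -1, -1):
            fp = (fp << 1) | ((byte >> i) & 1)  # append the next message bit
            if fp >> W:                         # degree reached W: reduce mod P
                fp ^= P
    return fp

def shift_mod(fp: int, nbits: int) -> int:
    """Return fp * x**nbits mod P (multiply by x and reduce, nbits times)."""
    for _ in range(nbits):
        fp <<= 1
        if fp >> W:
            fp ^= P
    return fp

# Compounding: the fingerprint of a concatenation follows from the parts'
# fingerprints alone, per f(a||b) = (f(a) * x^|b| + f(b)) mod P.
a, b = b"hello ", b"world"
direct = fingerprint(a + b)
composed = shift_mod(fingerprint(a), 8 * len(b)) ^ fingerprint(b)
assert direct == composed
print(hex(direct))
</syntaxhighlight>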
Cryptographic hash functions
Mainstream cryptographic-grade hash functions generally can serve as high-quality fingerprint functions, are subject to intense scrutiny from cryptanalysts, and have the advantage that they are believed to be safe against malicious attacks.
A drawback of cryptographic hash algorithms such as
MD5 and
SHA is that they take considerably longer to execute than Rabin's fingerprint algorithm. They also lack proven guarantees on the collision probability. Some of these algorithms, notably
MD5, are no longer recommended for secure fingerprinting. They are still useful for error checking, where purposeful data tampering is not a primary concern.
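A sketch of cryptographic fingerprinting with Python's standard <code>hashlib</code>, streaming the file in chunks so that arbitrarily large inputs are hashed in constant memory; the file name is a placeholder:
<syntaxhighlight lang="python">
import hashlib

def file_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 fingerprint of a file, read 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

print(file_fingerprint("example.bin"))  # placeholder path
</syntaxhighlight>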
Perceptual hashing
Application examples
The National Institute of Standards and Technology (NIST) distributes a software reference library, the American National Software Reference Library, that uses cryptographic hash functions to fingerprint files and map them to software products. The HashKeeper database, maintained by the National Drug Intelligence Center, is a repository of fingerprints of "known to be good" and "known to be bad" computer files, for use in law enforcement applications (e.g. analyzing the contents of seized disk drives).
Content similarity detection
See also
* Acoustic fingerprinting
* Automatic content recognition
* Canvas fingerprinting
* Digital video fingerprinting
* TCP/IP stack fingerprinting
* Device fingerprint
* Machine Identification Code
* Error correcting code
* Public key fingerprint
* Randomizing function
* Usage share of web browsers
References
{{Reflist}}