A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called ''compositeness tests'' instead of primality tests.
Simple methods
The simplest primality test is ''trial division'': given an input number, ''n'', check whether it is divisible by any prime number between 2 and √''n'' (i.e., whether the division leaves no remainder). If so, then ''n'' is composite. Otherwise, it is prime. [Riesel (1994) pp. 2–3] For any divisor ''p'' ≥ √''n'', there must be another divisor ''n''/''p'' ≤ √''n'', and a prime divisor ''q'' of ''n''/''p'', and therefore looking for prime divisors at most √''n'' is sufficient.
For example, consider the number 100, whose divisors are these numbers:
:1, 2, 4, 5, 10, 20, 25, 50, 100.
When all possible divisors up to ''n'' are tested, some divisors will be discovered ''twice''. To observe this, consider the list of divisor pairs of 100:
:1 × 100, 2 × 50, 4 × 25, 5 × 20, 10 × 10, 20 × 5, 25 × 4, 50 × 2, 100 × 1.
Products past 10 × 10 are the reverse of products that appeared earlier. For example, 5 × 20 and 20 × 5 are the reverse of each other. Further, of the two divisors, 5 ≤ √100 and 20 ≥ √100. This observation generalizes to all ''n'': all divisor pairs of ''n'' contain a divisor less than or equal to √''n'', so the algorithm need only search for divisors less than or equal to √''n'' to guarantee detection of all divisor pairs.
Also, 2 is a prime dividing 100, which immediately proves that 100 is not prime. Every positive integer except 1 is divisible by at least one prime number by the fundamental theorem of arithmetic. Therefore the algorithm need only search for ''prime'' divisors less than or equal to √''n''.
For another example, consider how this algorithm determines the primality of 17. One has √17 ≈ 4.12, and the only primes ≤ 4.12 are 2 and 3. Neither divides 17, proving that 17 is prime. For a last example, consider 221. One has √221 ≈ 14.87, and the primes ≤ 14.87 are 2, 3, 5, 7, 11, and 13. Upon checking each, one discovers that 221 = 13 × 17, proving that 221 is not prime.
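The procedure just illustrated can be written as a short sketch (Python for illustration). Checking every candidate from 2 to √''n'' suffices, since any prime divisor of ''n'' would be found among those candidates; this version is deliberately unoptimized.

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime iff no d in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False  # d is a nontrivial divisor, so n is composite
    return True
```

For instance, `is_prime(17)` returns `True` and `is_prime(221)` returns `False`, matching the worked examples above.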
In cases where it is not feasible to compute the list of primes ≤ √''n'', it is also possible to simply (and slowly) check all numbers between 2 and √''n'' for divisors. A simple improvement is to test divisibility by 2 and by just the odd numbers between 3 and √''n'', since divisibility by an even number implies divisibility by 2.
This method can be improved further. Observe that all primes greater than 3 are of the form 6''k'' + ''i'' for a nonnegative integer ''k'' and ''i'' ∈ {1, 5}. Indeed, every integer is of the form 6''k'' + ''i'' for a positive integer ''k'' and ''i'' ∈ {−1, 0, 1, 2, 3, 4}. Since 2 divides 6''k'', 6''k'' + 2, and 6''k'' + 4, and 3 divides 6''k'' and 6''k'' + 3, the only possible remainders mod 6 for a prime greater than 3 are 1 and 5. So, a more efficient primality test for ''n'' is to test whether ''n'' is divisible by 2 or 3, then to check through all numbers of the form 6''k'' + 1 and 6''k'' + 5 which are ≤ √''n''. This is almost three times as fast as testing all numbers up to √''n''.
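A minimal sketch of this 6''k'' ± 1 optimization: after handling 2 and 3, only candidates of the form 6''k'' − 1 and 6''k'' + 1 are tried.

```python
def is_prime_6k(n: int) -> bool:
    """Trial division testing 2, 3, then only candidates of the form 6k ± 1."""
    if n < 2:
        return False
    if n < 4:
        return True  # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5  # 5 = 6*1 - 1; the candidate pairs are (5, 7), (11, 13), (17, 19), ...
    while d * d <= n:
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True
```

Only two of every six consecutive integers are tried, which is the source of the roughly threefold speedup mentioned above.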
Generalizing further, all primes greater than ''c''# (''c'' primorial) are of the form ''c''# · ''k'' + ''i'' for ''i'', ''k'' positive integers, 0 < ''i'' < ''c''#, and ''i'' coprime to ''c''#. For example, consider 6# = 2 · 3 · 5 = 30. All integers are of the form 30''k'' + ''i'' for ''i'', ''k'' integers with 0 ≤ ''i'' < 30. Now, 2 divides 0, 2, 4, ..., 28; 3 divides 0, 3, 6, ..., 27; and 5 divides 0, 5, 10, ..., 25. Thus all prime numbers greater than 30 are of the form 30''k'' + ''i'' for ''i'' ∈ {1, 7, 11, 13, 17, 19, 23, 29}. Of course, not all numbers of the form 30''k'' + ''i'' with ''i'' coprime to 30 are prime. For example, 77 = 30 · 2 + 17 = 7 · 11 is not prime, even though 17 is coprime to 30.
As ''c''# grows, the fraction of coprime remainders to remainders decreases, and so the time to test ''n'' decreases (though it is still necessary to check for divisibility by all primes that are less than ''c''). Observations analogous to the preceding can be applied recursively, giving the sieve of Eratosthenes.
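The sieve of Eratosthenes can be sketched as follows (an illustrative version, not tuned for memory): it iteratively marks the multiples of each prime as composite, and the unmarked positions that remain are the primes.

```python
def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= limit."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # Start at p*p: smaller multiples were already marked
            # by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                flags[multiple] = False
    return [i for i, is_p in enumerate(flags) if is_p]
```

For example, `primes_up_to(30)` returns `[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]`.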
One way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, such as all primes up to 200. (Such a list can be computed with the sieve of Eratosthenes or by an algorithm that tests each incremental ''m'' against all known primes ≤ √''m''.) Then, before testing ''n'' for primality with a large-scale method, ''n'' can first be checked for divisibility by any prime from the list. If it is divisible by any of those numbers then it is composite, and any further tests can be skipped.
A simple but inefficient primality test uses Wilson's theorem, which states that ''p'' is prime if and only if:
:(''p'' − 1)! ≡ −1 (mod ''p'').
Although this method requires about ''p'' modular multiplications, rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods.
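As a sketch, Wilson's theorem gives a correct but very slow test, since evaluating the factorial costs about ''p'' modular multiplications:

```python
def is_prime_wilson(p: int) -> bool:
    """Wilson's theorem: p > 1 is prime iff (p - 1)! ≡ -1 (mod p)."""
    if p < 2:
        return False
    fact = 1
    for k in range(2, p):   # about p modular multiplications: impractical
        fact = (fact * k) % p
    return fact == p - 1    # p - 1 represents -1 modulo p
```

Compare this with trial division, which needs only about √''p'' divisions.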
Heuristic tests
These are tests that seem to work well in practice, but are unproven and therefore are not, technically speaking, algorithms at all.
The Fermat primality test and the Fibonacci test are simple examples, and they are effective when combined.
John Selfridge has conjectured that if ''p'' is an odd number, and ''p'' ≡ ±2 (mod 5), then ''p'' will be prime if both of the following hold:
* 2^(''p''−1) ≡ 1 (mod ''p''),
* ''f''_(''p''+1) ≡ 0 (mod ''p''),
where ''f''_''k'' is the ''k''-th Fibonacci number. The first condition is the Fermat primality test using base 2.
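Selfridge's two conditions can be checked directly. The sketch below is illustrative: it assumes a fast-doubling routine for Fibonacci numbers modulo ''p'' (the helper name `fib_mod` is our own, not from the source).

```python
def fib_mod(k: int, m: int) -> int:
    """k-th Fibonacci number modulo m, via the fast-doubling identities
    F(2j) = F(j)*(2*F(j+1) - F(j)) and F(2j+1) = F(j)^2 + F(j+1)^2."""
    def doubling(k: int) -> tuple[int, int]:
        if k == 0:
            return (0, 1)  # (F(0), F(1))
        a, b = doubling(k // 2)
        c = (a * (2 * b - a)) % m
        d = (a * a + b * b) % m
        return (d, (c + d) % m) if k % 2 else (c, d)
    return doubling(k)[0]

def selfridge_probable_prime(p: int) -> bool:
    """Selfridge's conjectured test for odd p with p ≡ ±2 (mod 5)."""
    assert p % 2 == 1 and p % 5 in (2, 3), "test only applies to such p"
    return pow(2, p - 1, p) == 1 and fib_mod(p + 1, p) == 0
```

For example, 7 and 13 (both ≡ ±2 mod 5) pass, while the composite 27 fails the base-2 Fermat condition.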
In general, if ''p'' ≡ ''a'' (mod ''x''^2 + 4), where ''a'' is a quadratic non-residue (mod ''x''^2 + 4), then ''p'' should be prime if the following conditions hold:
* 2^(''p''−1) ≡ 1 (mod ''p''),
* ''f''(1)_(''p''+1) ≡ 0 (mod ''p''),
where ''f''(''x'')_''k'' is the ''k''-th Fibonacci polynomial at ''x''.
Selfridge,
Carl Pomerance and
Samuel Wagstaff together offer $620 for a counterexample.
Probabilistic tests
Probabilistic tests are more rigorous than heuristics in that they provide provable bounds on the probability of being fooled by a composite number.
Multiple popular primality tests are probabilistic tests. These tests use, apart from the tested number ''n'', some other numbers ''a'' which are chosen at random from some sample space; the usual randomized primality tests never report a prime number as composite, but it is possible for a composite number to be reported as prime. The probability of error can be reduced by repeating the test with several independently chosen values of ''a''; for two commonly used tests, for ''any'' composite ''n'' at least half the values of ''a'' detect the compositeness of ''n'', so ''k'' repetitions reduce the error probability to at most 2^(−''k''), which can be made arbitrarily small by increasing ''k''.
The basic structure of randomized primality tests is as follows:
#Randomly pick a number ''a''.
#Check equality (corresponding to the chosen test) involving ''a'' and the given number ''n''. If the equality fails to hold true, then ''n'' is a composite number and ''a'' is a ''witness'' for the compositeness, and the test stops.
#Repeat from step one until the required accuracy is reached.
After one or more iterations, if ''n'' is not found to be a composite number, then it can be declared
probably prime.
Fermat primality test
The simplest probabilistic primality test is the Fermat primality test (actually a compositeness test). It works as follows:
:Given an integer ''n'', choose some integer ''a'' coprime to ''n'' and calculate ''a''^(''n''−1) modulo ''n''. If the result is different from 1, then ''n'' is composite. If it is 1, then ''n'' may be prime.
If ''a''^(''n''−1) (modulo ''n'') is 1 but ''n'' is not prime, then ''n'' is called a pseudoprime to base ''a''. In practice, if ''a''^(''n''−1) (modulo ''n'') is 1, then ''n'' is usually prime. But here is a counterexample:
if ''n'' = 341 and ''a'' = 2, then
:2^340 ≡ 1 (mod 341)
even though 341 = 11·31 is composite. In fact, 341 is the smallest pseudoprime base 2 (see Figure 1 of PSW).
There are only 21853 pseudoprimes base 2 that are less than 2.5×10^10 (see page 1005 of PSW). This means that, for ''n'' up to 2.5×10^10, if 2^(''n''−1) (modulo ''n'') equals 1, then ''n'' is prime, unless ''n'' is one of these 21853 pseudoprimes.
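A sketch of the Fermat test with random bases: if any base proves compositeness the test stops immediately; otherwise ''n'' is only a probable prime.

```python
import random

def fermat_test(n: int, rounds: int = 5) -> bool:
    """Return False if n is proven composite, True if n is a probable prime
    (possibly a pseudoprime to every base tried, as with n = 341, a = 2)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a witness: n is certainly composite
    return True
```

The counterexample above is easy to verify: `pow(2, 340, 341)` evaluates to 1 even though 341 is composite.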
Some composite numbers (Carmichael numbers) have the property that ''a''^(''n''−1) is 1 (modulo ''n'') for every ''a'' that is coprime to ''n''. The smallest example is ''n'' = 561 = 3·11·17, for which ''a''^560 is 1 (modulo 561) for all ''a'' coprime to 561. Nevertheless, the Fermat test is often used if a rapid screening of numbers is needed, for instance in the key generation phase of the RSA public key cryptographic algorithm.
Miller–Rabin and Solovay–Strassen primality test
The
Miller–Rabin primality test
and
Solovay–Strassen primality test are more sophisticated variants, which detect all composites (once again, this means: for ''every'' composite number ''n'', at least 3/4 (Miller–Rabin) or 1/2 (Solovay–Strassen) of numbers ''a'' are witnesses of compositeness of ''n''). These are also compositeness tests.
The Miller–Rabin primality test works as follows:
Given an integer ''n'', choose some positive integer ''a'' < ''n''. Let 2^''s''·''d'' = ''n'' − 1, where ''d'' is odd. If
:''a''^''d'' ≢ ±1 (mod ''n'')
and
:''a''^(2^''r''·''d'') ≢ −1 (mod ''n'') for all 0 ≤ ''r'' ≤ ''s'' − 1,
then ''n'' is composite and ''a'' is a witness for the compositeness. Otherwise, ''n'' may or may not be prime.
The Miller–Rabin test is a
strong probable prime test (see PSW
page 1004).
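The steps above can be sketched as follows (random bases; since at least 3/4 of the bases witness any composite ''n'', the probability that a composite survives ''k'' rounds is at most 4^(−''k'')):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return False if n is proven composite, True if n is a probable prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue  # this base is not a witness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break  # not a witness
        else:
            return False  # a is a witness: n is composite
    return True
```

Unlike the plain Fermat test, this catches the pseudoprime 341 and the Carmichael number 561.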
The Solovay–Strassen primality test uses another equality: Given an odd number ''n'', choose some integer ''a'' < ''n''. If
:''a''^((''n''−1)/2) ≢ (''a''/''n'') (mod ''n''), where (''a''/''n'') is the Jacobi symbol,
then ''n'' is composite and ''a'' is a witness for the compositeness. Otherwise, ''n'' may or may not be prime.
The Solovay–Strassen test is an
Euler probable prime test (see PSW
page 1003).
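A sketch of the Solovay–Strassen test: the Jacobi symbol can be computed without factoring ''n'' by binary quadratic reciprocity, and the test then compares it against the Euler-criterion power ''a''^((''n''−1)/2).

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via binary quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 exactly when n ≡ ±3 (mod 8)
                result = -result
        a, n = n, a                  # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # (a/n) = 0 when gcd(a, n) > 1

def solovay_strassen(n: int, rounds: int = 20) -> bool:
    """Return False if n is proven composite, True if n is a probable prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n) % n         # represent -1 as n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False             # a is a witness: n is composite
    return True
```

At least half of the bases witness any odd composite ''n'', so ''k'' rounds leave an error probability of at most 2^(−''k'').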
For each individual value of ''a'', the Solovay–Strassen test is weaker than the Miller–Rabin test. For example, if ''n'' = 1905 and ''a'' = 2, then the Miller–Rabin test shows that ''n'' is composite, but the Solovay–Strassen test does not. This is because 1905 is an Euler
pseudoprime base 2 but not a strong pseudoprime base 2 (this is illustrated in Figure 1 of PSW
).
Frobenius primality test
The Miller–Rabin and the Solovay–Strassen primality tests are simple and are much faster than other general primality tests. One method of improving efficiency further in some cases is the
Frobenius pseudoprimality test; a round of this test takes about three times as long as a round of Miller–Rabin, but achieves a probability bound comparable to seven rounds of Miller–Rabin.
The Frobenius test is a generalization of the
Lucas probable prime test.
Baillie–PSW primality test
The
Baillie–PSW primality test
is a probabilistic primality test that combines a Fermat or Miller–Rabin test with a
Lucas probable prime test to get a primality test that has no known counterexamples. That is, there are no known composite ''n'' for which this test reports that ''n'' is probably prime.
It has been shown that there are no counterexamples for ''n'' < 2^64.
Other tests
Leonard Adleman
and Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of the
elliptic curve primality test. Unlike the other probabilistic tests, this algorithm produces a
primality certificate, and thus can be used to prove that a number is prime.
The algorithm is prohibitively slow in practice.
If quantum computers were available, primality could be tested asymptotically faster than by using classical computers. A combination of Shor's algorithm, an integer factorization method, with the Pocklington primality test could solve the problem in Õ((log ''n'')^3).
Fast deterministic tests
Near the beginning of the 20th century, it was shown that a corollary of Fermat's little theorem could be used to test for primality. This resulted in the Pocklington primality test. However, as this test requires a partial factorization of ''n'' − 1, the running time was still quite slow in the worst case. The first deterministic primality test significantly faster than the naive methods was the cyclotomy test; its runtime can be proven to be O((log ''n'')^(''c'' log log log ''n'')), where ''n'' is the number to test for primality and ''c'' is a constant independent of ''n''. A number of further improvements were made, but none could be proven to have polynomial running time. (Running time is measured in terms of the size of the input, which in this case is ~ log ''n'', that being the number of bits needed to represent the number ''n''.) The elliptic curve primality test can be proven to run in O((log ''n'')^6), if some conjectures on analytic number theory are true. Similarly, under the generalized Riemann hypothesis (which Miller, confusingly, calls the "extended Riemann hypothesis"), the deterministic Miller's test, which forms the basis of the probabilistic Miller–Rabin test, can be proved to run in Õ((log ''n'')^4). In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all. Because the implementation of these two methods is rather difficult and creates a risk of programming errors, slower but simpler tests are often preferred.
In 2002, the first provably unconditional deterministic polynomial time test for primality was invented by
Manindra Agrawal,
Neeraj Kayal, and
Nitin Saxena. The AKS primality test runs in Õ((log ''n'')^12) (improved to Õ((log ''n'')^7.5) in the published revision of their paper), which can be further reduced to Õ((log ''n'')^6) if the Sophie Germain conjecture is true. Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log ''n'')^6) unconditionally.
Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ((log ''n'')^3) if Agrawal's conjecture is true; however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false. A modified version of Agrawal's conjecture, the Agrawal–Popovych conjecture, may still be true.
Complexity
In
computational complexity theory
, the formal language corresponding to the prime numbers is denoted as PRIMES. It is easy to show that PRIMES is in
Co-NP: its complement COMPOSITES is in NP because one can decide compositeness by nondeterministically guessing a factor.
In 1975,
Vaughan Pratt
showed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was in
NP, and therefore in NP ∩ coNP. See
primality certificate for details.
The subsequent discovery of the Solovay–Strassen and Miller–Rabin algorithms put PRIMES in
coRP. In 1992, the Adleman–Huang algorithm reduced the complexity to ZPP = RP ∩ coRP, which superseded Pratt's result.
The Adleman–Pomerance–Rumely primality test from 1983 put PRIMES in QP (quasi-polynomial time), which is not known to be comparable with the classes mentioned above.
Because of its tractability in practice, polynomial-time algorithms assuming the Riemann hypothesis, and other similar evidence, it was long suspected but not proven that primality could be solved in polynomial time. The existence of the AKS primality test finally settled this long-standing question and placed PRIMES in P. However, PRIMES is not known to be P-complete
, and it is not known whether it lies in classes lying inside P such as NC or L. It is known that PRIMES is not in AC0.
Number-theoretic methods
Certain number-theoretic methods exist for testing whether a number is prime, such as the Lucas test and Proth's test. These tests typically require factorization of ''n'' + 1, ''n'' − 1, or a similar quantity, which means that they are not useful for general-purpose primality testing, but they are often quite powerful when the tested number ''n'' is known to have a special form.
The Lucas test relies on the fact that the multiplicative order of a number ''a'' modulo ''n'' is ''n'' − 1 for a prime ''n'' when ''a'' is a primitive root modulo ''n''. If we can show ''a'' is primitive for ''n'', we can show ''n'' is prime.
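A sketch of the Lucas test, assuming the distinct prime factors of ''n'' − 1 are supplied by the caller: ''a'' has multiplicative order ''n'' − 1 exactly when ''a''^(''n''−1) ≡ 1 (mod ''n'') and ''a''^((''n''−1)/''q'') ≢ 1 (mod ''n'') for every prime ''q'' dividing ''n'' − 1.

```python
def lucas_test(n: int, factors: list[int], tries: int = 100) -> bool:
    """Lucas primality test. `factors` must be the distinct prime factors
    of n - 1. Returns True only when some a is proven to be a primitive
    root modulo n, which proves n prime; False is returned both when n
    is proven composite and when no certifying a was found."""
    for a in range(2, 2 + tries):
        if pow(a, n - 1, n) != 1:
            return False  # Fermat condition fails: n is composite
        if all(pow(a, (n - 1) // q, n) != 1 for q in factors):
            return True   # a has order n - 1, so n is prime
    return False          # inconclusive
```

For example, 96 = 2^5 · 3, so `lucas_test(97, [2, 3])` proves 97 prime, while `lucas_test(91, [2, 3, 5])` reports the composite 91 immediately.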
References
Sources
* Chapter 3: Recognizing Primes and Composites, pp. 109–158. Chapter 4: Primality Proving, pp. 159–190. Section 7.6: Elliptic curve primality proving (ECPP), pp. 334–340.
External links
* – Implementation of the Solovay–Strassen primality test in Maple
* Distinguishing prime numbers from composite numbers, by D.J. Bernstein (cr.yp.to)
* The Prime Pages (primes.utm.edu)
* PRIMABOINCA is a research project that uses Internet-connected computers to search for a counterexample to some conjectures. The first conjecture (Agrawal's conjecture) was the basis for the formulation of the first deterministic prime test algorithm in polynomial time (AKS algorithm).
{{DEFAULTSORT:Primality Test}}
Asymmetric-key algorithms