In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often <math>\mathbb{Q}^n</math>) or free modules (often <math>\mathbb{Z}^n</math>) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space ''V'' and a norm ''N''. The norm usually considered is the Euclidean norm ''L''<sup>2</sup>. However, other norms (such as ''L''<sup>''p''</sup>) are also considered and show up in a variety of results. Let <math>\lambda(L)</math> denote the length of the shortest non-zero vector in the lattice ''L'', that is,
: <math>\lambda(L) = \min_{v \in L \setminus \{\mathbf{0}\}} \|v\|_N.</math>
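For example, in the two-dimensional lattice generated by the basis vectors <math>(2,0)</math> and <math>(1,2)</math>, every non-zero lattice vector has the form <math>a(2,0)+b(1,2)=(2a+b,\,2b)</math> with integers <math>a, b</math> not both zero, so its squared length <math>(2a+b)^2+4b^2</math> is at least 4; hence <math>\lambda(L)=2</math>, attained by <math>\pm(2,0)</math>.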
Shortest vector problem (SVP)

In the SVP, a basis of a vector space ''V'' and a norm ''N'' (often ''L''<sup>2</sup>) are given for a lattice ''L'' and one must find the shortest non-zero vector in ''V'', as measured by ''N'', in ''L''. In other words, the algorithm should output a non-zero vector ''v'' such that <math>N(v) = \lambda(L)</math>.

In the γ-approximation version SVP<sub>γ</sub>, one must find a non-zero lattice vector of length at most <math>\gamma \cdot \lambda(L)</math> for given <math>\gamma \geq 1</math>.
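To make the definition concrete, the following brute-force sketch (the function name and coefficient bound are illustrative choices, not a standard interface) enumerates all small integer combinations of the basis vectors and keeps the shortest non-zero one. It is only feasible in very low dimensions, and only exact when the true shortest vector really has coefficients within the chosen bound:

<syntaxhighlight lang="python">
import itertools
import numpy as np

def shortest_vector_bruteforce(basis, coeff_bound=3):
    """Brute-force exact SVP for a tiny lattice (rows of `basis` are the basis vectors).

    Enumerates every integer coefficient vector with entries in
    [-coeff_bound, coeff_bound] and returns the shortest non-zero
    lattice vector found, together with its Euclidean length.
    """
    B = np.asarray(basis, dtype=float)
    n = B.shape[0]
    best_vec, best_len = None, float("inf")
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if not any(coeffs):                 # skip the zero vector
            continue
        v = np.array(coeffs) @ B            # lattice vector = integer combination of basis rows
        length = np.linalg.norm(v)          # Euclidean (L2) norm
        if length < best_len:
            best_vec, best_len = v, length
    return best_vec, best_len

# Example: for the basis (2, 0), (1, 2) the shortest non-zero vector is ±(2, 0), so lambda(L) = 2.
print(shortest_vector_bruteforce([[2, 0], [1, 2]]))
</syntaxhighlight>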
Hardness results
The exact version of the problem is only known to be NP-hard for randomized reductions.
By contrast, the corresponding problem with respect to the uniform norm is known to be NP-hard.
Algorithms for the Euclidean norm
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time (<math>2^{\omega(n)}</math>) and <math>\operatorname{poly}(n)</math> memory, and algorithms requiring both exponential time and space (<math>2^{\Theta(n)}</math>) in the lattice dimension <math>n</math>. The former class of algorithms most notably includes lattice enumeration and random sampling reduction, while the latter includes lattice sieving, computing the Voronoi cell of the lattice, and discrete Gaussian sampling. An open problem is whether algorithms for solving exact SVP exist running in single exponential time (<math>2^{O(n)}</math>) and requiring memory scaling polynomially in the lattice dimension.
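As a rough illustration of the pairwise-reduction idea behind sieving (and nothing more: real sieve algorithms such as the Gauss sieve use careful sampling and collision handling, and come with <math>2^{\Theta(n)}</math> time and space analyses), the hypothetical sketch below keeps a pool of sampled lattice vectors and shortens each new sample against that pool:

<syntaxhighlight lang="python">
import random
import numpy as np

def toy_sieve(basis, samples=2000, seed=0):
    """Toy sketch of pairwise reduction, the basic step of lattice sieving.

    Samples random lattice vectors and shortens each new sample by adding or
    subtracting previously kept vectors.  Purely illustrative: it carries no
    guarantee of finding the exact shortest vector.
    """
    rng = random.Random(seed)
    B = np.asarray(basis, dtype=float)
    n = B.shape[0]
    pool = []
    for _ in range(samples):
        # A random lattice vector: small integer combination of the basis rows.
        v = np.array([rng.randint(-5, 5) for _ in range(n)]) @ B
        for w in pool:
            for cand in (v - w, v + w):
                # Keep the combination if it is shorter and non-zero.
                if cand.any() and np.linalg.norm(cand) < np.linalg.norm(v):
                    v = cand
        if v.any():
            pool.append(v)
    return min(pool, key=np.linalg.norm)

# Example: in the lattice spanned by (2, 0) and (1, 2) this typically returns a vector of length 2.
print(toy_sieve([[2, 0], [1, 2]]))
</syntaxhighlight>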
To solve the γ-approximation version SVP<sub>γ</sub> for <math>\gamma > 1</math> for the Euclidean norm, the best known approaches are based on using lattice basis reduction. For large <math>\gamma = 2^{\Omega(n)}</math>, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of <math>\gamma</math>, the Block Korkine–Zolotarev algorithm (BKZ) is commonly used, where the input to the algorithm (the block size <math>\beta</math>) determines the time complexity and output quality: for large approximation factors <math>\gamma</math>, a small block size <math>\beta</math> suffices, and the algorithm terminates quickly. For small <math>\gamma</math>, larger <math>\beta</math> are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most <math>\beta</math>), and its overall complexity is closely related to the costs of these SVP calls in dimension <math>\beta</math>.
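In dimension 2 the basis-reduction approach can be written down in a few lines: the Lagrange–Gauss algorithm, sketched below, repeatedly size-reduces the longer basis vector against the shorter one and swaps them, and the first vector of the reduced basis is an exact shortest non-zero vector of the planar lattice. LLL can be viewed as a higher-dimensional generalization of this procedure (the code is a minimal sketch using floating-point arithmetic, not a production implementation):

<syntaxhighlight lang="python">
import numpy as np

def lagrange_gauss_reduce(b1, b2):
    """Lagrange–Gauss reduction of a 2-dimensional lattice basis.

    Returns a reduced basis (b1, b2) with ||b1|| <= ||b2||; in dimension 2
    the vector b1 is then a shortest non-zero lattice vector.
    """
    b1, b2 = np.asarray(b1, dtype=float), np.asarray(b2, dtype=float)
    if np.dot(b1, b1) > np.dot(b2, b2):
        b1, b2 = b2, b1                      # ensure b1 is the shorter vector
    while True:
        # Size-reduce b2 against b1 by subtracting the rounded Gram coefficient.
        mu = round(np.dot(b1, b2) / np.dot(b1, b1))
        b2 = b2 - mu * b1
        if np.dot(b2, b2) >= np.dot(b1, b1):
            return b1, b2                    # no further improvement possible
        b1, b2 = b2, b1                      # b2 became shorter: swap and repeat

# Example: the reduced basis of the lattice spanned by (1, 2) and (2, 0); the first vector has length 2.
print(lagrange_gauss_reduce([1, 2], [2, 0]))
</syntaxhighlight>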
GapSVP
The problem GapSVP<sub>β</sub> consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most <math>1</math> or larger than <math>\beta</math>, where <math>\beta</math> can be a fixed function of the dimension of the lattice <math>n</math>. Given a basis for the lattice, the algorithm must decide whether <math>\lambda(L) \leq 1</math> or <math>\lambda(L) > \beta</math>. Like other promise problems, the algorithm is allowed to err on all other cases.
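As a toy illustration of the promise, a decider only has to separate the two promised cases; reusing the hypothetical <code>shortest_vector_bruteforce</code> helper from the SVP sketch above, any cut-off between 1 and <math>\beta</math> works, and the answer on instances with <math>1 < \lambda(L) \leq \beta</math> is irrelevant:

<syntaxhighlight lang="python">
def gap_svp_decide(basis, beta, coeff_bound=3):
    """Toy GapSVP_beta decider (sketch only, exponential time).

    Accepts (True) when lambda(L) <= 1 and rejects (False) when
    lambda(L) > beta.  On inputs violating the promise any answer
    is allowed, so a single threshold test suffices; the parameter
    beta only matters for interpreting the promise.
    """
    _, lam = shortest_vector_bruteforce(basis, coeff_bound)
    return lam <= 1
</syntaxhighlight>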
Yet another version of the problem is GapSVP<sub>ζ,γ</sub> for some functions <math>\zeta</math> and <math>\gamma</math>. The input to the algorithm is a basis <math>B</math> and a number <math>d</math>. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that <math>\lambda(L) \leq \zeta(n)</math>, and that <math>1 \leq d \leq \zeta(n)/\gamma(n)</math>, where <math>n</math> is the dimension. The algorithm must accept if <math>\lambda(L) \leq d</math>, and reject if <math>\lambda(L) \geq \gamma(n) \cdot d</math>. For large <math>\zeta</math> (i.e. <math>\zeta(n) > 2^{n/2}</math>), the problem is equivalent to GapSVP<sub>γ</sub>, because a preprocessing done using the LLL algorithm makes the second condition (and hence, <math>\zeta</math>) redundant.
Closest vector problem (CVP)

In CVP, a basis of a vector space ''V'' and a metric ''M'' (often ''L''<sup>2</sup>) are given for a lattice ''L'', as well as a vector ''v'' in ''V'' but not necessarily in ''L''. It is desired to find the vector in ''L'' closest to ''v'' (as measured by ''M''). In the <math>\gamma</math>-approximation version CVP<sub>γ</sub>, one must find a lattice vector at distance at most <math>\gamma \cdot d(v, L)</math>, where <math>d(v, L)</math> denotes the distance from ''v'' to the lattice ''L''.
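A classic way to obtain an approximate answer is Babai's rounding technique: write the target in the coordinates of the basis, round every coordinate to the nearest integer, and map the result back into the lattice. The sketch below (assuming a row-vector basis; it is a heuristic whose quality depends heavily on how well-reduced the basis is, not an exact CVP solver) illustrates the idea:

<syntaxhighlight lang="python">
import numpy as np

def babai_rounding(basis, target):
    """Babai's rounding heuristic for approximate CVP (sketch).

    Solves for the real coordinates of `target` in the given basis, rounds
    them to integers, and returns the corresponding lattice vector.
    """
    B = np.asarray(basis, dtype=float)      # rows are the basis vectors
    t = np.asarray(target, dtype=float)
    coords = np.linalg.solve(B.T, t)        # real coordinates of t in the basis
    return np.rint(coords) @ B              # round and map back to the lattice

# Example: the lattice vector (2, 0) is returned for the target (2.3, 0.4)
# in the lattice spanned by (2, 0) and (1, 2).
print(babai_rounding([[2, 0], [1, 2]], [2.3, 0.4]))
</syntaxhighlight>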
Relationship with SVP
The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVP<sub>γ</sub> (defined above), one can solve SVP<sub>γ</sub> by making some queries to the oracle. The naive method to find the shortest vector by calling the CVP<sub>γ</sub> oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0.
The reduction from SVP<sub>γ</sub> to CVP<sub>γ</sub> is as follows: Suppose that the input to SVP<sub>γ</sub> is the basis for lattice