In computer science and numerical analysis, unit in the last place or unit of least precision (ulp) is the spacing between two consecutive floating-point numbers, i.e., the value the ''least significant digit'' (rightmost digit) represents if it is 1. It is used as a measure of accuracy in numeric calculations.
Definition
The most common definition is: In radix b with precision p, if b^e ≤ |x| < b^(e+1), then ulp(x) = b^(max(e, e_min) − p + 1), where e_min is the minimal exponent of the normal numbers. In particular, ulp(x) = b^(e − p + 1) for normal numbers, and ulp(x) = b^(e_min − p + 1) for subnormals.
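Under the stated assumptions, the common definition can be sketched in Python for IEEE 754 binary64 (b = 2, p = 53, e_min = −1022); math.frexp recovers the exponent exactly, and for comparison the result matches the standard library's math.ulp (Python 3.9+):

```python
import math

def ulp(x: float) -> float:
    """Common definition of ulp, specialized to IEEE 754 binary64:
    radix b = 2, precision p = 53, e_min = -1022 (sketch; finite nonzero x)."""
    p, e_min = 53, -1022
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        raise ValueError("finite nonzero x expected")
    _, e = math.frexp(abs(x))  # |x| = m * 2**e with 0.5 <= m < 1, exactly
    e -= 1                     # now 2**e <= |x| < 2**(e+1)
    return 2.0 ** (max(e, e_min) - p + 1)

assert ulp(1.0) == math.ulp(1.0) == 2.0 ** -52        # normal number
assert ulp(math.pi) == math.ulp(math.pi) == 2.0 ** -51
assert ulp(5e-324) == 2.0 ** -1074                    # subnormal: b**(e_min - p + 1)
```

Using math.frexp rather than a logarithm keeps the exponent computation exact, which matters for inputs just below a power of two.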
Another definition, suggested by John Harrison, is slightly different: ulp(x) is the distance between the two closest ''straddling'' floating-point numbers a and b (i.e., satisfying a ≤ x ≤ b and a ≠ b), assuming that the exponent range is not upper-bounded. These definitions differ only at signed powers of the radix.
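Harrison's definition can likewise be sketched with the Python standard library (math.nextafter, Python 3.9+): every Python float x is itself representable, so the candidate straddling pairs are (pred(x), x) and (x, succ(x)), and the closer pair gives the ulp. Powers of the radix are exactly where the two definitions disagree:

```python
import math

def ulp_harrison(x: float) -> float:
    """Harrison's ulp: distance b - a between the two closest floating-point
    numbers a <= x <= b with a != b (sketch for finite x)."""
    below = math.nextafter(x, -math.inf)  # predecessor of x
    above = math.nextafter(x, math.inf)   # successor of x
    return min(x - below, above - x)

# At powers of the radix the definitions differ by a factor of the radix:
assert ulp_harrison(1.0) == 2.0 ** -53   # gap below 1.0 is smaller
assert math.ulp(1.0) == 2.0 ** -52       # common definition uses the gap above
# Away from powers of the radix they agree:
assert ulp_harrison(math.pi) == math.ulp(math.pi)
```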
The IEEE 754 specification—followed by all modern floating-point hardware—requires that the result of an elementary arithmetic operation (addition, subtraction, multiplication, division, and square root since 1985, and FMA since 2008) be correctly rounded, which implies that in rounding to nearest, the rounded result is within 0.5 ulp of the mathematically exact result, using John Harrison's definition; conversely, this property implies that the distance between the rounded result and the mathematically exact result is minimized (but for the halfway cases, it is satisfied by two consecutive floating-point numbers). Reputable numeric libraries compute the basic transcendental functions to between 0.5 and about 1 ulp. Only a few libraries compute them within 0.5 ulp, this problem being complex due to the table-maker's dilemma.
Since the 2010s, advances in floating-point mathematics have allowed correctly rounded functions to be almost as fast on average as these earlier, less accurate functions. A correctly rounded function would also be fully reproducible. An intermediate accuracy target, slightly above 0.5 ulp, theoretically would produce only one incorrect rounding out of 1000 random floating-point inputs.
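The 0.5-ulp guarantee for the basic operations can be checked directly against exact rational arithmetic. A small Python sketch (the helper name within_half_ulp is illustrative, not from any standard):

```python
import math
from fractions import Fraction

def within_half_ulp(computed: float, exact: Fraction) -> bool:
    """Check the correct-rounding bound: the computed result lies within
    0.5 ulp of the mathematically exact value (round to nearest)."""
    return abs(Fraction(computed) - exact) <= Fraction(math.ulp(computed)) / 2

# Division: 1/3 has no finite binary expansion, yet the quotient the
# hardware returns is the double nearest to the exact value.
assert within_half_ulp(1.0 / 3.0, Fraction(1, 3))

# Square root: compare r = sqrt(2) with the exact value by squaring the
# half-ulp interval around r (this avoids representing an irrational).
r = math.sqrt(2.0)
half = Fraction(math.ulp(r)) / 2
assert (Fraction(r) - half) ** 2 <= 2 <= (Fraction(r) + half) ** 2
```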
Examples
Example 1
Let x be a positive floating-point number and assume that the active rounding mode is round to nearest, ties to even, denoted RN. If ulp(x) ≤ 1, then RN(x + 1) > x. Otherwise, RN(x + 1) = x or RN(x + 1) = x + ulp(x), depending on the value of the least significant digit and the exponent of x. This is demonstrated in the following Haskell code typed at an interactive prompt:
> until (\x -> x == x+1) (+1) 0 :: Float
1.6777216e7
> it-1
1.6777215e7
> it+1
1.6777216e7
Here we start with 0 in single precision (binary32) and repeatedly add 1 until the operation does not change the value. Since the significand for a single-precision number contains 24 bits, the first integer that is not exactly representable is 2^24 + 1, and this value rounds to 2^24 in round to nearest, ties to even. Thus the result is equal to 2^24.
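The same rounding behaviour can be reproduced in Python without Haskell, by forcing a value through the binary32 format with the standard struct module:

```python
import struct

def to_float32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value by
    # packing and unpacking it as a 4-byte IEEE 754 float.
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 2**24 + 1 = 16777217 needs 25 significand bits, so it is not representable
# in binary32; it is a halfway case and rounds (ties to even) back to 2**24.
assert to_float32(16_777_217.0) == 16_777_216.0
# 2**24 - 1 still fits in 24 bits and survives the round trip:
assert to_float32(16_777_215.0) == 16_777_215.0
```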
Example 2
The following example in Java approximates π as a floating-point value by finding the two double values bracketing π: p0 < π < p1.
// π with 20 decimal digits
BigDecimal π = new BigDecimal("3.14159265358979323846");
// truncate to a double floating point
double p0 = π.doubleValue();
// -> 3.141592653589793 (hex: 0x1.921fb54442d18p1)
// p0 is smaller than π, so find next number representable as double
double p1 = Math.nextUp(p0);
// -> 3.1415926535897936 (hex: 0x1.921fb54442d19p1)
Then ulp(π) is determined as p1 − p0.
// ulp(π) is the difference between p1 and p0
BigDecimal ulp = new BigDecimal(p1).subtract(new BigDecimal(p0));
// -> 4.44089209850062616169452667236328125E-16
// (this is precisely 2**(-51))
// same result when using the standard library function
double ulpMath = Math.ulp(p0);
// -> 4.440892098500626E-16 (hex: 0x1.0p-51)
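The same bracketing can be reproduced with the Python standard library (math.nextafter and math.ulp, available since Python 3.9): π lies in [2, 4), so its binary64 exponent is 1 and the 52 fractional significand bits give a spacing of 2^(1 − 52) = 2^−51.

```python
import math

p0 = math.pi                       # nearest double below pi
p1 = math.nextafter(p0, math.inf)  # next representable double above p0
# The gap between the two doubles bracketing pi is exactly 2**-51:
assert p1 - p0 == 2.0 ** -51
assert math.ulp(p0) == 2.0 ** -51  # same result from the library function
```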
Example 3
Another example, in
Python, also typed at an interactive prompt, is:
>>> x = 1.0
>>> p = 0
>>> while x != x + 1:
... x = x * 2
... p = p + 1
...
>>> x
9007199254740992.0
>>> p
53
>>> x + 2 + 1
9007199254740996.0
In this case, we start with x = 1 and repeatedly double it until x = x + 1. Similarly to Example 1, the result is 2^53 because the double-precision floating-point format uses a 53-bit significand.
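The standard library's math.ulp (Python 3.9+) confirms the spacing at this point: at x = 2^53 consecutive doubles are exactly 2 apart, so x + 1 is a halfway case that rounds (ties to even) back to x, while x + 2 reaches the next representable double.

```python
import math

x = 2.0 ** 53
assert math.ulp(x) == 2.0       # spacing between consecutive doubles here
assert x + 1 == x               # halfway case, rounds back down (ties to even)
assert x + 2 == 9007199254740994.0  # next representable double
```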
Language support
The
Boost C++ libraries provide the functions
boost::math::float_next
,
boost::math::float_prior
,
boost::math::nextafter
and
boost::math::float_advance
to obtain nearby (and distant) floating-point values,
and
boost::math::float_distance(a, b)
to calculate the floating-point distance between two doubles.
The
C language library provides functions to calculate the next floating-point number in some given direction:
nextafterf
and
nexttowardf
for
float
,
nextafter
and
nexttoward
for
double
,
nextafterl
and
nexttowardl
for
long double
, declared in <math.h>. It also provides the macros
FLT_EPSILON
,
DBL_EPSILON
,
LDBL_EPSILON
, defined in <float.h>, which represent the positive difference between 1.0 and the next greater representable number in the corresponding type (i.e., the ulp of one).
The
Go standard library provides the functions
math.Nextafter
(for 64-bit floats) and
math.Nextafter32
(for 32-bit floats), both of which return the next representable floating-point value towards another provided floating-point value.
The Java standard library provides the functions Math.ulp(double) and Math.ulp(float). They were introduced with Java 1.5.
The
Swift standard library provides access to the next floating-point number in some given direction via the instance properties
nextDown
and
nextUp
. It also provides the instance property
ulp
and the type property
ulpOfOne
(which corresponds to C macros like
FLT_EPSILON
) for Swift's floating-point types.
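For comparison, the Python standard library has offered equivalent primitives in its math module since version 3.9:

```python
import math

# math.nextafter(x, y) returns the next representable float after x in the
# direction of y; math.ulp(x) returns the ulp of x.
assert math.nextafter(1.0, math.inf) == 1.0 + 2.0 ** -52
assert math.nextafter(1.0, -math.inf) == 1.0 - 2.0 ** -53
assert math.ulp(1.0) == 2.0 ** -52   # the counterpart of C's DBL_EPSILON
assert math.ulp(0.0) == 5e-324       # smallest positive subnormal double
```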
See also
*
IEEE 754
*
ISO/IEC 10967, part 1 requires an ulp function
*
Least significant bit (LSB)
*
Machine epsilon
*
Round-off error
References
Bibliography
*Goldberg, David (March 1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ACM Computing Surveys. Retrieved from http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#689 (see the section "Rounding Error").
*Muller, Jean-Michel (2010). Handbook of Floating-Point Arithmetic. Boston: Birkhäuser. pp. 32–37. ISBN 978-0-8176-4704-9.