Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to ''extended precision'', arbitrary-precision arithmetic refers to implementations of much larger numeric types (with a storage count that usually is not a power of two) using special software (or, rarely, hardware).


Extended precision implementations

There is a long history of extended floating-point formats reaching back nearly to the middle of the last century. Various manufacturers have used different formats for extended precision for different machines. In many cases the format of the extended precision is not quite the same as a scale-up of the ordinary single- and double-precision formats it is meant to extend. In a few cases the implementation was merely a software-based change in the floating-point data format, but in most cases extended precision was implemented in hardware, either built into the central processor itself, or more often, built into the hardware of an optional, attached processor called a "floating-point unit" (FPU) or "floating-point processor" (FPP), accessible to the CPU as a fast input/output device.


IBM extended precision formats

The IBM 1130, sold in 1965, offered two floating-point formats: a 32-bit "standard precision" format and a 40-bit "extended precision" format. Standard precision format contains a 24-bit two's complement significand while extended precision utilizes a 32-bit two's complement significand. The latter format makes full use of the CPU's 32-bit integer operations. The characteristic in both formats is an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations are performed by software, and double precision is not supported at all. The extended format occupies three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit "short" floating-point format and a 64-bit "long" floating-point format. The 360/85 and follow-on System/370 add support for a 128-bit "extended" format. These formats are still supported in the current z/Architecture design, where they are now called the "hexadecimal floating-point" (HFP) formats.


Microsoft MBF extended precision format

The Microsoft BASIC port for the 6502 CPU, in adaptations such as Commodore BASIC, AppleSoft BASIC, KIM-1 BASIC and MicroTAN BASIC, has supported an extended 40-bit variant of the ''Microsoft Binary Format'' (MBF) floating-point format since 1977.


IEEE 754 extended precision formats

The IEEE 754 floating-point standard recommends that implementations provide extended precision formats. The standard specifies the minimum requirements for an extended format but does not specify an encoding; the encoding is the implementor's choice. The IA-32, x86-64, and Itanium processors support an 80-bit "double extended" extended precision format with a 64-bit significand.

The Intel 8087 math coprocessor was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit "single precision" format and a 64-bit "double precision" format for encoding and interchanging floating-point numbers. The temporary real (extended) format was designed not to store data at higher precision as such, but rather primarily to allow for the computation of double-precision results more reliably and accurately by minimising overflow and roundoff errors in intermediate calculations. For example, many floating-point algorithms (e.g. exponentiation) suffer from significant precision loss when computed using the most direct implementations. To mitigate such issues, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit "extended precision" format. The 8087 automatically converts numbers to this format when loading floating-point registers from memory and also converts results back to the more conventional formats when storing the registers back into memory. So that intermediate subexpression results can be saved in extended precision scratch variables and carried across programming-language statements, and so that interrupted calculations can resume where they left off, the 8087 also provides instructions which transfer values between these internal registers and memory without performing any conversion. This gives programs access to the extended format for their own calculations – also reviving the issue of the accuracy of functions of such numbers, but at a higher precision. The floating-point units (FPUs) on all subsequent x86 processors have supported this format, so software can be developed which takes advantage of the higher precision it provides.

William Kahan, a primary designer of the x87 arithmetic and of the initial IEEE 754 standard proposal, notes on the development of the x87 floating point: "An extended format as wide as we dared (80 bits) was included to serve the same support role as the 13 decimal internal format serves in Hewlett-Packard's 10 decimal calculators." Moreover, Kahan notes that 64 bits was the widest significand across which carry propagation could be done without increasing the cycle time on the 8087, and that the x87 extended precision was designed to be extensible to higher precision in future processors: "For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16 byte format. ... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed."

The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit-significand extended precision type (similar to the Intel format, although padded to a 96-bit format with 16 unused bits inserted between the exponent and significand fields). The follow-on ColdFire processors do not support this 96-bit extended precision format. The FPA10 math coprocessor for early ARM processors also supports this extended precision type (similar to the Intel format, although padded to a 96-bit format with 16 zero bits inserted between the sign and the exponent fields), but without correct rounding. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format, as does the IEEE 754 128-bit format.
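The benefit of keeping intermediates in the wider format can be illustrated in C. The following sketch is illustrative only: it assumes an x86 compiler such as GCC or Clang where long double maps to the 80-bit extended format, and its variable names are not taken from any source above. It accumulates a sum of double values in a long double so that low-order bits that a 53-bit accumulator would round away are retained until the final result is stored.

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* Summing values whose magnitudes differ greatly: each 1.0 is below the
     * rounding step of a double near 1e16, but well within reach of the
     * 64-bit significand of the 80-bit format. */
    double      big    = 1e16;
    int         n      = 1000;
    double      sum_d  = big;
    long double sum_ld = big;

    for (int i = 0; i < n; i++) {
        sum_d  += 1.0;   /* lost to rounding at every step            */
        sum_ld += 1.0;   /* retained by the wider intermediate format */
    }
    /* Exact answer is 1e16 + 1000. */
    printf("double accumulator:      %.1f\n",  sum_d);
    printf("long double accumulator: %.1Lf\n", sum_ld);
    return 0;
}
</syntaxhighlight>

Under the stated assumptions the double accumulator comes out unchanged at 10^16, while the extended accumulator holds the exact sum 10^16 + 1000; only the final store back to double then incurs a single rounding.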


x86 extended precision format

The x86 extended precision format is an 80-bit format first implemented in the Intel 8087 math coprocessor and is supported by all processors based on the x86 design that incorporate a floating-point unit (FPU). This 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field (i.e. the same range as the 128-bit quadruple precision IEEE 754 format) and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the actual power of 2. An exponent field value of 32767 (all fifteen bits 1) is reserved so as to enable the representation of special states such as infinity and Not a Number (NaN). If the exponent field is zero, the value is a denormal number and the exponent of 2 is −16382.

In this description, "''s''" is the value of the sign bit (0 means positive, 1 means negative), "''e''" is the value of the exponent field interpreted as a positive integer, and "''m''" is the significand interpreted as a positive binary number with the binary point located between bits 63 and 62; a normal number thus has the value (−1)^s × m × 2^(e − 16383). The "''m''" field is the combination of the explicit integer-part bit and the fraction field. In contrast to the single and double-precision formats, this format does not utilize an implicit/hidden bit. Rather, bit 63 contains the integer part of the significand and bits 62-0 hold the fractional part; bit 63 will be 1 on all normalized numbers. There were several advantages to this design when the 8087 was being developed:
* Calculations can be completed a little faster if all bits of the significand are present in the register.
* A 64-bit significand provides sufficient precision to avoid loss of precision when results are converted back to double-precision format in the vast majority of cases.
* This format provides a mechanism for indicating precision loss due to underflow which can be carried through further operations. For example, a chain of multiplications whose intermediate product falls below the smallest normalized magnitude produces a denormal intermediate result, and hence a loss of precision, even though the product of all of the terms can be represented as a normalized number. The 80287 could complete such a calculation while indicating the loss of precision by returning an "unnormal" result (exponent not 0, bit 63 = 0). Processors since the 80387 no longer generate unnormals and do not support unnormal inputs to operations. They will generate a denormal if an underflow occurs but will generate a normalized result if subsequent operations on the denormal can be normalized.
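Because the integer bit of the significand is explicit, the fields can be inspected directly in software. The sketch below is a hedged illustration (the helper name dump80 is made up for this example): it assumes a little-endian x86 target where long double is the 80-bit format and its ten significant bytes occupy the lowest addresses of the object, as with GCC and Clang; any padding bytes above them are ignored.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative helper: print the sign, biased exponent and 64-bit
 * significand (explicit integer bit plus fraction) of a long double. */
static void dump80(long double x) {
    unsigned char b[sizeof(long double)];
    memcpy(b, &x, sizeof b);

    uint64_t mant;                 /* bits 63..0: integer bit and fraction       */
    uint16_t se;                   /* bit 79: sign, bits 78..64: biased exponent */
    memcpy(&mant, b, 8);
    memcpy(&se, b + 8, 2);

    int sign   = se >> 15;
    int bexp   = se & 0x7FFF;      /* biased by 16383 */
    int intbit = (int)(mant >> 63);

    printf("%-28La sign=%d exponent field=%5d (2^%d) integer bit=%d fraction=%016llx\n",
           x, sign, bexp, bexp ? bexp - 16383 : -16382, intbit,
           (unsigned long long)(mant & 0x7FFFFFFFFFFFFFFFULL));
}

int main(void) {
    dump80(1.0L);       /* exponent field 16383, integer bit 1, fraction 0 */
    dump80(-0.75L);
    dump80(1e-4940L);   /* subnormal: exponent field 0, integer bit 0      */
    return 0;
}
</syntaxhighlight>

Printing 1.0L should show an exponent field of 16383 with a zero fraction, and the subnormal input should show an exponent field of 0 with bit 63 clear, matching the description above.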


Introduction to use

The 80-bit floating-point format was widely available by 1984, after the development of C, Fortran and similar computer languages, which initially offered only the common 32- and 64-bit floating-point sizes. On the x86 design most C compilers now support 80-bit extended precision via the long double type, and this was specified in the C99 / C11 standards (Annex F, "IEC 60559 floating-point arithmetic"). Compilers on x86 for other languages often support extended precision as well, sometimes via nonstandard extensions: for example, Turbo Pascal offers an extended type, and several Fortran compilers have a REAL*10 type (analogous to REAL*4 and REAL*8). Such compilers also typically include extended-precision mathematical subroutines, such as square root and trigonometric functions, in their standard libraries.
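As a brief illustration of that support in C (whether long double is actually the 80-bit format is implementation-defined; on x86 with GCC, Clang or the Intel compilers it usually is), a program can query the format through <float.h> and call the extended-precision library routines directly:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    long double x = 0.1L;                   /* the L suffix keeps the constant extended     */
    long double r = sqrtl(2.0L) * sinl(x);  /* extended-precision routines from <math.h>    */

    printf("LDBL_MANT_DIG = %d bits, LDBL_DIG = %d decimal digits\n",
           LDBL_MANT_DIG, LDBL_DIG);        /* 64 and 18 when long double is the x87 format */
    printf("sqrtl(2) * sinl(0.1) = %.21Lg\n", r);
    return 0;
}
</syntaxhighlight>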


Working range

The 80-bit floating-point format has a range (including subnormals) from approximately 3.65×10^−4951 to 1.18×10^4932. Although log10(2^64) ≅ 19.266, this format is usually described as giving approximately eighteen significant digits of precision (the floor of log10(2^63), the minimum guaranteed precision). The use of decimal when talking about binary is unfortunate because most decimal fractions are recurring sequences in binary, just as 2/3 is in decimal. Thus, a value such as 10.15 is represented in binary as equivalent to 10.1499996185... in decimal for REAL*4 but 10.15000000000000035527... in REAL*8: interconversion involves approximation except for those few decimal fractions that represent an exact binary value, such as 0.625. For REAL*10, the decimal string is 10.1499999999999999996530553..., where the last 9 shown is the eighteenth fractional digit and thus the twentieth significant digit of the string. Bounds on conversion between decimal and binary for the 80-bit format can be given as follows: if a decimal string with at most 18 significant digits is correctly rounded to an 80-bit IEEE 754 binary floating-point value (as on input) and then converted back to the same number of significant decimal digits (as for output), the final string will exactly match the original; conversely, if an 80-bit IEEE 754 binary floating-point value is correctly converted and (nearest) rounded to a decimal string with at least 21 significant decimal digits and then converted back to binary format, it will exactly match the original. These approximations are particularly troublesome when specifying the best value for constants in formulae to high precision, as might be calculated via arbitrary-precision arithmetic.
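These decimal images and round-trip bounds can be reproduced with a few lines of C. The sketch below assumes an x86 compiler whose long double is the 80-bit format; with a different long double the digit strings and the 18/21-digit bounds would differ.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    float       f  = 10.15f;
    double      d  = 10.15;
    long double ld = 10.15L;

    /* The nearest representable value at each precision, printed with enough
     * digits to expose the recurring binary fraction. */
    printf("REAL*4  : %.15g\n",  f);
    printf("REAL*8  : %.25g\n",  d);
    printf("REAL*10 : %.30Lg\n", ld);

    /* 21 significant decimal digits are enough to reproduce any 80-bit value
     * exactly on the way back in. */
    char buf[64];
    snprintf(buf, sizeof buf, "%.21Lg", ld);
    long double back = strtold(buf, NULL);
    printf("21-digit round trip: %s -> %s\n", buf, back == ld ? "exact" : "changed");
    return 0;
}
</syntaxhighlight>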


Need for the 80-bit format

A notable example of the need for a minimum of 64 bits of precision in the significand of the extended precision format is the need to avoid precision loss when performing exponentiation on double-precision values. The x86 floating-point units do not provide an instruction that directly performs exponentiation. Instead they provide a set of instructions that a program can use in sequence to perform exponentiation using the equation:

:x^y = 2^{y\,\cdot\,\log_2(x)}

In order to avoid precision loss, the intermediate results \log_2(x) and y\,\cdot\,\log_2(x) must be computed with much higher precision, because effectively both the exponent and the significand fields of x must fit into the significand field of the intermediate result. Subsequently, the significand field of the intermediate result is split between the exponent and significand fields of the final result when 2^{y\,\cdot\,\log_2(x)} is calculated. The following discussion describes this requirement in more detail.

With a little unpacking, an IEEE 754 double-precision value can be represented as:

:2^{(-1)^s\,\cdot\,E}\,\cdot\,M

where s is the sign of the exponent (either 0 or 1), E is the unbiased exponent, which is an integer that ranges from 0 to 1023, and M is the significand, a 53-bit value that falls in the range 1 \le M < 2. Negative numbers and zero can be ignored because the logarithm of these values is undefined. For purposes of this discussion M does not have 53 bits of precision because it is constrained to be greater than or equal to one, i.e. the hidden bit does not count towards the precision. (Note that in situations where M is less than 1, the value is actually a denormal and therefore may have already suffered precision loss. This situation is beyond the scope of this article.)

Taking the log of this representation of a double-precision number and simplifying results in the following:

:\log_2(2^{(-1)^s\,\cdot\,E}\,\cdot\,M) = (-1)^s\,\cdot\,E\,\cdot\,\log_2(2)\,+\,\log_2(M) = \pm\,E\,+\,\log_2(M)

This result demonstrates that when taking the base-2 logarithm of a number, the sign of the exponent of the original value becomes the sign of the logarithm, the exponent of the original value becomes the integer part of the significand of the logarithm, and the significand of the original value is transformed into the fractional part of the significand of the logarithm.

Because E is an integer in the range 0 to 1023, up to 10 bits to the left of the radix point are needed to represent the integer part of the logarithm. Because M falls in the range 1 \le M < 2, the value of \log_2(M) falls in the range 0 \le \log_2(M) < 1, so at least 52 bits are needed to the right of the radix point to represent the fractional part of the logarithm. Combining 10 bits to the left of the radix point with 52 bits to the right of the radix point means that the significand part of the logarithm must be computed to at least 62 bits of precision. In practice, values of M less than \sqrt{2} require 53 bits to the right of the radix point and values of M less than \sqrt[4]{2} require 54 bits to the right of the radix point to avoid precision loss. Balancing this requirement for added precision to the right of the radix point, exponents less than 512 require only 9 bits to the left of the radix point and exponents less than 256 require only 8 bits to the left of the radix point.

The final part of the exponentiation calculation is computing 2^{I\,+\,F}, where the "intermediate result" consists of an integer part I added to a fractional part F. If the intermediate result is negative then a slight adjustment is needed to get a positive fractional part, because both I and F are negative numbers. For positive intermediate results:

:2^{I\,+\,F} = 2^I\,\cdot\,2^F

For negative intermediate results:

:2^{I\,+\,F} = 2^{(I\,-\,1)\,+\,(F\,+\,1)} = 2^{I\,-\,1}\,\cdot\,2^{F\,+\,1}

Thus the integer part of the intermediate result (I or I − 1) plus a bias becomes the exponent of the final result, and the transformed positive fractional part of the intermediate result, 2^F or 2^{F\,+\,1}, becomes the significand of the final result. In order to supply 52 bits of precision to the final result, the positive fractional part must be maintained to at least 52 bits. In conclusion, the exact number of bits of precision needed in the significand of the intermediate result is somewhat data dependent, but 64 bits is sufficient to avoid precision loss in the vast majority of exponentiation computations involving double-precision numbers.

The number of bits needed for the exponent of the extended precision format follows from the requirement that the product of two double-precision numbers should not overflow when computed using the extended format. The largest possible exponent of a double-precision value is 1023, so the exponent of the largest possible product of two double-precision numbers is 2047 (an 11-bit value). Adding in a bias to account for negative exponents means that the exponent field must be at least 12 bits wide. Combining these requirements (1 bit for the sign, 12 bits for the biased exponent, and 64 bits for the significand) means that the extended precision format would need at least 77 bits. Engineering considerations resulted in the final definition of the 80-bit format; in particular, the IEEE 754 standard requires the exponent range of an extended precision format to match that of the next largest, quad, precision format, which is 15 bits.

Another example of a calculation that benefits from extended precision arithmetic is iterative refinement, a scheme used to indirectly clean out errors accumulated in the direct solution during the typically very large number of calculations made for numerical linear algebra.
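A rough software analogue of the exponentiation decomposition can be written in C with the standard exp2/log2 functions and their long double variants (the x87 hardware itself uses its FYL2X, F2XM1 and FSCALE instructions rather than library calls). The function names below are illustrative only; the point is that holding y·log₂(x) in a 64-bit significand preserves low-order bits of the result that a 53-bit intermediate loses once the result's exponent is large.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>

/* Both helpers evaluate x^y as 2^(y*log2(x)); only the width of the
 * intermediate differs.  Neither is a library function. */
static double pow_via_double(double x, double y) {
    double t = y * log2(x);   /* 53-bit intermediate: the exponent and the  */
    return exp2(t);           /* fraction of the result share these bits    */
}

static double pow_via_extended(double x, double y) {
    long double t = (long double)y * log2l((long double)x);   /* 64-bit intermediate */
    return (double)exp2l(t);
}

int main(void) {
    double x = 123.456, y = 123.456;   /* result near 2^858: about 10 integer bits
                                          of y*log2(x) crowd out fraction bits    */
    printf("double intermediate:   %.17g\n", pow_via_double(x, y));
    printf("extended intermediate: %.17g\n", pow_via_extended(x, y));
    printf("library pow():         %.17g\n", pow(x, y));
    return 0;
}
</syntaxhighlight>

Compiled under the stated assumptions (for example, gcc example.c -lm), the double-intermediate version typically disagrees with the other two in its last few digits, which is exactly the loss the 80-bit intermediate format was designed to avoid.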


Language support

* Some C/C++ implementations (e.g., the GNU Compiler Collection (GCC), Clang, Intel C++) implement long double using 80-bit floating-point numbers on x86 systems. However, this is implementation-defined behavior; it is not required by the standard, merely allowed, as specified for IEEE 754 hardware in Annex F ("IEC 60559 floating-point arithmetic") of the C99 standard. GCC also provides __float80 and __float128 types (see the sketch after this list).
* Some Common Lisp implementations (e.g. CMU Common Lisp, Embeddable Common Lisp) implement long-float using 80-bit floating-point numbers on x86 systems.
* The D programming language implements real using the largest floating-point size implemented in hardware: 80 bits for x86 CPUs, or double precision, whichever is larger.
* Object Pascal (Delphi) has, in addition to Single (32-bit) and Double (64-bit), an Extended type, which is 80 bits on traditional 32-bit x86 targets, though padding and platform can change its size.
* The Racket run-time system provides the 80-bit extflonum datatype on x86 systems.
* The Swift standard library provides the Float80 datatype.
* The PowerBASIC BASIC compiler provides the EXT or EXTENDED 10-byte extended-precision floating-point data type.
* Zig provides an f80 type since version 0.10.0.
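As a small illustration of the GCC entry above, the following sketch (assuming x86-64 GCC; reported sizes include alignment padding around the ten significant bytes) prints the storage sizes of the three types:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* __float80 and __float128 are GCC extensions on x86-64. */
    printf("sizeof(long double) = %zu\n", sizeof(long double)); /* typically 16       */
    printf("sizeof(__float80)   = %zu\n", sizeof(__float80));   /* same 80-bit format */
    printf("sizeof(__float128)  = %zu\n", sizeof(__float128));  /* IEEE binary128     */
    return 0;
}
</syntaxhighlight>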


See also

* GNU MPFR – the GNU "Multiple Precision Floating-Point Reliably" library for C
* IBM hexadecimal floating-point
* IEEE 754
* long double
* x87

