A division algorithm is an algorithm which, given two integers N and D, computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software.

Division algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt algorithms fall into this category.

Variants of these algorithms allow using fast multiplication algorithms. It results that, for large integers, the computer time needed for a division is the same, up to a constant factor, as the time needed for a multiplication, whichever multiplication algorithm is used.

Discussion will refer to the form N/D = (Q, R), where
* ''N'' = numerator (dividend)
* ''D'' = denominator (divisor)
are the inputs, and
* ''Q'' = quotient
* ''R'' = remainder
are the outputs.


Division by repeated subtraction

The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's ''Elements'', Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons:

R := N
Q := 0
while R ≥ D do
  R := R − D
  Q := Q + 1
end
return (Q, R)

The proof that the quotient and remainder exist and are unique (described at Euclidean division) gives rise to a complete division algorithm, applicable to both negative and positive numbers, using additions, subtractions, and comparisons:

function divide(N, D)
  if D = 0 then error(DivisionByZero) end
  if D < 0 then (Q, R) := divide(N, −D); return (−Q, R) end
  if N < 0 then
    (Q, R) := divide(−N, D)
    if R = 0 then return (−Q, 0)
    else return (−Q − 1, D − R) end
  end
  -- At this point, N ≥ 0 and D > 0
  return divide_unsigned(N, D)
end

function divide_unsigned(N, D)
  Q := 0; R := N
  while R ≥ D do
    Q := Q + 1
    R := R − D
  end
  return (Q, R)
end

This procedure always produces R ≥ 0. Although very simple, it takes Ω(Q) steps, and so is exponentially slower than even slow division algorithms like long division. It is useful if Q is known to be small (being an output-sensitive algorithm), and can serve as an executable specification.
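Rendered as runnable Python, the specification above translates directly (a sketch; the function names simply mirror the pseudocode):

def divide_unsigned(n: int, d: int) -> tuple[int, int]:
    """Repeated-subtraction division for n >= 0, d > 0."""
    q, r = 0, n
    while r >= d:
        q += 1
        r -= d
    return q, r

def divide(n: int, d: int) -> tuple[int, int]:
    """Euclidean division: returns (q, r) with n == q*d + r and 0 <= r < |d|."""
    if d == 0:
        raise ZeroDivisionError
    if d < 0:
        q, r = divide(n, -d)
        return -q, r
    if n < 0:
        q, r = divide(-n, d)
        return (-q, 0) if r == 0 else (-q - 1, d - r)
    return divide_unsigned(n, d)

assert divide(7, -3) == (-2, 1)    # 7 == (-2)*(-3) + 1
assert divide(-7, 3) == (-3, 2)    # -7 == (-3)*3 + 2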


Long division

Long division is the standard algorithm used for pen-and-paper division of multi-digit numbers expressed in decimal notation. It shifts gradually from the left to the right end of the dividend, subtracting the largest possible multiple of the divisor (at the digit level) at each stage; the multiples then become the digits of the quotient, and the final difference is then the remainder. When used with a binary radix, this method forms the basis for the (unsigned) integer division with remainder algorithm below. Short division is an abbreviated form of long division suitable for one-digit divisors. Chunking, also known as the partial quotients method or the hangman method, is a less-efficient form of long division which may be easier to understand. By allowing one to subtract more multiples than what one currently has at each stage, a more freeform variant of long division can be developed as well.


Integer division (unsigned) with remainder

The following algorithm, the binary version of the famous long division, will divide ''N'' by ''D'', placing the quotient in ''Q'' and the remainder in ''R''. In the following pseudocode, all values are treated as unsigned integers.

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  if R ≥ D then
    R := R − D
    Q(i) := 1
  end
end
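A minimal Python sketch of this algorithm, assuming a fixed bit width (the function name and the default width are illustrative):

def divide_unsigned_binary(n: int, d: int, bits: int = 32) -> tuple[int, int]:
    """Binary long division of unsigned integers: returns (quotient, remainder)."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):   # from most- to least-significant bit
        r = (r << 1) | ((n >> i) & 1)   # shift R left and bring down bit i of N
        if r >= d:
            r -= d
            q |= 1 << i                 # set bit i of the quotient
    return q, r

assert divide_unsigned_binary(12, 4) == (3, 0)   # matches the worked example below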


Example

If we take N=1100₂ (12₁₀) and D=100₂ (4₁₀):

''Step 1'': Set R=0 and Q=0
''Step 2'': Take i=3 (one less than the number of bits in N)
''Step 3'': R=00 (left shifted by 1)
''Step 4'': R=01 (setting R(0) to N(i))
''Step 5'': R < D, so skip statement

''Step 2'': Set i=2
''Step 3'': R=010
''Step 4'': R=011
''Step 5'': R < D, statement skipped

''Step 2'': Set i=1
''Step 3'': R=0110
''Step 4'': R=0110
''Step 5'': R ≥ D, statement entered
''Step 5b'': R=10 (R−D)
''Step 5c'': Q=10 (setting Q(i) to 1)

''Step 2'': Set i=0
''Step 3'': R=100
''Step 4'': R=100
''Step 5'': R ≥ D, statement entered
''Step 5b'': R=0 (R−D)
''Step 5c'': Q=11 (setting Q(i) to 1)

end
Q=11₂ (3₁₀) and R=0.


Slow division methods

Slow division methods are all based on a standard recurrence equation
: R_{j+1} = B \times R_j - q_{n-(j+1)} \times D ,
where:
* ''R''''j'' is the ''j''-th partial remainder of the division
* ''B'' is the radix (base, usually 2 internally in computers and calculators)
* ''q''''n''−(''j''+1) is the digit of the quotient in position ''n''−(''j''+1), where the digit positions are numbered from least-significant 0 to most significant ''n''−1
* ''n'' is the number of digits in the quotient
* ''D'' is the divisor


Restoring division

Restoring division operates on fixed-point fractional numbers and depends on the assumption 0 < ''D'' < ''N''. The quotient digits ''q'' are formed from the digit set {0, 1}. The basic algorithm for binary (radix 2) restoring division is:

R := N
D := D << n            -- R and D need twice the word width of N and Q
for i := n − 1 .. 0 do -- For example 31..0 for 32 bits
  R := 2 * R − D       -- Trial subtraction from shifted value (multiplication by 2 is a shift in binary representation)
  if R >= 0 then
    q(i) := 1          -- Result-bit 1
  else
    q(i) := 0          -- Result-bit 0
    R := R + D         -- New partial remainder is (restored) shifted value
  end
end
-- Where: N = numerator, D = denominator, n = #bits, R = partial remainder, q(i) = bit #i of quotient

Non-performing restoring division is similar to restoring division except that the value of 2R is saved, so ''D'' does not need to be added back in for the case of R < 0.
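A Python sketch of restoring division under the same conventions (the double-width register is simulated with ordinary integers; names are illustrative):

def restoring_divide(n: int, d: int, bits: int = 8) -> tuple[int, int]:
    """Restoring division sketch: returns (quotient, remainder) for unsigned n, d."""
    if d == 0:
        raise ZeroDivisionError
    r = n                        # partial remainder, held in a double-width register
    d <<= bits                   # align the divisor with the high half
    q = 0
    for i in range(bits - 1, -1, -1):
        r = 2 * r - d            # trial subtraction on the shifted value
        if r >= 0:
            q |= 1 << i          # result bit 1
        else:
            r += d               # restore the partial remainder
    return q, r >> bits          # remainder sits in the high half

assert restoring_divide(12, 4) == (3, 0)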


Non-restoring division

Non-restoring division uses the digit set {−1, 1} for the quotient digits instead of {0, 1}. The algorithm is more complex, but has the advantage when implemented in hardware that there is only one decision and addition/subtraction per quotient bit; there is no restoring step after the subtraction, which potentially cuts down the number of operations by up to half and lets it be executed faster. The basic algorithm for binary (radix 2) non-restoring division of non-negative numbers is:

R := N
D := D << n           -- R and D need twice the word width of N and Q
for i = n − 1 .. 0 do -- for example 31..0 for 32 bits
  if R >= 0 then
    q(i) := +1
    R := 2 * R − D
  else
    q(i) := −1
    R := 2 * R + D
  end if
end
-- Note: N = numerator, D = denominator, n = #bits, R = partial remainder, q(i) = bit #i of quotient.

Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient. If the −1 digits of Q are stored as zeros (0), as is common, computing the final quotient is trivial: perform a one's complement (bit-by-bit complement) on the stored Q and subtract it:

Q := Q − bit.bnot(Q)  -- Appropriate if −1 digits in Q are represented as zeros as is common.

Finally, quotients computed by this algorithm are always odd, and the remainder in R is in the range −D ≤ R < D. For example, 5 / 2 = 3 R −1. To convert to a positive remainder, do a single restoring step ''after'' Q is converted from non-standard form to standard form:

if R < 0 then
  Q := Q − 1
  R := R + D -- Needed only if the remainder is of interest.
end if

The actual remainder is R >> n. (As with restoring division, the low-order bits of R are used up at the same rate as bits of the quotient Q are produced, and it is common to use a single shift register for both.)
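A Python sketch of the full non-restoring flow, storing −1 digits as 0 bits and applying the conversion and the final restoring step (names and the fixed width are illustrative assumptions):

def nonrestoring_divide(n: int, d: int, bits: int = 8) -> tuple[int, int]:
    """Non-restoring division sketch: returns (quotient, remainder)."""
    if d == 0:
        raise ZeroDivisionError
    mask = (1 << bits) - 1
    r = n
    d <<= bits                       # double-width alignment, as in the pseudocode
    q = 0                            # -1 digits stored as 0 bits, +1 digits as 1 bits
    for i in range(bits - 1, -1, -1):
        if r >= 0:
            q |= 1 << i              # digit +1
            r = 2 * r - d
        else:
            r = 2 * r + d            # digit -1 (bit stays 0)
    q = q - (~q & mask)              # Q := Q - bnot(Q), converting to standard binary
    if r < 0:                        # single restoring step for a positive remainder
        q -= 1
        r += d
    return q, r >> bits

assert nonrestoring_divide(12, 4) == (3, 0)
assert nonrestoring_divide(5, 2) == (2, 1)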


SRT division

SRT division is a popular method for division in many microprocessor implementations. The algorithm is named after D.W. Sweeney of IBM, James E. Robertson of University of Illinois, and K. D. Tocher of Imperial College London. They all developed the algorithm independently at approximately the same time (published in February 1957, September 1958, and January 1958 respectively).

SRT division is similar to non-restoring division, but it uses a lookup table based on the dividend and the divisor to determine each quotient digit. The most significant difference is that a ''redundant representation'' is used for the quotient. For example, when implementing radix-4 SRT division, each quotient digit is chosen from ''five'' possibilities: {−2, −1, 0, +1, +2}. Because of this, the choice of a quotient digit need not be perfect; later quotient digits can correct for slight errors. (For example, the quotient digit pairs (0, +2) and (1, −2) are equivalent, since 0×4+2 = 1×4−2.) This tolerance allows quotient digits to be selected using only a few most-significant bits of the dividend and divisor, rather than requiring a full-width subtraction. This simplification in turn allows a radix higher than 2 to be used. Like non-restoring division, the final steps are a final full-width subtraction to resolve the last quotient bit, and conversion of the quotient to standard binary form. The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly coded lookup table. Five of the 1066 entries had been mistakenly omitted.
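Real SRT dividers select digits with a hardware lookup table indexed by a few leading bits of R and D. As a rough illustration only, the following Python sketch replaces that table with exact comparisons against ±1/2, the classic radix-2 selection rule; the function name and the scaling choices are assumptions for the example:

from fractions import Fraction

def srt_divide(n: int, d: int) -> tuple[int, int]:
    """Radix-2 SRT sketch: quotient digits {-1, 0, +1} picked by comparing 2R to ±1/2."""
    if d <= 0 or n < 0:
        raise ValueError("sketch handles n >= 0, d > 0 only")
    shift = d.bit_length()             # scale the divisor into [1/2, 1)
    steps = n.bit_length() + 1         # scale the dividend below 1/2 so -D <= R < D holds
    D = Fraction(d, 1 << shift)
    R = Fraction(n, 1 << (shift + steps))
    q = 0
    for _ in range(steps):
        if 2 * R >= Fraction(1, 2):    # selection inspects only a few MSBs of R
            digit = 1
        elif 2 * R < Fraction(-1, 2):
            digit = -1
        else:
            digit = 0
        R = 2 * R - digit * D          # invariant maintained: -D <= R < D
        q = 2 * q + digit              # fold redundant digits into binary on the fly
    if R < 0:                          # final correction, as in non-restoring division
        q -= 1
    return q, n - d * q                # quotient and remainder

assert srt_divide(12, 4) == (3, 0)
assert srt_divide(1234, 7) == (176, 2)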


Fast division methods


Newton–Raphson division

Newton–Raphson uses Newton's method to find the reciprocal of D and multiply that reciprocal by N to find the final quotient Q. The steps of Newton–Raphson division are:
# Calculate an estimate X_0 for the reciprocal 1/D of the divisor D.
# Compute successively more accurate estimates X_1,X_2,\ldots,X_S of the reciprocal. This is where one employs the Newton–Raphson method as such.
# Compute the quotient by multiplying the dividend by the reciprocal of the divisor: Q = N X_S.

In order to apply Newton's method to find the reciprocal of D, it is necessary to find a function f(X) that has a zero at X=1/D. The obvious such function is f(X)=DX-1, but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of D (moreover it attempts to compute the exact reciprocal in one step, rather than allow for iterative improvements). A function that does work is f(X)=(1/X)-D, for which the Newton–Raphson iteration gives
: X_{i+1} = X_i - \frac{f(X_i)}{f'(X_i)} = X_i - \frac{1/X_i - D}{-1/X_i^2} = X_i + X_i(1-DX_i) = X_i(2-DX_i),
which can be calculated from X_i using only multiplication and subtraction, or using two fused multiply–adds.

From a computation point of view, the expressions X_{i+1} = X_i + X_i(1-DX_i) and X_{i+1} = X_i(2-DX_i) are not equivalent. To obtain a result with a precision of 2''n'' bits while making use of the second expression, one must compute the product between X_i and (2-DX_i) with double the given precision of X_i (''n'' bits). In contrast, the product between X_i and (1-DX_i) need only be computed with a precision of ''n'' bits, because the leading ''n'' bits (after the binary point) of (1-DX_i) are zeros.

If the error is defined as \varepsilon_i = 1 - D X_i, then:
:\begin{align} \varepsilon_{i+1} &= 1 - D X_{i+1} \\ &= 1 - D (X_i(2-DX_i)) \\ &= 1 - 2DX_i + D^2X_i^2 \\ &= (1 - DX_i)^2 \\ &= \varepsilon_i^2. \end{align}
This squaring of the error at each iteration step (the so-called quadratic convergence of Newton–Raphson's method) has the effect that the number of correct digits in the result roughly ''doubles for every iteration'', a property that becomes extremely valuable when the numbers involved have many digits (e.g. in the large integer domain). But it also means that the initial convergence of the method can be comparatively slow, especially if the initial estimate X_0 is poorly chosen.

For the subproblem of choosing an initial estimate X_0, it is convenient to apply a bit-shift to the divisor ''D'' to scale it so that 0.5 ≤ ''D'' ≤ 1; by applying the same bit-shift to the numerator ''N'', one ensures the quotient does not change. Then one could use a linear approximation in the form
:X_0 = T_1 + T_2 D \approx \frac{1}{D}
to initialize Newton–Raphson. To minimize the maximum of the absolute value of the error of this approximation on the interval [0.5, 1], one should use
:X_0 = \frac{48}{17} - \frac{32}{17} D.
The coefficients of the linear approximation are determined as follows. The absolute value of the error is |\varepsilon_0| = |1 - D(T_1 + T_2 D)|. The minimum of the maximum absolute value of the error is determined by the Chebyshev equioscillation theorem applied to F(D) = 1 - D(T_1 + T_2 D). The local minimum of F(D) occurs when F'(D) = 0, which has solution D = -T_1/(2T_2). The function at that minimum must be of opposite sign as the function at the endpoints, namely, F(1/2) = F(1) = -F(-T_1/(2T_2)). The two equations in the two unknowns have a unique solution T_1 = 48/17 and T_2 = -32/17, and the maximum error is F(1) = 1/17. Using this approximation, the absolute value of the error of the initial value is less than
:\vert \varepsilon_0 \vert \leq \frac{1}{17} \approx 0.059 .
It is possible to generate a polynomial fit of degree larger than 1, computing the coefficients using the Remez algorithm. The trade-off is that the initial guess requires more computational cycles but hopefully in exchange for fewer iterations of Newton–Raphson. Since for this method the convergence is exactly quadratic, it follows that
:S = \left\lceil \log_2 \frac{P + 1}{\log_2 17} \right\rceil
steps are enough to calculate the value up to P binary places. This evaluates to 3 for IEEE single precision and 4 for both double precision and double extended formats.


Pseudocode

The following computes the quotient of ''N'' and ''D'' with a precision of ''P'' binary places:

Express D as M × 2^e where 1 ≤ M < 2 (standard floating point representation)
D' := D / 2^(e+1)  ''// scale between 0.5 and 1, can be performed with bit shift / exponent subtraction''
N' := N / 2^(e+1)
X := 48/17 − 32/17 × D'  ''// precompute constants with same precision as D''
repeat ⌈log₂((P + 1) / log₂ 17)⌉ times  ''// can be precomputed based on fixed P''
  X := X + X × (1 − D' × X)
end
return N' × X

For example, for a double-precision floating-point division, this method uses 10 multiplies, 9 adds, and 2 shifts.
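The same procedure in Python, using ordinary floats for clarity (a sketch assuming a positive divisor; real implementations run in fixed point or hardware, and math.frexp supplies the scaling step):

from math import ceil, frexp, log2

def newton_raphson_divide(n: float, d: float, p: int = 53) -> float:
    """Approximate n/d via Newton-Raphson reciprocal iteration (sketch)."""
    if d == 0:
        raise ZeroDivisionError
    m, e = frexp(d)                         # d = m * 2**e with 0.5 <= m < 1
    n_scaled = n / 2**e                     # same scaling keeps the quotient unchanged
    x = 48/17 - 32/17 * m                   # linear initial estimate of 1/m
    for _ in range(ceil(log2((p + 1) / log2(17)))):
        x = x + x * (1 - m * x)             # error is squared each pass
    return n_scaled * x

print(newton_raphson_divide(12.0, 4.0))     # ~3.0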


Variant Newton–Raphson division

The Newton–Raphson division method can be modified to be slightly faster as follows. After shifting ''N'' and ''D'' so that ''D'' is in [0.5, 1.0], initialize with
: X := \frac{140}{33} + D \cdot \left(\frac{-64}{11} + D \cdot \frac{256}{99}\right) .
This is the best quadratic fit to 1/''D'' and gives an absolute value of the error less than or equal to 1/99. It is chosen to make the error equal to a re-scaled third-order Chebyshev polynomial of the first kind. The coefficients should be pre-calculated and hard-coded.

Then in the loop, use an iteration which cubes the error:
: E := 1 - D \cdot X
: Y := X \cdot E
: X := X + Y + Y \cdot E .
The ''Y''·''E'' term is new. If the loop is performed until X agrees with 1/''D'' on its leading ''P'' bits, then the number of iterations will be no more than
: \left\lceil \log_3 \left(\frac{P+1}{\log_2 99} \right) \right\rceil ,
which is the number of times 99 must be cubed to get to 2^{P+1}. Then
: Q := N \cdot X
is the quotient to ''P'' bits.

Using higher-degree polynomials in either the initialization or the iteration results in a degradation of performance because the extra multiplications required would be better spent on doing more iterations.
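A Python sketch of the variant (same assumptions as the previous sketch; the iteration count follows the log-base-3 bound above):

from math import ceil, frexp, log, log2

def nr_divide_cubic(n: float, d: float, p: int = 53) -> float:
    """Variant Newton-Raphson division with a cubic-convergence iteration (sketch)."""
    if d == 0:
        raise ZeroDivisionError
    m, e = frexp(d)                          # scale the divisor into [0.5, 1)
    n_scaled = n / 2**e
    x = 140/33 + m * (-64/11 + m * 256/99)   # best quadratic fit to 1/m, error <= 1/99
    for _ in range(ceil(log((p + 1) / log2(99), 3))):
        err = 1 - m * x
        y = x * err
        x = x + y + y * err                  # error is cubed each pass
    return n_scaled * x

print(nr_divide_cubic(12.0, 4.0))            # ~3.0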


Goldschmidt division

Goldschmidt division (after Robert Elliott Goldschmidt) uses an iterative process of repeatedly multiplying both the dividend and divisor by a common factor ''F''''i'', chosen such that the divisor converges to 1. This causes the dividend to converge to the sought quotient ''Q'':
:Q = \frac{N}{D} \frac{F_1}{F_1} \frac{F_2}{F_2} \cdots .
The steps for Goldschmidt division are:
# Generate an estimate for the multiplication factor ''F''''i''.
# Multiply the dividend and divisor by ''F''''i''.
# If the divisor is sufficiently close to 1, return the dividend, otherwise, loop to step 1.

Assuming ''N''/''D'' has been scaled so that 0 < ''D'' < 1, each ''F''''i'' is based on ''D'':
:F_{i+1} = 2 - D_i .
Multiplying the dividend and divisor by the factor yields:
:\frac{N_{i+1}}{D_{i+1}} = \frac{N_i}{D_i} \frac{F_{i+1}}{F_{i+1}} .
After a sufficient number ''k'' of iterations, Q = N_k.

The Goldschmidt method is used in AMD Athlon CPUs and later models. It is also known as the Anderson Earle Goldschmidt Powers (AEGP) algorithm and is implemented by various IBM processors. Although it converges at the same rate as a Newton–Raphson implementation, one advantage of the Goldschmidt method is that the multiplications in the numerator and in the denominator can be done in parallel.
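A Python sketch of Goldschmidt division (assuming a positive divisor; math.frexp performs the scaling, and six iterations suffice for double precision since the worst-case error 1/2 is squared each pass):

from math import frexp

def goldschmidt_divide(n: float, d: float, iterations: int = 6) -> float:
    """Goldschmidt division sketch: scale so 0 < d < 1, then drive d toward 1."""
    if d == 0:
        raise ZeroDivisionError
    m, e = frexp(d)            # d = m * 2**e with 0.5 <= m < 1
    num, den = n / 2**e, m     # the quotient is unchanged by the common scaling
    for _ in range(iterations):
        f = 2.0 - den          # common factor F = 2 - D_i
        num *= f               # both products could run in parallel in hardware
        den *= f               # denominator converges quadratically to 1
    return num                 # numerator has converged to the quotient

print(goldschmidt_divide(12.0, 4.0))   # ~3.0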


Binomial theorem

The Goldschmidt method can be used with factors that allow simplifications by the binomial theorem. Assume ''N''/''D'' has been scaled by a power of two such that D\in(\tfrac{1}{2},1]. We choose D = 1-x and F_{i+1} = 1+x^{2^i}. This yields
: \frac{N}{D} = \frac{N(1+x)}{(1-x)(1+x)} = \frac{N(1+x)(1+x^2)}{(1-x^2)(1+x^2)} = \cdots = Q' = \frac{N(1+x)(1+x^2)\cdots(1+x^{2^{n-1}})}{1-x^{2^n}} .
After ''n'' steps (x \in [0,\tfrac{1}{2})), the denominator 1-x^{2^n} can be rounded to 1 with a relative error
: \varepsilon_n = \frac{Q' - Q}{Q} = x^{2^n} ,
which is maximum at 2^{-2^n} when x = \tfrac{1}{2}, thus providing a minimum precision of 2^n binary digits.
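A Python sketch of the binomial variant (assuming the divisor has already been scaled into (1/2, 1]; the function name is illustrative):

def binomial_divide(n: float, d: float, steps: int = 6) -> float:
    """Goldschmidt with binomial-theorem factors: D = 1 - x, F = 1 + x**(2**i)."""
    x = 1.0 - d                # requires d in (1/2, 1], so x in [0, 1/2)
    num = n
    for _ in range(steps):
        num *= 1.0 + x         # multiply the numerator by F = 1 + x**(2**i)
        x *= x                 # denominator becomes 1 - x**(2**(i+1))
    return num                 # denominator rounded to 1

print(binomial_divide(2.4, 0.8))   # ~3.0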


Large-integer methods

Methods designed for hardware implementation generally do not scale to integers with thousands or millions of decimal digits; these frequently occur, for example, in modular reductions in cryptography. For these large integers, more efficient division algorithms transform the problem to use a small number of multiplications, which can then be done using an asymptotically efficient multiplication algorithm such as the Karatsuba algorithm, Toom–Cook multiplication or the Schönhage–Strassen algorithm. The result is that the computational complexity of the division is of the same order (up to a multiplicative constant) as that of the multiplication. Examples include reduction to multiplication by Newton's method as described above, as well as the slightly faster Burnikel–Ziegler division, Barrett reduction and Montgomery reduction algorithms. Newton's method is particularly efficient in scenarios where one must divide by the same divisor many times, since after the initial Newton inversion only one (truncated) multiplication is needed for each division.
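The payoff of a precomputed reciprocal can be sketched in Python: the scaled reciprocal below is obtained once (here with built-in division standing in for a Newton inversion), after which each division costs one big-integer multiplication, a shift, and at most one correction:

def precomputed_divider(d: int, max_bits: int):
    """Precompute a scaled reciprocal of d so later divisions become one multiply."""
    s = 2 * max_bits                 # enough precision for any n below 2**max_bits
    r = (1 << s) // d                # done once; in practice via Newton's method
    def div(n: int) -> int:
        q = (n * r) >> s             # truncated multiplication by the reciprocal
        if n - q * d >= d:           # the estimate is off by at most one
            q += 1
        return q
    return div

div7 = precomputed_divider(7, 64)
assert all(div7(n) == n // 7 for n in (0, 1, 6, 7, 10**18, 2**64 - 1))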


Division by a constant

The division by a constant ''D'' is equivalent to the multiplication by its reciprocal. Since the denominator is constant, so is its reciprocal (1/''D''). Thus it is possible to compute the value of (1/''D'') once at compile time, and at run time perform the multiplication ''N''·(1/''D'') rather than the division ''N''/''D''. In floating-point arithmetic the use of (1/''D'') presents little problem, but in integer arithmetic the reciprocal will always evaluate to zero (assuming |''D''| > 1).

It is not necessary to use specifically (1/''D''); any value (''X''/''Y'') that reduces to (1/''D'') may be used. For example, for division by 3, the factors 1/3, 2/6, 3/9, or 194/582 could be used. Consequently, if ''Y'' were a power of two the division step would reduce to a fast right bit shift. The effect of calculating ''N''/''D'' as (''N''·''X'')/''Y'' replaces a division with a multiply and a shift. Note that the parentheses are important, as ''N''·(''X''/''Y'') will evaluate to zero.

However, unless ''D'' itself is a power of two, there is no ''X'' and ''Y'' that satisfies the conditions above. Fortunately, (''N''·''X'')/''Y'' gives exactly the same result as ''N''/''D'' in integer arithmetic even when (''X''/''Y'') is not exactly equal to 1/''D'', but "close enough" that the error introduced by the approximation is in the bits that are discarded by the shift operation. Barrett reduction uses powers of 2 for the value of ''Y'' to make division by ''Y'' a simple right shift.

As a concrete fixed-point arithmetic example, for 32-bit unsigned integers, division by 3 can be replaced with a multiplication by 2863311531 (hexadecimal 0xAAAAAAAB) followed by a 33-bit right shift. The value of 2863311531 is calculated as 2^33/3, then rounded up. Likewise, division by 10 can be expressed as a multiplication by 3435973837 (0xCCCCCCCD) followed by division by 2^35 (or a 35-bit right shift). OEIS provides sequences of the constants for multiplication and for the right shift.

For general ''x''-bit unsigned integer division where the divisor ''D'' is not a power of 2, the following identity converts the division into two ''x''-bit additions/subtractions, one ''x''-bit by ''x''-bit multiplication (where only the upper half of the result is used) and several shifts, after precomputing k=x+\lceil\log_2 D\rceil and a=\left\lceil\frac{2^k}{D}\right\rceil-2^x:
: \left\lfloor\frac{N}{D}\right\rfloor=\left\lfloor\frac{b+\left\lfloor\frac{N-b}{2}\right\rfloor}{2^{k-x-1}}\right\rfloor \quad\text{where}\quad b=\left\lfloor\frac{Na}{2^x}\right\rfloor .

In some cases, division by a constant can be accomplished in even less time by converting the "multiply by a constant" into a series of shifts and adds or subtracts. Of particular interest is division by 10, for which the exact quotient is obtained, with remainder if required.


Rounding error

Round-off error can be introduced by division operations due to limited precision.


See also

* Galley division
* Multiplication algorithm
* Pentium FDIV bug

