In number theory, the Erdős–Moser equation is

:1^k+2^k+\cdots+(m-1)^k=m^k,

where m and k are restricted to the positive integers—that is, it is considered as a Diophantine equation. The only known solution is 1^1 + 2^1 = 3^1, and Paul Erdős conjectured that no further solutions exist. Any further solutions must have m > 10^{10^9}. Throughout this article, p refers exclusively to prime numbers.
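The defining equation is easy to test directly. The following brute-force sketch (not part of the original article; the tiny search bounds are purely illustrative, since any further solution is astronomically large) confirms that (m, k) = (3, 1) is the only solution in a small range:

```python
# Brute-force scan for solutions of 1^k + 2^k + ... + (m-1)^k = m^k
# over a small, illustrative range of m and k.

def S(k, m):
    """S_k(m) = 1^k + 2^k + ... + (m-1)^k, with exact integer arithmetic."""
    return sum(j**k for j in range(1, m))

solutions = [(m, k) for m in range(2, 200) for k in range(1, 40) if S(k, m) == m**k]
print(solutions)  # [(3, 1)]
```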


Constraints on solutions

In 1953, Leo Moser proved that, in any further solutions,
# k is even,
# p \mid m-1 implies (p-1) \mid k,
# p \mid m+1 implies (p-1) \mid k,
# p \mid 2m+1 implies (p-1) \mid k,
# m-1 is squarefree,
# m+1 is either squarefree or 4 times an odd squarefree number,
# 2m-1 is squarefree,
# 2m+1 is squarefree,
# \sum_{p \mid m-1} \frac{m-1}{p} \equiv -1 \pmod{m-1},
# \sum_{p \mid m+1} \frac{m+1}{p} \equiv -2 \pmod{m+1} \quad \text{if } m+1 \text{ is even},
# \sum_{p \mid 2m-1} \frac{2m-1}{p} \equiv -2 \pmod{2m-1},
# \sum_{p \mid 2m+1} \frac{2m+1}{p} \equiv -4 \pmod{2m+1}, and
# m > 10^{10^6}.

In 1966, two additional conditions were established, one of which shows that a certain quantity determined by m cannot be prime. In 1994, it was shown that
# \operatorname{lcm}(1, 2, \ldots, 200) divides k,
# m \equiv 3 \pmod{2^{v_2(k)+3}}, where v_2(k) is the 2-adic valuation of k; equivalently, 2^{v_2(k)+3} divides m-3,
# for any odd prime p dividing m, we have k \not\equiv 0 \pmod{p-1}, and
# any prime factor of m must be irregular and > 10000.

In 1999, Moser's method was refined to show that m > 1.485 \times 10^{9{,}321{,}155}. In 2002, it was shown that k must be a multiple of an explicit number involving a primorial, where the primorial N\# is the product of all prime numbers \leq N. In 2009, it was shown that 2k/(2m-3) must be a convergent of \ln 2; in what the authors of that paper call "one of very few instances where a large scale computation of a numerical constant has an application", it was then determined that m > 2.7139 \times 10^{1{,}667{,}658{,}416}. In 2010, it was shown that

: \frac{(m^2-1)(4m^2-1)}{12}

has at least 4,990,906 prime factors, none of which are repeated. The number 4,990,906 arises from the fact that

: \sum_{n=1}^{4{,}990{,}905} \frac{1}{p_n} < 3.16 < \sum_{n=1}^{4{,}990{,}906} \frac{1}{p_n},

where p_n is the nth prime number.
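The figure 4,990,906 reflects how slowly the sum of reciprocals of primes grows. A small-scale illustration (not from the source; the cutoff 1000 is arbitrary) shows the sum still far below 3.16:

```python
# Sum of reciprocals of primes below a small cutoff, illustrating the
# very slow (log log) growth that forces millions of prime factors
# before the sum can exceed 3.16.

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

s = sum(1.0 / p for p in primes_below(1000))
print(f"{s:.2f}")  # about 2.20 -- still well short of 3.16
```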


Moser's method

First, let p be a prime factor of m - 1. Leo Moser showed that this implies that p - 1 divides k and that

: 1 + \frac{m-1}{p} \equiv 0 \pmod{p}, \qquad (1)

which upon multiplying by p yields

: p + m - 1 \equiv 0 \pmod{p^2}.

This in turn implies that m - 1 must be squarefree. Furthermore, since nontrivial solutions have m - 1 > 2 and since every squarefree number greater than 2 has at least one odd prime factor, the assumption that p - 1 divides k (with p odd) implies that k must be even.

One congruence of the form (1) exists for each prime factor p of m - 1. Multiplying all of them together yields

: \prod_{p \mid m-1} \left( 1 + \frac{m-1}{p} \right) \equiv 0 \pmod{m-1}.

Expanding out the product yields

: 1 + \sum_{p \mid m-1} \frac{m-1}{p} + (\text{higher-order terms}) \equiv 0 \pmod{m-1},

where the higher-order terms are products of multiple factors of the form (m-1)/p, with different values of p in each factor. These terms are all divisible by m - 1, so they all drop out of the congruence, yielding

: 1 + \sum_{p \mid m-1} \frac{m-1}{p} \equiv 0 \pmod{m-1}.

Dividing out the modulus yields

: \frac{1}{m-1} + \sum_{p \mid m-1} \frac{1}{p} \equiv 0 \pmod{1}. \qquad (2)

Similar reasoning yields the congruences

: \frac{2}{m+1} + \sum_{p \mid m+1} \frac{1}{p} \equiv 0 \pmod{1} \quad (\text{if } m+1 \text{ is even}), \qquad (3)

: \frac{2}{2m-1} + \sum_{p \mid 2m-1} \frac{1}{p} \equiv 0 \pmod{1}, \qquad (4)

: \frac{4}{2m+1} + \sum_{p \mid 2m+1} \frac{1}{p} \equiv 0 \pmod{1}. \qquad (5)

The congruences (2), (3), (4), and (5) are quite restrictive; for example, the only values of m < 1000 which satisfy (2) are 3, 7, and 43, and these are ruled out by (4).

We now split into two cases: either m + 1 is even, or it is odd.

In the case that m + 1 is even, adding the left-hand sides of the congruences (2), (3), (4), and (5) must yield an integer, and this integer must be at least 4, since each left-hand side is positive. Furthermore, the Euclidean algorithm shows that no prime p > 3 can divide more than one of the numbers in the set \{m-1, m+1, 2m-1, 2m+1\}, and that 2 and 3 can each divide at most two of these numbers. Letting M = (m-1)(m+1)(2m-1)(2m+1), we then have

: \frac{1}{m-1} + \frac{2}{m+1} + \frac{2}{2m-1} + \frac{4}{2m+1} + \sum_{p \mid M} \frac{1}{p} \geq 4 - \frac{1}{2} - \frac{1}{3}. \qquad (6)

Since there are no nontrivial solutions with m < 1000, the part of the LHS of (6) outside the sigma cannot exceed 0.006; we therefore have

: \sum_{p \mid M} \frac{1}{p} > 3.16.

Therefore, if \sum_{p < x} \frac{1}{p} < 3.16, then M > \prod_{p < x} p. In Moser's original paper, bounds on the prime-counting function are used to observe that

: \sum_{p < 10^7} \frac{1}{p} < 3.16.

Therefore, M must exceed the product of all primes less than 10,000,000. This in turn implies that m > 10^{10^6} in this case.

In the case that m + 1 is odd, we cannot use (3), so instead of (6) we obtain

: \frac{1}{m-1} + \frac{2}{2m-1} + \frac{4}{2m+1} + \sum_{p \mid N} \frac{1}{p} \geq 3 - \frac{1}{3} = 2.666\ldots,

where N = (m-1)(2m-1)(2m+1). On the surface, this appears to be a weaker condition than (6), but since m + 1 is odd, the prime 2 cannot appear on the greater side of this inequality, and it turns out to be a stronger restriction on m than the other case.

Therefore any nontrivial solutions have m > 10^{10^6}.

In 1999, this method was refined by using computers to replace the prime-counting estimates with exact computations; this yielded the bound m > 1.485 \times 10^{9{,}321{,}155}.
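Moser's congruence for m - 1 — that 1/(m-1) + \sum_{p \mid m-1} 1/p must be an integer — can be checked with exact rational arithmetic. This sketch (helper names are illustrative, not from the source) confirms the values 3, 7, and 43 quoted above:

```python
# Scan m < 1000 for which 1/(m-1) + sum of 1/p over primes p | m-1
# is an integer, using exact fractions.

from fractions import Fraction

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

hits = []
for m in range(3, 1000):
    total = Fraction(1, m - 1) + sum(Fraction(1, p) for p in prime_factors(m - 1))
    if total.denominator == 1:  # the sum is an integer
        hits.append(m)
print(hits)  # [3, 7, 43]
```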


Bounding the ratio

Let S_k(m) = 1^k + 2^k + \cdots + (m-1)^k. Then the Erdős–Moser equation becomes S_k(m) = m^k.


Method 1: Integral comparisons

By comparing the sum S_k(m) to definite integrals of the function x^k, one can obtain the bounds k + 1 < m < 3(k + 1). The sum S_k(m) is the upper Riemann sum corresponding to the integral \int_0^{m-1} x^k \, \mathrm{d}x in which the interval [0, m-1] has been partitioned on the integer values of x, so we have

: S_k(m) > \int_0^{m-1} x^k \; \mathrm{d}x.

By hypothesis, S_k(m) = m^k, so

: m^k > \frac{(m-1)^{k+1}}{k+1},

which leads to

: k + 1 > \frac{(m-1)^{k+1}}{m^k}. \qquad (7)

Similarly, S_k(m) is the lower Riemann sum corresponding to the integral \int_1^m x^k \, \mathrm{d}x in which the interval [1, m] has been partitioned on the integer values of x, so we have

: S_k(m) \leq \int_1^m x^k \; \mathrm{d}x.

By hypothesis, S_k(m) = m^k, so

: m^k \leq \frac{m^{k+1} - 1}{k+1} < \frac{m^{k+1}}{k+1},

and so

: k + 1 < m. \qquad (8)

Applying this to (7) yields

: \frac{m}{k+1} < \left(1 + \frac{1}{m-1}\right)^{k+1} < \left(1 + \frac{1}{m-1}\right)^m = \left(1 + \frac{1}{m-1}\right)^{m-1} \cdot \left(\frac{m}{m-1}\right) < e \cdot \frac{m}{m-1}.

Computation shows that there are no nontrivial solutions with m \leq 10, so we have

: \frac{m}{k+1} < e \cdot \frac{m}{m-1} < 3.

Combining this with (8) yields k + 1 < m < 3(k + 1), as desired.
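The Riemann-sum sandwich used above can be spot-checked numerically; a sketch (sample pairs chosen arbitrarily, exact rational arithmetic throughout):

```python
# Verify (m-1)^(k+1)/(k+1) < S_k(m) <= (m^(k+1) - 1)/(k+1)
# for a few sample pairs (m, k).

from fractions import Fraction

def S(k, m):
    """S_k(m) = 1^k + 2^k + ... + (m-1)^k."""
    return sum(j**k for j in range(1, m))

checks = []
for m, k in [(10, 2), (50, 6), (200, 10)]:
    lower = Fraction((m - 1)**(k + 1), k + 1)  # integral of x^k from 0 to m-1
    upper = Fraction(m**(k + 1) - 1, k + 1)    # integral of x^k from 1 to m
    checks.append(lower < S(k, m) <= upper)
print(checks)  # [True, True, True]
```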


Method 2: Algebraic manipulations

The upper bound m < 3(k + 1) can be reduced to m < \tfrac{3}{2}(k + 1) using an algebraic method: Let r be a positive integer. Then the binomial theorem yields

: (\ell + 1)^{r+1} = \sum_{s=0}^{r+1} \binom{r+1}{s} \ell^s.

Summing over \ell = 0, 1, \ldots, m-1 yields

: \sum_{\ell=0}^{m-1} (\ell + 1)^{r+1} = \sum_{\ell=0}^{m-1} \left( \sum_{s=0}^{r+1} \binom{r+1}{s} \ell^s \right).

Reindexing on the left and rearranging on the right yields

: \sum_{\ell=1}^{m} \ell^{r+1} = \sum_{s=0}^{r+1} \binom{r+1}{s} \sum_{\ell=0}^{m-1} \ell^s

: \sum_{\ell=1}^{m} \ell^{r+1} = 1 + \sum_{s=0}^{r+1} \binom{r+1}{s} S_s(m)

: S_{r+1}(m+1) - S_{r+1}(m) = 1 + (r+1) S_r(m) + \sum_{s=0}^{r-1} \binom{r+1}{s} S_s(m). \qquad (9)

Taking r = k and observing that S_{k+1}(m+1) - S_{k+1}(m) = m^{k+1} yields

: m^{k+1} = 1 + (k+1) S_k(m) + \sum_{s=0}^{k-1} \binom{k+1}{s} S_s(m).

By hypothesis, S_k(m) = m^k, so

: m^{k+1} = 1 + (k+1) m^k + \sum_{s=0}^{k-1} \binom{k+1}{s} S_s(m). \qquad (10)

Since the RHS is greater than (k+1) m^k, we must therefore have

: k + 1 < m. \qquad (11)

Returning to (9) and taking r = k - 1 yields

: m^k = 1 + k S_{k-1}(m) + \sum_{s=0}^{k-2} \binom{k}{s} S_s(m)

: m^k = 1 + \sum_{s=0}^{k-1} \binom{k}{s} S_s(m).

Substituting this into (10) to eliminate m^k yields

: \left( 1 + \sum_{s=0}^{k-1} \binom{k}{s} S_s(m) \right) ( m - (k+1) ) = 1 + \sum_{s=0}^{k-1} \binom{k+1}{s} S_s(m).

Rewriting the binomial coefficients on the right via \binom{k+1}{s} = \tfrac{k+1}{k+1-s} \binom{k}{s} yields

: m - (k+1) + ( m - (k+1) ) \sum_{s=0}^{k-1} \binom{k}{s} S_s(m) = 1 + \sum_{s=0}^{k-1} \frac{k+1}{k+1-s} \binom{k}{s} S_s(m). \qquad (12)

We already know from (11) that k + 1 < m. This leaves open the possibility that m = k + 2; however, substituting this into (12) yields

: 0 = \sum_{s=0}^{k-1} \left( \frac{k+1}{k+1-s} - 1 \right) \binom{k}{s} S_s(k+2)

: 0 = \sum_{s=0}^{k-1} \frac{s}{k+1-s} \binom{k}{s} S_s(k+2)

: 0 = \frac{0}{k+1} \binom{k}{0} S_0(k+2) + \sum_{s=1}^{k-1} \frac{s}{k+1-s} \binom{k}{s} S_s(k+2)

: 0 = 0 + \sum_{s=1}^{k-1} \frac{s}{k+1-s} \binom{k}{s} S_s(k+2),

which is impossible for k > 1, since the sum contains only positive terms. Therefore any nontrivial solutions must have m \neq k + 2; combining this with (11) yields

: k + 2 < m. \qquad (13)

Rearranging (12) gives

: m - (k+1) - 1 = \sum_{s=0}^{k-1} \left( \frac{k+1}{k+1-s} - m + (k+1) \right) \binom{k}{s} S_s(m),

and by (13) the left-hand side is positive, so

: \sum_{s=0}^{k-1} \left( \frac{k+1}{k+1-s} - m + (k+1) \right) \binom{k}{s} S_s(m) > 0. \qquad (14)

The coefficient \frac{k+1}{k+1-s} - m + (k+1) increases as s increases, so its largest value occurs at s = k - 1. If that largest coefficient were not positive, then every term of the sum in (14) would be nonpositive, and therefore so would the sum itself. Thus

: 0 < \frac{k+1}{2} - m + (k+1),

which yields

: m < \frac{3}{2} \cdot (k+1)

and therefore

: \frac{m}{k+1} < \frac{3}{2},

as desired.
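The identity obtained by summing the binomial theorem holds for all positive integers m and r, independent of the Erdős–Moser hypothesis, so it can be verified directly; a sketch:

```python
# Check the unconditional identity
#   m^(r+1) = 1 + (r+1)*S_r(m) + sum_{s=0}^{r-1} C(r+1, s)*S_s(m)
# over a grid of small m and r.

from math import comb

def S(s, m):
    """S_s(m) = 1^s + 2^s + ... + (m-1)^s."""
    return sum(j**s for j in range(1, m))

ok = all(
    m**(r + 1) == 1 + (r + 1) * S(r, m) + sum(comb(r + 1, s) * S(s, m) for s in range(r))
    for m in range(2, 30)
    for r in range(1, 8)
)
print(ok)  # True
```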


Continued fractions

Any potential solutions to the equation must arise from the continued fraction of the natural logarithm of 2: specifically, 2k/(2m-3) must be a convergent of that number. By expanding the Taylor series of (1 - y)^k e^{ky} about y = 0, one finds

: (1 - y)^k = e^{-ky} \left( 1 - \frac{k}{2} y^2 - \frac{k}{3} y^3 + \frac{k(k-2)}{8} y^4 + \frac{k(5k-6)}{30} y^5 + O(y^6) \right) \quad \text{as } y \rightarrow 0.

More elaborate analysis sharpens this into explicit two-sided bounds valid in the ranges of y and k needed below. The Erdős–Moser equation is equivalent to

: 1 = \sum_{j=1}^{m-1} \left( 1 - \frac{j}{m} \right)^k.

Applying the sharpened bounds to each term in this sum yields

: \begin{align} T_0 - \frac{k}{2m^2} T_2 - \frac{k}{3m^3} T_3 + \frac{k(k-2)}{8m^4} T_4 + \frac{k(5k-6)}{30m^5} T_5 - \frac{C}{m^6} T_6 \qquad \qquad \\ < \sum_{j=1}^{m-1} \left(1 - \frac{j}{m} \right)^k < T_0 - \frac{k}{2m^2} T_2 - \frac{k}{3m^3} T_3 + \frac{k(k-2)}{8m^4} T_4 + \frac{k(5k-6)}{30m^5} T_5, \end{align}

where T_n = \sum_{j=1}^{m-1} j^n z^j, z = e^{-k/m}, and C is an explicit positive constant. Summing the series T_n in closed form and manipulating further eventually yields

: 1 = \frac{z}{1-z} + O\!\left(\frac{1}{m}\right). \qquad (15)

We already know that k/m is bounded as m \rightarrow \infty; making the ansatz k/m = c + O(1/m), and therefore z = e^{-c} + O(1/m), and substituting it into (15) yields

: 1 - \frac{e^{-c}}{1-e^{-c}} = O\!\left(\frac{1}{m}\right) \quad \text{as } m \rightarrow \infty;

therefore c = \ln 2. We therefore have k/m = \ln 2 + O(1/m), and so

: \frac{1}{z} = e^{k/m} = 2 + \frac{c_1}{m} + \frac{c_2}{m^2} + O\!\left(\frac{1}{m^3}\right) \quad \text{as } m \rightarrow \infty

for certain constants c_1 and c_2. Substituting these formulas into the closed forms of the T_n and carrying the expansion one order further yields

: \frac{k}{m} = \ln(2) \left(1 - \frac{3}{2m} + O\!\left(\frac{1}{m^2}\right) \right) \quad \text{as } m \rightarrow \infty. \qquad (16)

The O(1/m^2) term in (16) must be bounded effectively. To that end, we define the function

: F(x,\lambda) = \left.\left( 1 - T_0 + \frac{k}{2m^2} T_2 + \frac{k}{3m^3} T_3 - \frac{k(k-2)}{8m^4} T_4 \right) \right\vert_{m = 1/x, \; k = \lambda/x},

in which the T_n are evaluated in closed form as functions of z = e^{-\lambda}. The inequality above then confines F(1/m, k/m) to an explicit narrow window around 0, and we further have

: \begin{align} F(x, \ln(2) (1 - \tfrac{3}{2}x)) &> \phantom{-}0.005x^2 - 100x^3 \quad \text{and} \\ F(x, \ln(2) (1 - \tfrac{3}{2}x - 0.004x^2)) &< -0.00015x^2 + 100x^3 \end{align}

for sufficiently small x > 0. Therefore

: \begin{align} F \left( \frac{1}{m} , \ln(2) \left( 1 - \frac{3}{2m} \right) \right) &> \phantom{-}\frac{0.004}{m^2} \quad \text{for } m > 2202 \cdot 10^4, \quad \text{and} \\ F \left( \frac{1}{m} , \ln(2) \left( 1 - \frac{3}{2m} - \frac{1}{250m^2} \right) \right) &< -\frac{0.0001}{m^2} \quad \text{for } m > \phantom{0}734 \cdot 10^6. \end{align}

Comparing these with the narrow window containing F(1/m, k/m) then shows that, for m > 10^9, we have

: \ln(2) \left( 1 - \frac{3}{2m} - \frac{1}{250m^2} \right) < \frac{k}{m} < \ln(2) \left( 1 - \frac{3}{2m} \right),

and therefore

: 0 < \ln(2) - \frac{2k}{2m-3} < \frac{1}{100m^2}.

Since \frac{1}{100m^2} < \frac{1}{2(2m-3)^2}, invoking Legendre's theorem on continued fractions finally proves that 2k/(2m-3) must be a convergent of \ln 2 (Moser's bound m > 10^{10^6} guarantees that the estimates above apply). Leveraging this result, 31 billion decimal digits of \ln 2 can be used to exclude any nontrivial solutions below 10^{10^9}.
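As a concrete sanity check (a sketch, not from the source; the fraction 2k/(2m-3) and the convergents helper are stated here as assumptions for illustration): for the known solution (m, k) = (3, 1), the fraction 2k/(2m-3) equals 2/3, which is indeed among the first convergents of ln 2:

```python
# Compute the first few continued-fraction convergents of ln(2) at float
# precision and check that 2/3 (from the known solution m=3, k=1) appears.

from fractions import Fraction
from math import log

def convergents(x, n):
    """First n continued-fraction convergents of x (float precision only)."""
    result, a = [], []
    for _ in range(n):
        ai = int(x)
        a.append(ai)
        # Rebuild the convergent from the partial quotients collected so far.
        frac = Fraction(a[-1])
        for q in reversed(a[:-1]):
            frac = q + 1 / frac
        result.append(frac)
        x = 1 / (x - ai)
    return result

convs = convergents(log(2), 6)
print(convs)  # [Fraction(0, 1), Fraction(1, 1), Fraction(2, 3), ...]
```

The continued fraction of ln 2 begins [0; 1, 2, 3, 1, 6, ...], so the early convergents are 0, 1, 2/3, 7/10, 9/13, 61/88.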


See also

* List of conjectures by Paul Erdős
* List of things named after Paul Erdős
* List of unsolved problems in mathematics
* Sums of powers
* Faulhaber's formula

