In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form
: \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} \left( \lim_{n\to\infty} a_{n,m} \right),
: \lim_{y\to b} \lim_{x\to a} f(x, y) = \lim_{y\to b} \left( \lim_{x\to a} f(x, y) \right),
or other similar forms. An iterated limit is only defined for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limiting process as one of the two variables approaches some number, getting an expression whose value depends only on the other variable, and then one takes the limit as the other variable approaches some number.


Types of iterated limits

This section introduces definitions of iterated limits in two variables. These definitions extend naturally to more than two variables.


Iterated limit of sequence

For each n, m \in \mathbf{N}, let a_{n,m} \in \mathbf{R} be a real double sequence. Then there are two forms of iterated limits, namely
: \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} \qquad \text{and} \qquad \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m}.
For example, let
: a_{n,m} = \frac{n}{n+m}.
Then
: \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} 1 = 1,
and
: \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} = \lim_{n\to\infty} 0 = 0.
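A short numerical sketch in Python (an illustration only; names and cut-off values are arbitrary) makes the order dependence visible by fixing one index at a large value while the other stays small:

```python
# Numerical illustration of the two iterated limits of a(n, m) = n / (n + m).

def a(n, m):
    return n / (n + m)

# Inner limit in n first: for each fixed m, a(n, m) -> 1 as n grows.
inner_in_n = [a(10**9, m) for m in range(1, 6)]   # all close to 1
# Inner limit in m first: for each fixed n, a(n, m) -> 0 as m grows.
inner_in_m = [a(n, 10**9) for n in range(1, 6)]   # all close to 0

print(inner_in_n)  # ~[1.0, ...]  -> outer limit in m is 1
print(inner_in_m)  # ~[0.0, ...]  -> outer limit in n is 0
```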


Iterated limit of function

Let f : X \times Y \to \mathbf{R}. Then there are also two forms of iterated limits, namely
: \lim_{y\to b} \lim_{x\to a} f(x, y) \qquad \text{and} \qquad \lim_{x\to a} \lim_{y\to b} f(x, y).
For example, let f : \mathbf{R}^2 \setminus \{(0,0)\} \to \mathbf{R} such that
: f(x,y) = \frac{x^2}{x^2+y^2}.
Then
: \lim_{y\to 0} \lim_{x\to 0} \frac{x^2}{x^2+y^2} = \lim_{y\to 0} 0 = 0,
and
: \lim_{x\to 0} \lim_{y\to 0} \frac{x^2}{x^2+y^2} = \lim_{x\to 0} 1 = 1.
The limit(s) for ''x'' and/or ''y'' can also be taken at infinity, i.e.,
: \lim_{y\to\infty} \lim_{x\to\infty} f(x, y) \qquad \text{and} \qquad \lim_{x\to\infty} \lim_{y\to\infty} f(x, y).
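For instance, continuing with the same function, the two iterated limits at infinity also disagree:
: \lim_{y\to\infty} \lim_{x\to\infty} \frac{x^2}{x^2+y^2} = \lim_{y\to\infty} 1 = 1, \qquad \lim_{x\to\infty} \lim_{y\to\infty} \frac{x^2}{x^2+y^2} = \lim_{x\to\infty} 0 = 0.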


Iterated limit of sequence of functions

For each n \in \mathbf{N}, let f_n : X \to \mathbf{R} be a sequence of functions. Then there are two forms of iterated limits, namely
: \lim_{n\to\infty} \lim_{x\to a} f_n(x) \qquad \text{and} \qquad \lim_{x\to a} \lim_{n\to\infty} f_n(x).
For example, let f_n : [0, 1] \to \mathbf{R} such that
: f_n(x) = x^n.
Then
: \lim_{n\to\infty} \lim_{x\to 1} f_n(x) = \lim_{n\to\infty} 1^n = 1,
and
: \lim_{x\to 1} \lim_{n\to\infty} f_n(x) = \lim_{x\to 1} 0 = 0.
The limit in ''x'' can also be taken at infinity, i.e.,
: \lim_{n\to\infty} \lim_{x\to\infty} f_n(x) \qquad \text{and} \qquad \lim_{x\to\infty} \lim_{n\to\infty} f_n(x).
Note that the limit in ''n'' is taken discretely, while the limit in ''x'' is taken continuously.
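The order dependence for f_n(x) = x^n can again be checked numerically; the following Python sketch (illustrative only, with arbitrarily chosen sample values) evaluates x^n near the corner ''x'' = 1, ''n'' → ∞ in the two orders:

```python
# Numerical illustration for f_n(x) = x**n near x = 1.

def f(n, x):
    return x ** n

# x -> 1 first (n fixed): for each fixed n the values approach 1.
x_first = [f(n, 0.999999) for n in (1, 10, 100)]       # all close to 1
# n -> infinity first (x fixed below 1): the values approach 0.
n_first = [f(10**7, x) for x in (0.9, 0.99, 0.999)]    # all close to 0

print(x_first)  # outer limit over n is then 1
print(n_first)  # outer limit as x -> 1 is then 0
```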


Comparison with other limits in multiple variables

This section introduces further definitions of limits in two variables, for comparison with iterated limits. These definitions extend naturally to more than two variables.


Limit of sequence

For a double sequence a_{n,m} \in \mathbf{R}, there is another definition of limit, commonly referred to as the double limit, denoted by
: L = \lim_{n,m\to\infty} a_{n,m},
which means that for all \epsilon > 0, there exists N = N(\epsilon) \in \mathbf{N} such that n, m > N implies \left| a_{n,m} - L \right| < \epsilon.

The following theorem states the relationship between the double limit and the iterated limits.

:Theorem 1. If \lim_{n,m\to\infty} a_{n,m} exists and equals ''L'', \lim_{n\to\infty} a_{n,m} exists for each large ''m'', and \lim_{m\to\infty} a_{n,m} exists for each large ''n'', then \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} and \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} also exist, and they equal ''L'', i.e.,
::\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} = \lim_{n,m\to\infty} a_{n,m}.

For example, let
: a_{n,m} = \frac{1}{n} + \frac{1}{m}.
Since \lim_{n,m\to\infty} a_{n,m} = 0, \lim_{n\to\infty} a_{n,m} = \frac{1}{m}, and \lim_{m\to\infty} a_{n,m} = \frac{1}{n}, we have
: \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} = 0.

This theorem requires the single limits \lim_{n\to\infty} a_{n,m} and \lim_{m\to\infty} a_{n,m} to converge. This condition cannot be dropped. For example, consider
: a_{n,m} = (-1)^m \left( \frac{1}{n} + \frac{1}{m} \right).
Then we may see that
: \lim_{n,m\to\infty} a_{n,m} = \lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = 0,
but \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} does not exist. This is because \lim_{m\to\infty} a_{n,m} does not exist in the first place.


Limit of function

For a two-variable function f : X \times Y \to \mathbf{R}, there are two other types of limits. One is the ordinary limit, denoted by
: L = \lim_{(x,y) \to (a,b)} f(x, y),
which means that for all \epsilon > 0, there exists \delta = \delta(\epsilon) > 0 such that 0 < \sqrt{(x-a)^2 + (y-b)^2} < \delta implies \left| f(x,y) - L \right| < \epsilon.

For this limit to exist, ''f''(''x'', ''y'') can be made as close to ''L'' as desired along every possible path approaching the point (''a'', ''b''). In this definition, the point (''a'', ''b'') is excluded from the paths. Therefore, the value of ''f'' at the point (''a'', ''b''), even if it is defined, does not affect the limit.

The other type is the double limit, denoted by
: L = \lim_{x\to a,\, y\to b} f(x,y),
which means that for all \epsilon > 0, there exists \delta = \delta(\epsilon) > 0 such that 0 < \left| x - a \right| < \delta and 0 < \left| y - b \right| < \delta implies \left| f(x,y) - L \right| < \epsilon.

For this limit to exist, ''f''(''x'', ''y'') can be made as close to ''L'' as desired along every possible path approaching the point (''a'', ''b''), except the lines ''x''=''a'' and ''y''=''b''. In other words, the value of ''f'' along the lines ''x''=''a'' and ''y''=''b'' does not affect the limit. This is different from the ordinary limit, where only the point (''a'', ''b'') is excluded. In this sense, the ordinary limit is a stronger notion than the double limit:

:Theorem 2. If \lim_{(x,y) \to (a,b)} f(x,y) exists and equals ''L'', then \lim_{x\to a,\, y\to b} f(x, y) exists and equals ''L'', i.e.,
::\lim_{x\to a,\, y\to b} f(x, y) = \lim_{(x,y) \to (a,b)} f(x,y).

Neither of these limits involves first taking one limit and then another. This contrasts with iterated limits, where the limiting process is taken in the ''x''-direction first, and then in the ''y''-direction (or in the reversed order).

The following theorem states the relationship between the double limit and the iterated limits:

:Theorem 3. If \lim_{x\to a,\, y\to b} f(x, y) exists and equals ''L'', \lim_{x\to a} f(x,y) exists for each ''y'' near ''b'', and \lim_{y\to b} f(x,y) exists for each ''x'' near ''a'', then \lim_{y\to b} \lim_{x\to a} f(x, y) and \lim_{x\to a} \lim_{y\to b} f(x, y) also exist, and they equal ''L'', i.e.,
::\lim_{y\to b} \lim_{x\to a} f(x, y) = \lim_{x\to a} \lim_{y\to b} f(x, y) = \lim_{x\to a,\, y\to b} f(x, y).

For example, let
: f(x,y) = \begin{cases} 1 & \text{if } xy \ne 0 \\ 0 & \text{if } xy = 0 \end{cases}.
Since \lim_{x\to 0,\, y\to 0} f(x, y) = 1,
: \lim_{x\to 0} f(x, y) = \begin{cases} 1 & \text{if } y \ne 0 \\ 0 & \text{if } y = 0 \end{cases} \qquad \text{and} \qquad \lim_{y\to 0} f(x, y) = \begin{cases} 1 & \text{if } x \ne 0 \\ 0 & \text{if } x = 0 \end{cases},
we have
: \lim_{y\to 0} \lim_{x\to 0} f(x, y) = \lim_{x\to 0} \lim_{y\to 0} f(x, y) = 1.
(Note that in this example, \lim_{(x,y) \to (0,0)} f(x,y) does not exist.)

This theorem requires the single limits \lim_{x\to a} f(x, y) and \lim_{y\to b} f(x, y) to exist. This condition cannot be dropped. For example, consider
: f(x, y) = x \sin \left( \frac{1}{y} \right).
Then we may see that
: \lim_{x\to 0,\, y\to 0} f(x, y) = \lim_{y\to 0} \lim_{x\to 0} f(x,y) = 0,
but \lim_{x\to 0} \lim_{y\to 0} f(x,y) does not exist. This is because \lim_{y\to 0} f(x,y) does not exist for ''x'' near 0 in the first place.

Combining Theorems 2 and 3, we have the following corollary:

:Corollary 3.1. If \lim_{(x,y) \to (a,b)} f(x, y) exists and equals ''L'', \lim_{x\to a} f(x,y) exists for each ''y'' near ''b'', and \lim_{y\to b} f(x,y) exists for each ''x'' near ''a'', then \lim_{y\to b} \lim_{x\to a} f(x, y) and \lim_{x\to a} \lim_{y\to b} f(x, y) also exist, and they equal ''L'', i.e.,
::\lim_{y\to b} \lim_{x\to a} f(x, y) = \lim_{x\to a} \lim_{y\to b} f(x, y) = \lim_{(x,y) \to (a,b)} f(x, y).
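To sketch why the double limit in the last example is 0: for y \ne 0,
: \left| x \sin\left(\tfrac{1}{y}\right) - 0 \right| \le \left| x \right| < \epsilon \qquad \text{whenever } 0 < \left| x \right| < \delta = \epsilon \text{ and } 0 < \left| y \right| < \delta,
so the double limit exists even though \lim_{y\to 0} x \sin(1/y) does not exist for any fixed x \ne 0.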


Limit at infinity of function

For a two-variable function f : X \times Y \to \mathbf{R}, we may also define the double limit at infinity
: L = \lim_{x\to\infty,\, y\to\infty} f(x,y),
which means that for all \epsilon > 0, there exists M = M(\epsilon) > 0 such that x > M and y > M implies \left| f(x,y) - L \right| < \epsilon.

Similar definitions may be given for limits at negative infinity.

The following theorem states the relationship between the double limit at infinity and the iterated limits at infinity:

:Theorem 4. If \lim_{x\to\infty,\, y\to\infty} f(x, y) exists and equals ''L'', \lim_{x\to\infty} f(x,y) exists for each large ''y'', and \lim_{y\to\infty} f(x,y) exists for each large ''x'', then \lim_{y\to\infty} \lim_{x\to\infty} f(x, y) and \lim_{x\to\infty} \lim_{y\to\infty} f(x, y) also exist, and they equal ''L'', i.e.,
::\lim_{y\to\infty} \lim_{x\to\infty} f(x, y) = \lim_{x\to\infty} \lim_{y\to\infty} f(x, y) = \lim_{x\to\infty,\, y\to\infty} f(x, y).

For example, let
: f(x,y) = \frac{x \sin y}{xy + 1}.
Since \lim_{x\to\infty,\, y\to\infty} f(x,y) = 0, \lim_{x\to\infty} f(x, y) = \frac{\sin y}{y}, and \lim_{y\to\infty} f(x, y) = 0, we have
: \lim_{y\to\infty} \lim_{x\to\infty} f(x,y) = \lim_{x\to\infty} \lim_{y\to\infty} f(x,y) = 0.

Again, this theorem requires the single limits \lim_{x\to\infty} f(x, y) and \lim_{y\to\infty} f(x, y) to exist. This condition cannot be dropped. For example, consider
: f(x, y) = \frac{\sin x}{y}.
Then we may see that
: \lim_{x\to\infty,\, y\to\infty} f(x, y) = \lim_{x\to\infty} \lim_{y\to\infty} f(x,y) = 0,
but \lim_{y\to\infty} \lim_{x\to\infty} f(x,y) does not exist. This is because \lim_{x\to\infty} f(x,y) does not exist for fixed ''y'' in the first place.
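To sketch the \epsilon-M estimate behind the double limit in the example above: for x, y > M = \tfrac{1}{\epsilon},
: \left| \frac{x \sin y}{xy + 1} \right| \le \frac{x}{xy + 1} < \frac{x}{xy} = \frac{1}{y} < \epsilon.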


Invalid converses of the theorems

The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of iterated limits, even if they are equal, does not imply the existence of the double limit. A counterexample is
: f(x,y) = \frac{xy}{x^2 + y^2}
near the point (0, 0). On one hand,
: \lim_{x\to 0} \lim_{y\to 0} f(x,y) = \lim_{y\to 0} \lim_{x\to 0} f(x,y) = 0.
On the other hand, the double limit \lim_{x\to 0,\, y\to 0} f(x, y) does not exist. This can be seen by taking the limit along the path (''x'', ''y'') = (''t'', ''t'') → (0,0), which gives
: \lim_{t\to 0} f(t,t) = \lim_{t\to 0} \frac{t^2}{t^2 + t^2} = \frac{1}{2},
and along the path (''x'', ''y'') = (''t'', ''t''^2) → (0,0), which gives
: \lim_{t\to 0} f(t,t^2) = \lim_{t\to 0} \frac{t^3}{t^2 + t^4} = 0.
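A brief Python sketch (illustrative only) evaluates this function along the two paths and shows the disagreement numerically:

```python
# Evaluate f(x, y) = x*y / (x^2 + y^2) along two paths approaching (0, 0).

def f(x, y):
    return x * y / (x**2 + y**2)

ts = [0.1, 0.01, 0.001, 0.0001]
along_diagonal = [f(t, t) for t in ts]       # constant 0.5 along y = x
along_parabola = [f(t, t**2) for t in ts]    # tends to 0 along y = x^2

print(along_diagonal)  # [0.5, 0.5, 0.5, 0.5]
print(along_parabola)  # ~[0.099, 0.0099, 0.00099, ...]
```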


Moore-Osgood theorem for interchanging limits

In the examples above, we may see that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore-Osgood theorem. The essence of the interchangeability depends on uniform convergence.


Interchanging limits of sequences

The following theorem allows us to interchange two limits of sequences.

:Theorem 5. If \lim_{n\to\infty} a_{n,m} = b_m uniformly (in ''m''), and \lim_{m\to\infty} a_{n,m} = c_n for each large ''n'', then both \lim_{m\to\infty} b_m and \lim_{n\to\infty} c_n exist and are equal to the double limit, i.e.,
::\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m} = \lim_{n,m\to\infty} a_{n,m}.

:''Proof''. By the uniform convergence, for any \epsilon > 0 there exists N_1(\epsilon) \in \mathbf{N} such that for all m \in \mathbf{N}, n, k > N_1 implies \left| a_{n,m} - a_{k,m} \right| < \frac{\epsilon}{3}.
:As m \to \infty, we have \left| c_n - c_k \right| < \frac{\epsilon}{3}, which means that c_n is a Cauchy sequence which converges to a limit L. In addition, as k \to \infty, we have \left| c_n - L \right| < \frac{\epsilon}{3}.
:On the other hand, if we take k \to \infty first, we have \left| a_{n,m} - b_m \right| < \frac{\epsilon}{3}.
:By the pointwise convergence, for any \epsilon > 0 and n > N_1, there exists N_2(\epsilon, n) \in \mathbf{N} such that m > N_2 implies \left| a_{n,m} - c_n \right| < \frac{\epsilon}{3}.
:Then for that fixed n, m > N_2 implies \left| b_m - L \right| \le \left| b_m - a_{n,m} \right| + \left| a_{n,m} - c_n \right| + \left| c_n - L \right| \le \epsilon.
:This proves that \lim_{m\to\infty} b_m = L = \lim_{n\to\infty} c_n.
:Also, by taking N = \max\{N_1, N_2\}, we see that this limit also equals \lim_{n,m\to\infty} a_{n,m}.

A corollary is about the interchangeability of infinite sums.

:Corollary 5.1. If \sum_{n=1}^{\infty} a_{n,m} converges uniformly (in ''m''), and \sum_{m=1}^{\infty} a_{n,m} converges for each large ''n'', then
::\sum_{m=1}^{\infty} \sum_{n=1}^{\infty} a_{n,m} = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} a_{n,m}.
:''Proof''. Direct application of Theorem 5 on S_{k,\ell} = \sum_{n=1}^{k} \sum_{m=1}^{\ell} a_{n,m}.
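For a concrete, easily checked instance of Corollary 5.1, take a_{n,m} = 2^{-n} 3^{-m}. The series \sum_{n=1}^{\infty} 2^{-n} 3^{-m} = 3^{-m} converges uniformly in ''m'' (its tail \sum_{n > k} 2^{-n} 3^{-m} is at most 2^{-k}, independently of ''m''), and \sum_{m=1}^{\infty} 2^{-n} 3^{-m} = \tfrac{1}{2} \cdot 2^{-n} converges for each ''n'', so
: \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} 2^{-n} 3^{-m} = \sum_{m=1}^{\infty} 3^{-m} = \frac{1}{2} = \sum_{n=1}^{\infty} \frac{2^{-n}}{2} = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} 2^{-n} 3^{-m}.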


Interchanging limits of functions

Similar results hold for multivariable functions.

:Theorem 6. If \lim_{x\to a} f(x,y) = g(y) uniformly (in ''y'') on Y \setminus \{b\}, and \lim_{y\to b} f(x,y) = h(x) for each ''x'' near ''a'', then both \lim_{y\to b} g(y) and \lim_{x\to a} h(x) exist and are equal to the double limit, i.e.,
::\lim_{y\to b} \lim_{x\to a} f(x,y) = \lim_{x\to a} \lim_{y\to b} f(x,y) = \lim_{x\to a,\, y\to b} f(x,y).
:The ''a'' and ''b'' here can possibly be infinity.

:''Proof''. By the existence of the uniform limit, for any \epsilon > 0 there exists \delta_1(\epsilon) > 0 such that for all y \in Y \setminus \{b\}, 0 < \left| x - a \right| < \delta_1 and 0 < \left| w - a \right| < \delta_1 implies \left| f(x,y) - f(w,y) \right| < \frac{\epsilon}{3}.
:As y \to b, we have \left| h(x) - h(w) \right| < \frac{\epsilon}{3}. By the Cauchy criterion, \lim_{x\to a} h(x) exists and equals a number L. In addition, as w \to a, we have \left| h(x) - L \right| < \frac{\epsilon}{3}.
:On the other hand, if we take w \to a first, we have \left| f(x,y) - g(y) \right| < \frac{\epsilon}{3}.
:By the existence of the pointwise limit, for any \epsilon > 0 and x near a, there exists \delta_2(\epsilon, x) > 0 such that 0 < \left| y - b \right| < \delta_2 implies \left| f(x,y) - h(x) \right| < \frac{\epsilon}{3}.
:Then for that fixed x, 0 < \left| y - b \right| < \delta_2 implies \left| g(y) - L \right| \le \left| g(y) - f(x,y) \right| + \left| f(x,y) - h(x) \right| + \left| h(x) - L \right| \le \epsilon.
:This proves that \lim_{y\to b} g(y) = L = \lim_{x\to a} h(x).
:Also, by taking \delta = \min\{\delta_1, \delta_2\}, we see that this limit also equals \lim_{x\to a,\, y\to b} f(x,y).

Note that this theorem does not imply the existence of \lim_{(x,y)\to(a,b)} f(x,y). A counterexample is
: f(x,y) = \begin{cases} 1 & \text{if } xy \ne 0 \\ 0 & \text{if } xy = 0 \end{cases}
near (0,0).
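To see the role of the uniformity hypothesis, one can revisit the earlier example f(x,y) = \tfrac{x^2}{x^2+y^2}: for each fixed y \ne 0 we have \lim_{x\to 0} f(x,y) = 0, but the convergence is not uniform in ''y'', since f(y,y) = \tfrac{1}{2} no matter how small \left| y \right| is. Theorem 6 therefore does not apply, which is consistent with the two iterated limits 0 and 1 found above.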


Interchanging limits of sequences of functions

An important variation of the Moore-Osgood theorem is specifically for sequences of functions.

:Theorem 7. If \lim_{n\to\infty} f_n(x) = f(x) uniformly (in ''x'') on X \setminus \{a\}, and \lim_{x\to a} f_n(x) = L_n for each large ''n'', then both \lim_{x\to a} f(x) and \lim_{n\to\infty} L_n exist and are equal, i.e.,
::\lim_{x\to a} \lim_{n\to\infty} f_n(x) = \lim_{n\to\infty} \lim_{x\to a} f_n(x).
:The ''a'' here can possibly be infinity.

:''Proof''. By the uniform convergence, for any \epsilon > 0 there exists N(\epsilon) \in \mathbf{N} such that for all x \in X \setminus \{a\}, n, m > N implies \left| f_n(x) - f_m(x) \right| < \frac{\epsilon}{3}.
:As x \to a, we have \left| L_n - L_m \right| < \frac{\epsilon}{3}, which means that L_n is a Cauchy sequence which converges to a limit L. In addition, as m \to \infty, we have \left| L_n - L \right| < \frac{\epsilon}{3}.
:On the other hand, if we take m \to \infty first, we have \left| f_n(x) - f(x) \right| < \frac{\epsilon}{3}.
:By the existence of the pointwise limit, for any \epsilon > 0 and n > N, there exists \delta(\epsilon, n) > 0 such that 0 < \left| x - a \right| < \delta implies \left| f_n(x) - L_n \right| < \frac{\epsilon}{3}.
:Then for that fixed n, 0 < \left| x - a \right| < \delta implies \left| f(x) - L \right| \le \left| f(x) - f_n(x) \right| + \left| f_n(x) - L_n \right| + \left| L_n - L \right| \le \epsilon.
:This proves that \lim_{x\to a} f(x) = L = \lim_{n\to\infty} L_n.

A corollary is the continuity theorem for uniform convergence as follows:

:Corollary 7.1. If \lim_{n\to\infty} f_n(x) = f(x) uniformly (in ''x'') on X, and the f_n(x) are continuous at x = a \in X, then f(x) is also continuous at x = a.
:In other words, the uniform limit of continuous functions is continuous.
:''Proof''. By Theorem 7, \lim_{x\to a} f(x) = \lim_{x\to a} \lim_{n\to\infty} f_n(x) = \lim_{n\to\infty} \lim_{x\to a} f_n(x) = \lim_{n\to\infty} f_n(a) = f(a).

Another corollary is about the interchangeability of a limit and an infinite sum.

:Corollary 7.2. If \sum_{n=1}^{\infty} f_n(x) converges uniformly (in ''x'') on X \setminus \{a\}, and \lim_{x\to a} f_n(x) exists for each large ''n'', then
::\lim_{x\to a} \sum_{n=1}^{\infty} f_n(x) = \sum_{n=1}^{\infty} \lim_{x\to a} f_n(x).
:''Proof''. Direct application of Theorem 7 on S_k(x) = \sum_{n=1}^{k} f_n(x) near x = a.
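Connecting Corollary 7.1 back to the earlier example f_n(x) = x^n on [0, 1]: the pointwise limit is f(x) = 0 for 0 \le x < 1 and f(1) = 1, which is not continuous at x = 1, so by Corollary 7.1 the convergence cannot be uniform on [0, 1]. Indeed, \sup_{0 \le x < 1} \left| x^n - 0 \right| = 1 for every ''n''.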


Applications


Sum of infinite entries in a matrix

Consider a matrix of infinite entries
: \begin{bmatrix} 1 & -1 & 0 & 0 & \cdots \\ 0 & 1 & -1 & 0 & \cdots \\ 0 & 0 & 1 & -1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}.
Suppose we would like to find the sum of all its entries. If we sum it column by column first, we will find that the first column gives 1, while all the others give 0. Hence the sum of all columns is 1. However, if we sum it row by row first, we will find that all rows give 0. Hence the sum of all rows is 0.

The explanation for this paradox is that the vertical sum to infinity and the horizontal sum to infinity are two limiting processes that cannot be interchanged. Let S_{n,m} be the sum of entries up to the entry (''n'', ''m''). Then we have \lim_{m\to\infty} \lim_{n\to\infty} S_{n,m} = 1, but \lim_{n\to\infty} \lim_{m\to\infty} S_{n,m} = 0. In this case, the double limit \lim_{n,m\to\infty} S_{n,m} does not exist, and thus this problem is not well-defined.
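A short Python sketch (illustrative only) computes the partial sums S_{n,m} of this matrix and shows the two iterated limits heading to different values:

```python
# Partial sums S(n, m) of the matrix with 1 on the diagonal and -1 just above it.

def entry(i, j):
    if j == i:
        return 1
    if j == i + 1:
        return -1
    return 0

def S(n, m):
    return sum(entry(i, j) for i in range(1, n + 1) for j in range(1, m + 1))

print(S(1000, 5))   # 1: many rows, few columns (limit in n taken first)
print(S(5, 1000))   # 0: many columns, few rows (limit in m taken first)
```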


Integration over unbounded interval

By the integration theorem for uniform convergence, once we have \lim_{n\to\infty} f_n(x) converging uniformly on X, the limit in ''n'' and an integration over a bounded interval [a, b] \subseteq X can be interchanged:
: \lim_{n\to\infty} \int_a^b f_n(x) \,\mathrm{d}x = \int_a^b \lim_{n\to\infty} f_n(x) \,\mathrm{d}x.
However, such a property may fail for an improper integral over an unbounded interval [a, \infty) \subseteq X. In this case, one may rely on the Moore-Osgood theorem.

Consider
: L = \int_0^\infty \frac{x^2}{e^x - 1} \,\mathrm{d}x = \lim_{b\to\infty} \int_0^b \frac{x^2}{e^x - 1} \,\mathrm{d}x
as an example. We first expand the integrand as
: \frac{x^2}{e^x - 1} = \frac{x^2 e^{-x}}{1 - e^{-x}} = \sum_{k=1}^\infty x^2 e^{-kx}
for x \in [0, \infty). (Here ''x'' = 0 is a limiting case.)

One can prove by calculus that for x \in [0, \infty) and k \ge 1, we have
: x^2 e^{-kx} \le \frac{4 e^{-2}}{k^2}.
By the Weierstrass M-test, \sum_{k=1}^\infty x^2 e^{-kx} converges uniformly on [0, \infty). Then by the integration theorem for uniform convergence,
: L = \lim_{b\to\infty} \int_0^b \sum_{k=1}^\infty x^2 e^{-kx} \,\mathrm{d}x = \lim_{b\to\infty} \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \,\mathrm{d}x.
To further interchange the limit \lim_{b\to\infty} with the infinite summation \sum_{k=1}^\infty, the Moore-Osgood theorem requires the infinite series to be uniformly convergent. Note that
: \int_0^b x^2 e^{-kx} \,\mathrm{d}x \le \int_0^\infty x^2 e^{-kx} \,\mathrm{d}x = \frac{2}{k^3}.
Again, by the Weierstrass M-test, \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \,\mathrm{d}x converges uniformly in ''b'' on [0, \infty). Then by the Moore-Osgood theorem,
: L = \lim_{b\to\infty} \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \,\mathrm{d}x = \sum_{k=1}^\infty \lim_{b\to\infty} \int_0^b x^2 e^{-kx} \,\mathrm{d}x = \sum_{k=1}^\infty \frac{2}{k^3} = 2 \zeta(3).
(Here \zeta denotes the Riemann zeta function.)
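A quick numerical cross-check in Python (assuming NumPy and SciPy are available; the snippet is illustrative, not part of the derivation):

```python
# Numerical check that the improper integral equals 2*zeta(3) ~ 2.4041138...
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Integrand x^2 / (e^x - 1); expm1 keeps it accurate near x = 0.
integral, _ = quad(lambda x: x**2 / np.expm1(x), 0, np.inf)

print(integral)       # ~2.4041138063
print(2 * zeta(3))    # ~2.4041138063
```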


See also

* Limit of a sequence
* Limit of a function
* Uniform convergence
* Interchange of limiting operations

