Although the function (sin ''x'')/''x'' is not defined at zero, as ''x'' becomes closer and closer to zero, (sin ''x'')/''x'' becomes arbitrarily close to 1. In other words, the limit of (sin ''x'')/''x'', as ''x'' approaches zero, equals 1.
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input. Formal definitions, first devised in the early 19th century, are given below. Informally, a function ''f'' assigns an output ''f''(''x'') to every input ''x''. We say that the function has a limit ''L'' at an input ''p'' if ''f''(''x'') gets closer and closer to ''L'' as ''x'' moves closer and closer to ''p''. More specifically, when ''f'' is applied to any input ''sufficiently'' close to ''p'', the output value is forced ''arbitrarily'' close to ''L''. On the other hand, if some inputs very close to ''p'' are taken to outputs that stay a fixed distance apart, then we say the limit ''does not exist''.

The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.

# History

Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bolzano who, in 1817, introduced the basics of the epsilon-delta technique to define continuous functions. However, his work was not known during his lifetime. In his 1821 book ''Cours d'analyse'', Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of $y = f(x)$ by saying that an infinitesimal change in ''x'' necessarily produces an infinitesimal change in ''y'', though it has been claimed that he used a rigorous epsilon-delta definition in his proofs (see "Who Gave You the Epsilon?", pp. 5–13, also available at http://www.maa.org/pubs/Calc_articles/ma002.pdf). In 1861, Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations lim and lim''x''→''x''0. The modern notation of placing the arrow below the limit symbol is due to Hardy, who introduced it in his book ''A Course of Pure Mathematics'' in 1908.

# Motivation

Imagine a person walking over a landscape represented by the graph of ''y'' = ''f''(''x''). Their horizontal position is measured by the value of ''x'', much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate ''y''. They walk toward the horizontal position given by ''x'' = ''p''. As they get closer and closer to it, they notice that their altitude approaches ''L''. If asked about the altitude corresponding to ''x'' = ''p'', they would answer ''L''.

What, then, does it mean to say their altitude is approaching ''L''? It means that their altitude gets nearer and nearer to ''L'', except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of ''L''. They report back that indeed, they can get within ten vertical meters of ''L'', since they note that when they are within fifty horizontal meters of ''p'', their altitude is ''always'' within ten meters of ''L''. The accuracy goal is then changed: can they get within one vertical meter? Yes. If they are anywhere within seven horizontal meters of ''p'', their altitude will always remain within one meter of the target ''L''. In summary, to say that the traveler's altitude approaches ''L'' as their horizontal position approaches ''p'' is to say that for every target accuracy goal, however small it may be, there is some neighbourhood of ''p'' within which the altitude fulfills that accuracy goal.

The initial informal statement can now be explicated:

:The limit of a function ''f''(''x'') as ''x'' approaches ''p'' is a number ''L'' with the following property: given any target distance from ''L'', there is a distance from ''p'' within which the values of ''f''(''x'') remain within the target distance.

In fact, this explicit statement is quite close to the formal definition of the limit of a function, with values in a topological space. More specifically, to say that

:$\lim_{x \to p} f(x) = L$

is to say that ''f''(''x'') can be made as close to ''L'' as desired, by making ''x'' close enough, but not equal, to ''p''. The following definitions, known as (''ε'', ''δ'')-definitions, are the generally accepted definitions for the limit of a function in various contexts.

# Functions of a single variable

## (''ε'', ''δ'')-definition of limit

Suppose $f : \R \to \R$ is a function defined on the real line, and there are two real numbers ''p'' and ''L''. One says that the limit of ''f'', as ''x'' approaches ''p'', is ''L'', written

:$\lim_{x \to p} f(x) = L,$

or alternatively, that ''f''(''x'') tends to ''L'' as ''x'' tends to ''p'', written

:$f(x) \to L \;\; \text{as} \;\; x \to p,$

if the following property holds: for every real ''ε'' > 0, there exists a real ''δ'' > 0 such that for all real ''x'', 0 < |''x'' − ''p''| < ''δ'' implies |''f''(''x'') − ''L''| < ''ε''. Symbolically:

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in \R)\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$

For example, we may say

:$\lim_{x \to 2} (4x + 1) = 9$

because for every real ''ε'' > 0, we can take ''δ'' = ''ε''/4, so that for all real ''x'', if 0 < |''x'' − 2| < ''δ'', then |''f''(''x'') − 9| < ''ε''.

A more general definition applies for functions defined on subsets of the real line. Let (''a'', ''b'') be an open interval in $\R$, and ''p'' a number in (''a'', ''b''). Let $f : S \to \R$ be a real-valued function defined on a set ''S'' that contains all of (''a'', ''b''), except possibly ''p'' itself. It is then said that the limit of ''f'' as ''x'' approaches ''p'' is ''L'' if:

:For every real ''ε'' > 0, there exists a real ''δ'' > 0 such that for all ''x'' in (''a'', ''b''), 0 < |''x'' − ''p''| < ''δ'' implies |''f''(''x'') − ''L''| < ''ε''.

Symbolically:

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$

For example, we may say

:$\lim_{x \to 1} \sqrt{x + 3} = 2$

because for every real ''ε'' > 0, we can take ''δ'' = ''ε'', so that for all real ''x'' ≥ −3, if 0 < |''x'' − 1| < ''δ'', then |''f''(''x'') − 2| < ''ε''. In this example, ''S'' = [−3, ∞) contains open intervals around the point 1 (for example, the interval (0, 2)).

Here, note that the value of the limit does not depend on ''f'' being defined at ''p'', nor on the value ''f''(''p''), if it is defined. For example,

:$\lim_{x \to 1} \frac{2x^2 - x - 1}{x - 1} = 3$

because for every ''ε'' > 0, we can take ''δ'' = ''ε''/2, so that for all real ''x'' ≠ 1, if 0 < |''x'' − 1| < ''δ'', then |''f''(''x'') − 3| < ''ε''. Note that here ''f''(1) is undefined.

The letters ''ε'' and ''δ'' can be understood as "error" and "distance". In fact, Cauchy used ''ε'' as an abbreviation for "error" in some of his work, though in his definition of continuity he used an infinitesimal $\alpha$ rather than either ''ε'' or ''δ'' (see ''Cours d'Analyse''). In these terms, the error (''ε'') in the measurement of the value at the limit can be made as small as desired by reducing the distance (''δ'') to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that ''δ'' and ''ε'' represent distances helps suggest these generalizations.
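The ε–δ condition is mechanical enough to spot-check numerically. The sketch below (an informal illustration; the helper name `check_epsilon_delta` and the sampling scheme are ours, not part of the standard treatment) samples points in a punctured ''δ''-interval around ''p'' and verifies the ''ε''-bound, using the witness ''δ'' = ''ε''/4 from the first example above.

```python
import random

def check_epsilon_delta(f, p, L, delta_of_eps, trials=10_000):
    """For several eps, draw x with 0 < |x - p| < delta(eps) and
    verify |f(x) - L| < eps, as the definition demands."""
    for eps in (1.0, 0.1, 0.01, 0.001):
        delta = delta_of_eps(eps)
        for _ in range(trials):
            # sample the punctured interval (p - delta, p + delta) \ {p},
            # staying strictly inside it to avoid boundary round-off
            x = p + random.uniform(-1.0, 1.0) * 0.999 * delta
            if x == p:
                continue
            if abs(f(x) - L) >= eps:
                return False
    return True

# lim_{x -> 2} (4x + 1) = 9, witnessed by delta = eps/4
print(check_epsilon_delta(lambda x: 4 * x + 1, 2.0, 9.0, lambda e: e / 4))
```

Because `x == p` is skipped, the same harness accepts a removable singularity: one instance consistent with the text's third example (limit 3 at ''x'' = 1 with ''δ'' = ''ε''/2) is `check_epsilon_delta(lambda x: (2*x*x - x - 1)/(x - 1), 1.0, 3.0, lambda e: e / 2)`.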

## Existence and one-sided limits

Alternatively, ''x'' may approach ''p'' from above (right) or below (left), in which case the limits may be written as

:$\lim_{x \to p^+} f(x) = L$

or

:$\lim_{x \to p^-} f(x) = L$

respectively. If both of these limits exist at ''p'' and are equal there, then this can be referred to as ''the'' limit of ''f''(''x'') at ''p''. If the one-sided limits exist at ''p'' but are unequal, then there is no limit at ''p'' (i.e., the limit at ''p'' does not exist). If either one-sided limit does not exist at ''p'', then the limit at ''p'' also does not exist.

A formal definition is as follows. The limit of ''f'' as ''x'' approaches ''p'' from above is ''L'' if:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that whenever 0 < ''x'' − ''p'' < ''δ'', we have |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < x - p < \delta \implies |f(x) - L| < \varepsilon).$

The limit of ''f'' as ''x'' approaches ''p'' from below is ''L'' if:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that whenever 0 < ''p'' − ''x'' < ''δ'', we have |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < p - x < \delta \implies |f(x) - L| < \varepsilon).$

If the limit does not exist, then the oscillation of ''f'' at ''p'' is non-zero.

## More general subsets

Apart from open intervals, limits can be defined for functions on arbitrary subsets of $\R$, as follows: let $f : S \to \R$ be a real-valued function defined on an arbitrary $S \subseteq \R$. Let ''p'' be a limit point of ''S''—that is, ''p'' is the limit of some sequence of elements of ''S'' distinct from ''p''. Then we say the limit of ''f'', as ''x'' approaches ''p'' from values in ''S'', is ''L'', written

:$\lim_{x \to p,\, x \in S} f(x) = L,$

if the following holds:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'', 0 < |''x'' − ''p''| < ''δ'' implies |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$

The condition that ''f'' be defined on ''S'' is that ''S'' be a subset of the domain of ''f''. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking ''S'' to be an open interval of the form $(-\infty, a)$), and right-handed limits (e.g., by taking ''S'' to be an open interval of the form $(a, \infty)$). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function ''f''(''x'') = $\sqrt{x}$ can have limit 0 as ''x'' approaches 0 from above:

:$\lim_{x \to 0,\, x \in [0, \infty)} \sqrt{x} = 0$

since for every ''ε'' > 0, we may take ''δ'' = ''ε''², such that for all ''x'' ≥ 0, if 0 < |''x'' − 0| < ''δ'', then |''f''(''x'') − 0| < ''ε''.

## Deleted versus non-deleted limits

The definition of limit given here does not depend on how (or whether) ''f'' is defined at ''p''. Some authors refer to this as a ''deleted limit'', because it excludes the value of ''f'' at ''p''. The corresponding non-deleted limit does depend on the value of ''f'' at ''p'', if ''p'' is in the domain of ''f''. Let $f : S \to \R$ be a real-valued function. The non-deleted limit of ''f'', as ''x'' approaches ''p'', is ''L'' if:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'', |''x'' − ''p''| < ''δ'' implies |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(|x - p| < \delta \implies |f(x) - L| < \varepsilon).$

The definition is the same, except that the neighborhood now includes the point ''p'', in contrast to the deleted neighborhood. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits). Although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular, and most standard texts take "limit" to mean the deleted limit.

## Examples

### Non-existence of one-sided limit(s)

The function

:$f(x) = \begin{cases} \sin\dfrac{1}{x - 1} & \text{for } x < 1 \\ 0 & \text{for } x = 1 \\ \dfrac{1}{x - 1} & \text{for } x > 1 \end{cases}$

has no limit at $x_0 = 1$ (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function), but has a limit at every other ''x''-coordinate. The function

:$f(x) = \begin{cases} 1 & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$

(a.k.a., the Dirichlet function) has no limit at any ''x''-coordinate.

### Non-equality of one-sided limits

The function

:$f(x) = \begin{cases} 1 & \text{for } x < 0 \\ 2 & \text{for } x \ge 0 \end{cases}$

has a limit at every non-zero ''x''-coordinate (the limit equals 1 for negative ''x'' and equals 2 for positive ''x''). The limit at ''x'' = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
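A crude numerical probe makes the disagreement visible: approach 0 from each side and watch the sampled values. This is an informal illustration (the sampling scheme and helper name are ours), not a substitute for the one-sided definitions.

```python
def sample_one_sided(f, p, steps=6, h0=1e-1):
    """Sample f at points approaching p from the left and from the right."""
    left  = [f(p - h0 * 10.0 ** -k) for k in range(steps)]
    right = [f(p + h0 * 10.0 ** -k) for k in range(steps)]
    return left, right

# the step function from the example: 1 for x < 0, 2 for x >= 0
step = lambda x: 1 if x < 0 else 2

left, right = sample_one_sided(step, 0.0)
print(left)   # [1, 1, 1, 1, 1, 1]  -> left-hand limit 1
print(right)  # [2, 2, 2, 2, 2, 2]  -> right-hand limit 2, so no limit at 0
```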

### Limits at only one point

The functions

:$f(x) = \begin{cases} x & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$

and

:$f(x) = \begin{cases} |x| & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$

both have a limit at ''x'' = 0 and it equals 0.

### Limits at countably many points

The function

:$f(x) = \begin{cases} \sin x & x \text{ rational} \\ 1 & x \text{ irrational} \end{cases}$

has a limit at any ''x''-coordinate of the form $\frac{\pi}{2} + 2n\pi$, where ''n'' is any integer.

# Limits involving infinity

## Limits at infinity

Let $f : S \to \R$ be a function defined on $S \subseteq \R$. The limit of ''f'' as ''x'' approaches infinity is ''L'', denoted

:$\lim_{x \to \infty} f(x) = L,$

which means that:

:For every ''ε'' > 0, there exists a ''c'' > 0 such that whenever ''x'' > ''c'', we have |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x > c \implies |f(x) - L| < \varepsilon).$

Similarly, the limit of ''f'' as ''x'' approaches minus infinity is ''L'', denoted

:$\lim_{x \to -\infty} f(x) = L,$

which means that:

:For every ''ε'' > 0, there exists a ''c'' > 0 such that whenever ''x'' < −''c'', we have |''f''(''x'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x < -c \implies |f(x) - L| < \varepsilon).$

For example,

:$\lim_{x \to \infty} \left(-\frac{3}{x} + 4\right) = 4$

because for every ''ε'' > 0, we can take ''c'' = 3/''ε'' such that for all real ''x'', if ''x'' > ''c'', then |''f''(''x'') − 4| < ''ε''. Another example is that

:$\lim_{x \to -\infty} e^x = 0$

because for every ''ε'' > 0, we can take ''c'' = max{1, −ln ''ε''} such that for all real ''x'', if ''x'' < −''c'', then |''f''(''x'') − 0| < ''ε''.
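The witness ''c'' = 3/''ε'' from the first example can likewise be spot-checked: any sampled ''x'' beyond ''c'' must land within ''ε'' of 4. A minimal sketch (the helper name and sampling range are ours):

```python
import random

def check_limit_at_infinity(f, L, c_of_eps, trials=5_000, span=1e6):
    """For several eps, sample x strictly greater than c(eps) and
    verify |f(x) - L| < eps."""
    for eps in (1.0, 0.1, 0.01):
        c = c_of_eps(eps)
        for _ in range(trials):
            x = c + random.uniform(1e-9, span)  # any x > c should work
            if abs(f(x) - L) >= eps:
                return False
    return True

# lim_{x -> inf} (-3/x + 4) = 4, witnessed by c = 3/eps
print(check_limit_at_infinity(lambda x: -3.0 / x + 4.0, 4.0, lambda e: 3.0 / e))
```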

## Infinite limits

For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values. Let $f : S \to \R$ be a function defined on $S \subseteq \R$. The statement ''the limit of f as x approaches p is infinity'', denoted

:$\lim_{x \to p} f(x) = \infty,$

means that:

:For every ''N'' > 0, there exists a ''δ'' > 0 such that whenever 0 < |''x'' − ''p''| < ''δ'', we have ''f''(''x'') > ''N''.

:$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) > N).$

The statement ''the limit of f as x approaches p is minus infinity'', denoted

:$\lim_{x \to p} f(x) = -\infty,$

means that:

:For every ''N'' > 0, there exists a ''δ'' > 0 such that whenever 0 < |''x'' − ''p''| < ''δ'', we have ''f''(''x'') < −''N''.

:$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) < -N).$

For example,

:$\lim_{x \to 1^+} \frac{1}{x - 1} = \infty$

because for every ''N'' > 0, we can take ''δ'' = 1/''N'' such that for all real ''x'' > 0, if 0 < ''x'' − 1 < ''δ'', then ''f''(''x'') > ''N''. These ideas can be combined in a natural way to produce definitions for different combinations, such as

:$\lim_{x \to \infty} f(x) = \infty$, or $\lim_{x \to p^+} f(x) = -\infty$.

For example,

:$\lim_{x \to 0^+} \ln x = -\infty$

because for every ''N'' > 0, we can take ''δ'' = $e^{-N}$ such that for all real ''x'' > 0, if 0 < ''x'' − 0 < ''δ'', then ''f''(''x'') < −''N''. Limits involving infinity are connected with the concept of asymptotes. These notions of a limit attempt to provide a metric space interpretation to limits at infinity.
In fact, they are consistent with the topological space definition of limit if:

*a neighborhood of −∞ is defined to contain an interval [−∞, ''c'') for some ''c'' ∈ R,
*a neighborhood of ∞ is defined to contain an interval (''c'', ∞] where ''c'' ∈ R, and
*a neighborhood of ''a'' ∈ R is defined in the normal way for the metric space R.

In this case, $\overline{\R}$ is a topological space and any function of the form ''f'': ''X'' → ''Y'' with ''X'', ''Y'' ⊆ $\overline{\R}$ is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.

## Alternative notation

Many authors allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as R ∪ {−∞, +∞} and the projectively extended real line is R ∪ {∞}, where a neighborhood of ∞ is a set of the form {''x'' : |''x''| > ''c''}. The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, $x^{-1}$ does not possess a central limit (which is normal):

:$\lim_{x \to 0^+} \frac{1}{x} = +\infty, \quad \lim_{x \to 0^-} \frac{1}{x} = -\infty.$

In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so the central limit ''does'' exist in that context:

:$\lim_{x \to 0^+} \frac{1}{x} = \lim_{x \to 0^-} \frac{1}{x} = \lim_{x \to 0} \frac{1}{x} = \infty.$

In fact there are a plethora of conflicting formal systems in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of $\lim_{x \to 0^-} \frac{1}{x} = -\infty$; namely, it is convenient for $\lim_{x \to -\infty} \frac{1}{x} = -0$ to be considered true. Such zeroes can be seen as an approximation to infinitesimals.

## Limits at infinity for rational functions

There are three basic rules for evaluating limits at infinity for a rational function ''f''(''x'') = ''p''(''x'')/''q''(''x'') (where ''p'' and ''q'' are polynomials):

*If the degree of ''p'' is greater than the degree of ''q'', then the limit is positive or negative infinity depending on the signs of the leading coefficients;
*If the degrees of ''p'' and ''q'' are equal, the limit is the leading coefficient of ''p'' divided by the leading coefficient of ''q'';
*If the degree of ''p'' is less than the degree of ''q'', the limit is 0.

If the limit at infinity exists, it represents a horizontal asymptote at ''y'' = ''L''. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.
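The three rules reduce to a comparison of degrees and leading coefficients, which can be written out directly. A minimal sketch (the function name and coefficient convention are ours):

```python
def rational_limit_at_infinity(p_coeffs, q_coeffs):
    """Limit of p(x)/q(x) as x -> +infinity.

    Coefficients are listed from the leading term down, e.g.
    [2, 0, 1] means 2x^2 + 1. Assumes nonzero leading coefficients."""
    deg_p, deg_q = len(p_coeffs) - 1, len(q_coeffs) - 1
    lead_p, lead_q = p_coeffs[0], q_coeffs[0]
    if deg_p > deg_q:                  # numerator dominates
        return float('inf') if lead_p * lead_q > 0 else float('-inf')
    if deg_p == deg_q:                 # equal degrees: ratio of leading coefficients
        return lead_p / lead_q
    return 0.0                         # denominator dominates: asymptote y = 0

print(rational_limit_at_infinity([2, 0, 1], [4, -1, 3]))  # 0.5
print(rational_limit_at_infinity([1, 0], [1, 0, 0]))      # 0.0
print(rational_limit_at_infinity([-1, 0, 0], [2, 5]))     # -inf
```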

# Functions of more than one variable

## Ordinary limits

By noting that |''x'' − ''p''| represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function $f : S \times T \to \R$ defined on $S \times T \subseteq \R^2$, we define the limit as follows: the limit of ''f'' as (''x'', ''y'') approaches (''p'', ''q'') is ''L'', written

:$\lim_{(x, y) \to (p, q)} f(x, y) = L,$

if the following condition holds:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', whenever 0 < $\sqrt{(x - p)^2 + (y - q)^2}$ < ''δ'', we have |''f''(''x'', ''y'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left(0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \implies |f(x, y) - L| < \varepsilon\right).$

Here $\sqrt{(x - p)^2 + (y - q)^2}$ is the Euclidean distance between (''x'', ''y'') and (''p'', ''q''). (This can in fact be replaced by any norm ||(''x'', ''y'') − (''p'', ''q'')||, and be extended to any number of variables.)

For example, we may say

:$\lim_{(x, y) \to (0, 0)} \frac{x^4}{x^2 + y^2} = 0$

because for every ''ε'' > 0, we can take ''δ'' = $\sqrt{\varepsilon}$ such that for all real ''x'' ≠ 0 and real ''y'' ≠ 0, if 0 < $\sqrt{x^2 + y^2}$ < ''δ'', then |''f''(''x'', ''y'') − 0| < ''ε''. Similar to the case in single variable, the value of ''f'' at (''p'', ''q'') does not matter in this definition of limit.

For such a multivariable limit to exist, this definition requires the value of ''f'' to approach ''L'' along every possible path approaching (''p'', ''q''). In the above example, the function

:$f(x, y) = \frac{x^4}{x^2 + y^2}$

satisfies this condition. This can be seen by considering the polar coordinates (''x'', ''y'') = (''r'' cos ''θ'', ''r'' sin ''θ'') → (0, 0), which gives

:$\lim_{r \to 0} f(r \cos \theta, r \sin \theta) = \lim_{r \to 0} \frac{r^4 \cos^4 \theta}{r^2} = \lim_{r \to 0} r^2 \cos^4 \theta.$

Here ''θ'' = ''θ''(''r'') is a function of ''r'' which controls the shape of the path along which ''f'' approaches (''p'', ''q''). Since cos ''θ'' is bounded between −1 and 1, by the sandwich theorem this limit tends to 0. In contrast, the function

:$f(x, y) = \frac{xy}{x^2 + y^2}$

does not have a limit at (0, 0). Taking the path (''x'', ''y'') = (''t'', 0) → (0, 0), we obtain

:$\lim_{t \to 0} f(t, 0) = \lim_{t \to 0} \frac{0}{t^2} = 0,$

while taking the path (''x'', ''y'') = (''t'', ''t'') → (0, 0), we obtain

:$\lim_{t \to 0} f(t, t) = \lim_{t \to 0} \frac{t^2}{2t^2} = \frac{1}{2}.$

Since the two values do not agree, ''f'' does not tend to a single value as (''x'', ''y'') approaches (0, 0).
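The path test is easy to reproduce numerically: evaluate the path-dependent example near the origin along the two paths and compare. (We take that example to be ''f''(''x'', ''y'') = ''xy''/(''x''² + ''y''²), the standard choice consistent with the two path values 0 and 1/2; the sampling helper is ours.)

```python
def along_path(f, x_of_t, y_of_t, t=1e-6):
    """Evaluate f at a point on a parametrized path close to (0, 0)."""
    return f(x_of_t(t), y_of_t(t))

# assumed path-dependent example: f(x, y) = xy / (x^2 + y^2)
f = lambda x, y: x * y / (x**2 + y**2)

v1 = along_path(f, lambda t: t, lambda t: 0.0)  # along the x-axis
v2 = along_path(f, lambda t: t, lambda t: t)    # along the diagonal y = x
print(v1, v2)  # 0.0 0.5 -> the values disagree, so the limit does not exist
```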

## Multiple limits

Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let $f : S \times T \to \R$ be defined on $S \times T \subseteq \R^2$. We say the double limit of ''f'' as ''x'' approaches ''p'' and ''y'' approaches ''q'' is ''L'', written

:$\lim_{\substack{x \to p \\ y \to q}} f(x, y) = L,$

if the following condition holds:

:For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', whenever 0 < |''x'' − ''p''| < ''δ'' and 0 < |''y'' − ''q''| < ''δ'', we have |''f''(''x'', ''y'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left((0 < |x - p| < \delta) \land (0 < |y - q| < \delta) \implies |f(x, y) - L| < \varepsilon\right).$

For such a double limit to exist, this definition requires the value of ''f'' to approach ''L'' along every possible path approaching (''p'', ''q''), excluding the two lines ''x'' = ''p'' and ''y'' = ''q''. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals ''L'', then the multiple limit exists and also equals ''L''. Note that the converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example

:$f(x, y) = \begin{cases} 1 & \text{for } xy \ne 0 \\ 0 & \text{for } xy = 0 \end{cases}$

where

:$\lim_{\substack{x \to 0 \\ y \to 0}} f(x, y) = 1$

but

:$\lim_{(x, y) \to (0, 0)} f(x, y)$

does not exist. If the domain of ''f'' is restricted to $(S \setminus \{p\}) \times (T \setminus \{q\})$, then the two definitions of limits coincide.
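The counterexample can be probed directly: off the axes the function is identically 1, which is all the double limit sees, while the ordinary limit must also account for points on the axes, where the function is 0. A small illustration (sample points are arbitrary choices of ours):

```python
# the counterexample: 1 where xy != 0, and 0 on the two coordinate axes
f = lambda x, y: 1 if x * y != 0 else 0

# double-limit sampling avoids the lines x = 0 and y = 0 entirely:
print(f(1e-9, 1e-9), f(-1e-12, 1e-9))  # 1 1
# the ordinary limit also samples the axes, where f drops to 0:
print(f(1e-9, 0.0), f(0.0, 1e-9))      # 0 0
```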

## Multiple limits at infinity

The concept of multiple limit can be extended to the limit at infinity, in a way similar to that of a single-variable function. For $f : S \times T \to \R$, we say the double limit of ''f'' as ''x'' and ''y'' approach infinity is ''L'', written

:$\lim_{\substack{x \to \infty \\ y \to \infty}} f(x, y) = L,$

if the following condition holds:

:For every ''ε'' > 0, there exists a ''c'' > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', whenever ''x'' > ''c'' and ''y'' > ''c'', we have |''f''(''x'', ''y'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left((x > c) \land (y > c) \implies |f(x, y) - L| < \varepsilon\right).$

We say the double limit of ''f'' as ''x'' and ''y'' approach minus infinity is ''L'', written

:$\lim_{\substack{x \to -\infty \\ y \to -\infty}} f(x, y) = L,$

if the following condition holds:

:For every ''ε'' > 0, there exists a ''c'' > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', whenever ''x'' < −''c'' and ''y'' < −''c'', we have |''f''(''x'', ''y'') − ''L''| < ''ε''.

:$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left((x < -c) \land (y < -c) \implies |f(x, y) - L| < \varepsilon\right).$

## Pointwise limits and uniform limits

Let $f : S \times T \to \R$. Instead of taking limit as (''x'', ''y'') → (''p'', ''q''), we may consider taking the limit of just one variable, say, ''x'' → ''p'', to obtain a single-variable function of ''y'', namely $g : T \to \R$. In fact, this limiting process can be done in two distinct ways. The first one is called pointwise limit. We say the pointwise limit of ''f'' as ''x'' approaches ''p'' is ''g'', denoted :$\lim_f\left(x, y\right) = g\left(y\right)$, or :$\lim_f\left(x, y\right) = g\left(y\right) \;\; \text$. Alternatively, we may say ''f'' tends to ''g'' pointwise as ''x'' approaches ''p'', denoted :$f\left(x, y\right) \to g\left(y\right) \;\; \text \;\; x \to p$, or :$f\left(x, y\right) \to g\left(y\right) \;\; \text \;\; \text \;\; x \to p$. This limit exists if the following holds: : For every ''ε'' > 0 and every fixed ''y'' in ''T'', there exists a ''δ''(''ε'', ''y'') > 0 such that for all ''x'' in ''S'', whenever 0 < , ''x'' − ''p'', < ''δ'', we have , ''f''(''x'', ''y'') − ''g''(''y''), < ''ε''. :$\left(\forall \varepsilon > 0\right)\, \left(\forall y \in T\right) \, \left(\exists \delta> 0\right)\, \left(\forall x \in S\right)\, \left( 0 < , x-p, < \delta \implies , f\left(x, y\right) - g\left(y\right), < \varepsilon\right)$. Here, ''δ'' = ''δ''(''ε'', ''y'') is a function of both ''ε'' and ''y''. Each ''δ'' is chosen for a ''specific point'' of ''y''. Hence we say the limit is pointwise in ''y''. For example, :$f\left(x, y\right) = \frac$ has a pointwise limit of constant zero function :$\lim_f\left(x, y\right) = 0\left(y\right) \;\; \text$ because for every fixed ''y'', the limit is clearly 0. Note that this argument fails if ''y'' is not fixed: if ''y'' is very close to ''π''/2, the value of the fraction may deviate from 0. This leads to another definition of limit, namely the uniform limit. 
We say the uniform limit of ''f'' on ''T'' as ''x'' approaches ''p'' is ''g'', denoted :$\underset{x \to p}{\mathrm{unif\ lim}}\; f(x, y) = g(y)$, or :$\lim_{x \to p} f(x, y) = g(y) \;\; \text{uniformly on} \; T$. Alternatively, we may say ''f'' tends to ''g'' uniformly on ''T'' as ''x'' approaches ''p'', denoted :$f(x, y) \rightrightarrows g(y) \; \text{on} \; T \;\; \text{as} \;\; x \to p$, or :$f(x, y) \to g(y) \;\; \text{uniformly on}\; T \;\; \text{as} \;\; x \to p$. This limit exists if the following holds: : For every ''ε'' > 0, there exists a ''δ''(''ε'') > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', whenever 0 < |''x'' − ''p''| < ''δ'', we have |''f''(''x'', ''y'') − ''g''(''y'')| < ''ε''. :$\left(\forall \varepsilon > 0\right) \, \left(\exists \delta > 0\right)\, \left(\forall x \in S\right)\, \left(\forall y \in T\right)\, \left( 0 < |x-p| < \delta \implies |f\left(x, y\right) - g\left(y\right)| < \varepsilon\right)$. Here, ''δ'' = ''δ''(''ε'') is a function of only ''ε'' but not ''y''. In other words, ''δ'' is ''uniformly applicable'' to all ''y'' in ''T''. Hence we say the limit is uniform in ''y''. For example, :$f(x, y) = x \cos y$ has the constant zero function as its uniform limit :$\lim_{x \to 0} f(x, y) = 0(y) \;\; \text{uniformly on}\; \R$ because for all real ''y'', cos ''y'' is bounded between −1 and 1. Hence, no matter how ''y'' behaves, we may use the sandwich theorem to show that the limit is 0.
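
The contrast between the two modes of convergence can be probed numerically. The sketch below assumes the pointwise example from the text is ''f''(''x'', ''y'') = ''x''/cos ''y'' (which misbehaves near ''y'' = π/2) and uses the uniform example ''f''(''x'', ''y'') = ''x'' cos ''y'':

```python
import math

# Pointwise vs uniform convergence to 0 as x -> 0.
def f_pointwise(x, y):
    return x / math.cos(y)   # small for each fixed y, but not uniformly in y

def f_uniform(x, y):
    return x * math.cos(y)   # bounded by |x| for every y at once

x = 1e-6
# For each fixed y the pointwise example is already tiny...
print(abs(f_pointwise(x, 1.0)) < 1e-5)                     # True
# ...but choosing y close enough to pi/2 makes it large despite small x.
y_bad = math.pi / 2 - 1e-9
print(abs(f_pointwise(x, y_bad)) > 1.0)                    # True

# The uniform example satisfies sup_y |x cos y| <= |x| on any sample of y.
ys = [k * 0.1 for k in range(-60, 61)]
print(max(abs(f_uniform(x, y)) for y in ys) <= abs(x))     # True
```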

## Iterated limits

Let $f : S \times T \to \R$. We may take the limit in just one variable, say ''x'' → ''p'', to obtain a single-variable function of ''y'', namely $g : T \to \R$, and then take the limit in the other variable, ''y'' → ''q'', to get a number $L$. Symbolically, :$\lim_{y \to q} \lim_{x \to p} f(x, y) = \lim_{y \to q} g(y) = L$. This limit is known as the iterated limit of the multivariable function. Note that the order of taking limits may affect the result, i.e., :$\lim_{y \to q} \lim_{x \to p} f(x,y) \ne \lim_{x \to p} \lim_{y \to q} f(x, y)$ in general. A sufficient condition for equality is given by the Moore–Osgood theorem, which requires the limit $\lim_{x \to p} f(x, y) = g(y)$ to be uniform on ''T''.
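
A standard example (not from the text) where the two iterated limits disagree is ''f''(''x'', ''y'') = ''x''²/(''x''² + ''y''²) at (''p'', ''q'') = (0, 0); taking ''y'' → 0 first gives 1, taking ''x'' → 0 first gives 0:

```python
# Iterated limits can disagree: f(x, y) = x^2 / (x^2 + y^2).
def f(x, y):
    return x * x / (x * x + y * y)

# lim_{y->0} f(x, y) = 1 for each fixed x != 0, so lim_{x->0} lim_{y->0} f = 1.
inner_y_then_x = f(1e-8, 1e-12)   # y -> 0 first: take y << x
# lim_{x->0} f(x, y) = 0 for each fixed y != 0, so lim_{y->0} lim_{x->0} f = 0.
inner_x_then_y = f(1e-12, 1e-8)   # x -> 0 first: take x << y

print(round(inner_y_then_x, 6), round(inner_x_then_y, 6))  # 1.0 0.0
```
Consistent with the Moore–Osgood theorem, the convergence of ''f''(''x'', ''y'') → 0 as ''x'' → 0 here is not uniform in ''y'' near ''y'' = 0.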

# Functions on metric spaces

Suppose ''M'' and ''N'' are subsets of
metric spaces In mathematics, a metric space is a set together with a notion of ''distance'' between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are the most general settin ...
''A'' and ''B'', respectively, and ''f'' : ''M'' → ''N'' is defined between ''M'' and ''N'', with ''x'' ∈ ''M'', ''p'' a limit point of ''M'', and ''L'' ∈ ''N''. It is said that the limit of ''f'' as ''x'' approaches ''p'' is ''L'' and write :$\lim_{x \to p} f(x) = L$ if the following property holds: :For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all points ''x'' in ''M'', $0 < d_A(x, p) < \delta$ implies $d_B(f(x), L) < \varepsilon$. :$\left(\forall \varepsilon > 0 \right)\, \left(\exists \delta > 0\right) \,\left(\forall x \in M\right) \,\left(0 < d_A\left(x, p\right) < \delta \implies d_B\left(f\left(x\right), L\right) < \varepsilon\right)$. Again, note that ''p'' need not be in the domain of ''f'', nor does ''L'' need to be in the range of ''f'', and even if ''f''(''p'') is defined it need not be equal to ''L''.

## Euclidean metric

The limit in
Euclidean space Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, that is, in Euclid's ''Elements'', it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean ...
is a direct generalization of limits to
vector-valued functions A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could b ...
. For example, we may consider a function $f:S \times T \to \R^3$ such that :$f\left(x, y\right) = \left(f_1\left(x, y\right), f_2\left(x, y\right), f_3\left(x, y\right) \right)$. Then, under the usual
Euclidean metric In mathematics, the Euclidean distance between two points in Euclidean space is the length of a line segment between the two points. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, therefore ...
, :$\lim_{(x, y) \to (p, q)} f(x, y) = (L_1, L_2, L_3)$ if the following holds: :For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'' and ''y'' in ''T'', $0 < \sqrt{(x-p)^2 + (y-q)^2} < \delta$ implies $\sqrt{(f_1 - L_1)^2 + (f_2 - L_2)^2 + (f_3 - L_3)^2} < \varepsilon$. :$\left(\forall \varepsilon > 0 \right)\, \left(\exists \delta > 0\right) \, \left(\forall x \in S\right) \, \left(\forall y \in T\right)\, \left(0 < \sqrt{(x-p)^2 + (y-q)^2} < \delta \implies \sqrt{(f_1 - L_1)^2 + (f_2 - L_2)^2 + (f_3 - L_3)^2} < \varepsilon\right)$. In this example, the function concerned is a finite-
dimension In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordin ...
vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector of componentwise limits: :$\lim_{(x, y) \to (p, q)} \left(f_1(x, y), f_2(x, y), f_3(x, y)\right) = \left(\lim_{(x, y) \to (p, q)} f_1(x, y), \lim_{(x, y) \to (p, q)} f_2(x, y), \lim_{(x, y) \to (p, q)} f_3(x, y)\right)$.

## Manhattan metric

One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider $f:S \to \R^2$ such that :$f(x) = (f_1(x), f_2(x))$. Then, under the Manhattan metric, :$\lim_{x \to p} f(x) = (L_1, L_2)$ if the following holds: :For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'', $0 < |x - p| < \delta$ implies $|f_1 - L_1| + |f_2 - L_2| < \varepsilon$. :$\left(\forall \varepsilon > 0 \right)\, \left(\exists \delta > 0\right) \,\left(\forall x \in S\right) \,\left(0 < |x - p| < \delta \implies |f_1 - L_1| + |f_2 - L_2| < \varepsilon\right)$. Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.

## Uniform metric

Finally, we will discuss the limit in
function space In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set into a ve ...
, which is infinite-dimensional. Consider a function ''f''(''x'', ''y'') in the function space $S \times T \to \R$. We want to find out how, as ''x'' approaches ''p'', ''f''(''x'', ''y'') will tend to another function ''g''(''y''), which is in the function space $T \to \R$. The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of ''f'' on ''T'' as ''x'' approaches ''p'' is ''g'' and write :$\underset{x \to p}{\mathrm{unif\ lim}}\; f(x, y) = g(y)$, or :$\lim_{x \to p} f(x, y) = g(y) \;\; \text{uniformly on} \; T$, if the following holds: :For every ''ε'' > 0, there exists a ''δ'' > 0 such that for all ''x'' in ''S'', $0 < |x - p| < \delta$ implies $\sup_{y \in T} |f(x, y) - g(y)| < \varepsilon$. :$\left(\forall \varepsilon > 0 \right)\, \left(\exists \delta > 0\right) \,\left(\forall x \in S\right) \,\left(0 < |x-p| < \delta \implies \sup_{y \in T} | f\left(x, y\right) - g\left(y\right) | < \varepsilon\right)$. In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.

# Functions on topological spaces

Suppose ''X'',''Y'' are
topological space In mathematics, a topological space is, roughly speaking, a geometrical space in which closeness is defined but cannot necessarily be measured by a numeric distance. More specifically, a topological space is a set whose elements are called poi ...
s with ''Y'' a
Hausdorff space In topology and related branches of mathematics, a Hausdorff space ( , ), separated space or T2 space is a topological space where, for any two distinct points, there exist neighbourhoods of each which are disjoint from each other. Of the man ...
. Let ''p'' be a limit point of Ω ⊆ ''X'', and ''L'' ∈''Y''. For a function ''f'' : Ω → ''Y'', it is said that the limit of ''f'' as ''x'' approaches ''p'' is ''L'', written :$\lim_f\left(x\right) = L$, if the following property holds: :For every open
neighborhood A neighbourhood (British English, Irish English, Australian English and Canadian English) or neighborhood (American English; see spelling differences) is a geographically localised community within a larger city, town, suburb or rural ...
''V'' of ''L'', there exists an open neighborhood ''U'' of ''p'' such that ''f''(''U'' ∩ Ω − {''p''}) ⊆ ''V''. This last part of the definition can also be phrased "there exists an open punctured neighbourhood ''U'' of ''p'' such that ''f''(''U'' ∩ Ω) ⊆ ''V'' ". Note that the domain of ''f'' does not need to contain ''p''. If it does, then the value of ''f'' at ''p'' is irrelevant to the definition of the limit. In particular, if the domain of ''f'' is ''X'' − {''p''} (or all of ''X''), then the limit of ''f'' as ''x'' → ''p'' exists and is equal to ''L'' if, for all subsets Ω of ''X'' with limit point ''p'', the limit of the restriction of ''f'' to Ω exists and is equal to ''L''. Sometimes this criterion is used to establish the ''non-existence'' of the two-sided limit of a function on R by showing that the
one-sided limit In calculus, a one-sided limit refers to either one of the two limits of a function f(x) of a real variable x as x approaches a specified point either from the left or from the right. The limit as x decreases in value approaching a (x approache ...
s either fail to exist or do not agree. Such a view is fundamental in the field of
general topology In mathematics, general topology is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geome ...
, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets. Alternatively, the requirement that ''Y'' be a Hausdorff space can be relaxed to the assumption that ''Y'' be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about ''the limit'' of a function at a point, but rather ''a limit'' or ''the set of limits'' at a point. A function is continuous at a limit point ''p'' of, and in, its domain if and only if ''f''(''p'') is ''the'' (or, in the general case, ''a'') limit of ''f''(''x'') as ''x'' tends to ''p''. There is another type of limit of a function, namely the sequential limit. Let ''f'' : ''X'' → ''Y'' be a mapping from a topological space ''X'' into a Hausdorff space ''Y'', ''p'' a limit point of ''X'', and ''L'' ∈ ''Y''. The sequential limit of ''f'' as ''x'' tends to ''p'' is ''L'' if :For every
sequence In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called ''elements'', or ''terms''). The number of elements (possibly infinite) is call ...
(''x''<sub>''n''</sub>) in ''X'' − {''p''} that converges to ''p'', the sequence ''f''(''x''<sub>''n''</sub>) converges to ''L''. If ''L'' is the limit (in the sense above) of ''f'' as ''x'' approaches ''p'', then it is a sequential limit as well; however, the converse need not hold in general. If in addition ''X'' is metrizable, then ''L'' is the sequential limit of ''f'' as ''x'' approaches ''p'' if and only if it is the limit (in the sense above) of ''f'' as ''x'' approaches ''p''.

# Other characterizations

## In terms of sequences

For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting: :$\lim_{x \to a} f(x) = L$ if, and only if, for all sequences $x_n$ (with $x_n$ not equal to ''a'' for all ''n'') converging to $a$, the sequence $f(x_n)$ converges to $L$. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the
axiom of choice In mathematics, the axiom of choice, or AC, is an axiom of set theory equivalent to the statement that ''a Cartesian product of a collection of non-empty sets is non-empty''. Informally put, the axiom of choice says that given any collection ...
. Note that defining what it means for a sequence $x_n$ to converge to $a$ requires the epsilon–delta method. Similarly to the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let ''f'' be a real-valued function with the domain ''Dm''(''f''). Let ''a'' be the limit of a sequence of elements of ''Dm''(''f'') \ {''a''}. Then the limit (in this sense) of ''f'' is ''L'' as ''x'' approaches ''a'' if for every sequence $x_n$ ∈ ''Dm''(''f'') \ {''a''} (so that for all ''n'', $x_n$ is not equal to ''a'') that converges to ''a'', the sequence $f(x_n)$ converges to $L$. This is the same as the definition of a sequential limit in the preceding section, obtained by regarding the subset ''Dm''(''f'') of R as a metric space with the induced metric.
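
Heine's criterion is often used negatively: to show a limit does not exist, exhibit two sequences converging to the point whose image sequences converge to different values. A standard sketch (not from the text) is sin(1/''x'') at 0:

```python
import math

# sin(1/x) has no limit as x -> 0: two sequences converging to 0
# give f-values with different limits.
def f(x):
    return math.sin(1.0 / x)

xs_zero = [1.0 / (k * math.pi) for k in range(1, 5)]                     # f -> 0
xs_one = [1.0 / (2 * k * math.pi + math.pi / 2) for k in range(1, 5)]    # f -> 1

print([round(f(x), 6) for x in xs_zero])  # all ~0
print([round(f(x), 6) for x in xs_one])   # all ~1
```
Since the two image sequences converge to 0 and 1 respectively, no single ''L'' satisfies the Heine definition.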

## In non-standard calculus

In non-standard calculus the limit of a function is defined by: :$\lim_{x \to a} f(x) = L$ if and only if for all $x \in \R^*$, $f^*(x) - L$ is infinitesimal whenever $x - a$ is infinitesimal. Here $\R^*$ are the
hyperreal number In mathematics, the system of hyperreal numbers is a way of treating infinite and infinitesimal (infinitely small but non-zero) quantities. The hyperreals, or nonstandard reals, *R, are an extension of the real numbers R that contains numbe ...
s and $f^*$ is the natural extension of ''f'' to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε–δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε–δ methods cannot be realized in full. Błaszczyk et al. detail the usefulness of
microcontinuity In nonstandard analysis, a discipline within classical mathematics, microcontinuity (or ''S''-continuity) of an internal function ''f'' at a point ''a'' is defined as follows: :for all ''x'' infinitely close to ''a'', the value ''f''(''x'') is in ...
in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".

## In terms of nearness

At the 1908 international congress of mathematicians F. Riesz introduced an alternate way of defining limits and continuity using a concept called "nearness". A point $x$ is defined to be near a set $A\subseteq \mathbb{R}$ if for every $r > 0$ there is a point $a \in A$ so that $|x - a| < r$.

# Relationship to continuity

The notion of the limit of a function is very closely related to the concept of continuity. A function ''f'' is said to be continuous at ''c'' if it is both defined at ''c'' and its value at ''c'' equals the limit of ''f'' as ''x'' approaches ''c'': : $\lim_{x \to c} f(x) = f(c).$ (We have here assumed that ''c'' is a limit point of the domain of ''f''.)

# Properties

If a function ''f'' is real-valued, then the limit of ''f'' at ''p'' is ''L'' if and only if both the right-handed limit and left-handed limit of ''f'' at ''p'' exist and are equal to ''L''. The function ''f'' is continuous at ''p'' if and only if the limit of ''f''(''x'') as ''x'' approaches ''p'' exists and is equal to ''f''(''p''). If ''f'' : ''M'' → ''N'' is a function between metric spaces ''M'' and ''N'', then this is equivalent to the condition that ''f'' transforms every sequence in ''M'' which converges towards ''p'' into a sequence in ''N'' which converges towards ''f''(''p''). If ''N'' is a
normed vector space In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers, on which a norm is defined. A norm is the formalization and the generalization to real vector spaces of the intuitive notion of "length" ...
, then the limit operation is linear in the following sense: if the limit of ''f''(''x'') as ''x'' approaches ''p'' is ''L'' and the limit of ''g''(''x'') as ''x'' approaches ''p'' is ''P'', then the limit of ''f''(''x'') + ''g''(''x'') as ''x'' approaches ''p'' is ''L'' + ''P''. If ''a'' is a scalar from the base field, then the limit of ''af''(''x'') as ''x'' approaches ''p'' is ''aL''. If ''f'' and ''g'' are real-valued (or complex-valued) functions, then taking the limit of an operation on ''f''(''x'') and ''g''(''x'') (e.g., $f+g$, $f-g$, $f\times g$, $f/g$, $f^g$) under certain conditions is compatible with the operation on the limits of ''f''(''x'') and ''g''(''x''). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite). :$\begin{aligned} \lim\limits_{x \to p} \left(f(x) + g(x)\right) & = \lim\limits_{x \to p} f(x) + \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} \left(f(x) - g(x)\right) & = \lim\limits_{x \to p} f(x) - \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} \left(f(x) \cdot g(x)\right) & = \lim\limits_{x \to p} f(x) \cdot \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} \left(f(x) / g(x)\right) & = \lim\limits_{x \to p} f(x) \Big/ \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} f(x)^{g(x)} & = \left(\lim\limits_{x \to p} f(x)\right)^{\lim\limits_{x \to p} g(x)} \end{aligned}$ These rules are also valid for one-sided limits, including when ''p'' is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules.
*''q'' + ∞ = ∞ if ''q'' ≠ −∞
*''q'' × ∞ = ∞ if ''q'' > 0
*''q'' × ∞ = −∞ if ''q'' < 0
*''q'' / ∞ = 0 if ''q'' ≠ ∞ and ''q'' ≠ −∞
*∞<sup>''q''</sup> = 0 if ''q'' < 0
*∞<sup>''q''</sup> = ∞ if ''q'' > 0
*''q''<sup>∞</sup> = 0 if 0 < ''q'' < 1
*''q''<sup>∞</sup> = ∞ if ''q'' > 1
*''q''<sup>−∞</sup> = ∞ if 0 < ''q'' < 1
*''q''<sup>−∞</sup> = 0 if ''q'' > 1
(see also Extended real number line). In other cases the limit on the left may still exist, although the right-hand side, called an ''
indeterminate form In calculus and other branches of mathematical analysis, limits involving an algebraic combination of functions in an independent variable may often be evaluated by replacing these functions by their limits; if the expression obtained after this s ...
'', does not allow one to determine the result. This depends on the functions ''f'' and ''g''. These indeterminate forms are:
* 0 / 0
* ±∞ / ±∞
* 0 × ±∞
* ∞ − ∞
* 0<sup>0</sup>
* ∞<sup>0</sup>
* 1<sup>±∞</sup>
See further L'Hôpital's rule below and
Indeterminate form In calculus and other branches of mathematical analysis, limits involving an algebraic combination of functions in an independent variable may often be evaluated by replacing these functions by their limits; if the expression obtained after this s ...
.
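
The algebraic limit theorem above can be checked numerically; a minimal sketch with hypothetical example functions ''f''(''x'') = ''x''² and ''g''(''x'') = 3''x'' at ''p'' = 2, where ''L'' = 4 and ''P'' = 6:

```python
# Numeric sketch of the algebraic limit theorem at p = 2.
def f(x):
    return x * x      # lim f = L = 4 at p = 2

def g(x):
    return 3.0 * x    # lim g = P = 6 at p = 2

p, L, P = 2.0, 4.0, 6.0
x = p + 1e-8          # a point close to p

assert abs((f(x) + g(x)) - (L + P)) < 1e-6   # sum rule
assert abs((f(x) * g(x)) - (L * P)) < 1e-6   # product rule
assert abs((f(x) / g(x)) - (L / P)) < 1e-6   # quotient rule (P != 0)
assert abs((f(x) ** g(x)) - (L ** P)) < 1e-3 # exponent rule (L > 0)
print("algebraic limit rules hold numerically near p")
```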

## Limits of compositions of functions

In general, from knowing that :$\lim_{y \to b} f(y) = c$ and $\lim_{x \to a} g(x) = b$, it does ''not'' follow that $\lim_{x \to a} f(g(x)) = c$. However, this "chain rule" does hold if one of the following ''additional'' conditions holds: *''f''(''b'') = ''c'' (that is, ''f'' is continuous at ''b''), or *''g'' does not take the value ''b'' near ''a'' (that is, there exists a $\delta > 0$ such that if $0 < |x - a| < \delta$ then $|g(x) - b| > 0$). As an example of this phenomenon, consider the following function that violates both additional restrictions: :$f(x) = g(x) = \begin{cases} 0 & \text{if } x \neq 0 \\ 1 & \text{if } x = 0 \end{cases}.$ Since the value at ''f''(0) is a
removable discontinuity Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set ...
, :$\lim_{x \to a} f(x) = 0$ for all $a$. Thus, the naïve chain rule would suggest that the limit of ''f''(''f''(''x'')) is 0. However, it is the case that :$f(f(x)) = \begin{cases} 1 & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$ and so :$\lim_{x \to a} f(f(x)) = 1$ for all $a$.
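
The counterexample from the text can be evaluated directly:

```python
# f(x) = 0 for x != 0 and f(0) = 1, as in the text's counterexample.
def f(x):
    return 0.0 if x != 0 else 1.0

xs = [1e-3, -1e-5, 1e-9]           # points near (but not equal to) 0
print([f(x) for x in xs])          # [0.0, 0.0, 0.0]: lim f = 0
print([f(f(x)) for x in xs])       # [1.0, 1.0, 1.0]: lim f(f(x)) = 1, not 0
```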

## Limits of special interest

### Rational functions

For $n$ a nonnegative integer and constants $a_0, a_1, \ldots, a_n$ and $b_0, b_1, \ldots, b_n$ with $b_n \neq 0$, *$\lim_{x \to \infty} \frac{a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0}{b_n x^n + b_{n-1} x^{n-1} + \cdots + b_0} = \frac{a_n}{b_n}$ This can be proven by dividing both the numerator and denominator by $x^n$. If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0.
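
A numeric sketch with hypothetical coefficients: for (3''x''² + ''x'' + 1)/(5''x''² − 2) the ratio of leading coefficients is 3/5 = 0.6.

```python
# Equal-degree rational function: limit at infinity is a_n / b_n = 3/5.
def ratio(x):
    return (3 * x ** 2 + x + 1) / (5 * x ** 2 - 2)

for x in (1e2, 1e4, 1e6):
    print(x, ratio(x))   # approaches 0.6
```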

### Trigonometric functions

*$\lim_{x \to 0} \frac{\sin x}{x} = 1$ *$\lim_{x \to 0} \frac{1 - \cos x}{x} = 0$
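
Both limits can be checked numerically by sampling ever-smaller ''x'':

```python
import math

# Numeric check of the two trigonometric limits as x -> 0.
for x in (1e-2, 1e-4, 1e-6):
    print(x, math.sin(x) / x, (1 - math.cos(x)) / x)
# sin(x)/x approaches 1; (1 - cos x)/x approaches 0
```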

### Exponential functions

*$\lim_{x \to 0} \left(1+x\right)^{\frac{1}{x}} = \lim_{x \to \infty} \left(1+\frac{1}{x}\right)^{x} = e$ *$\lim_{x \to 0} \frac{e^x - 1}{x} = 1$ *$\lim_{x \to 0} \frac{e^{ax} - 1}{bx} = \frac{a}{b}$ *$\lim_{x \to 0} \frac{c^{ax} - 1}{bx} = \frac{a}{b}\ln c$ *$\lim_{x \to 0^+} x^x = 1$
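
Two of these can be verified numerically at a small but nonzero ''x'':

```python
import math

# Numeric check of two exponential limits at x = 1e-8.
x = 1e-8
compound = (1 + x) ** (1 / x)     # tends to e
ratio = (math.exp(x) - 1) / x     # tends to 1
print(compound, ratio)
```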

### Logarithmic functions

*$\lim_{x \to 0} \frac{\ln(1 + x)}{x} = 1$ *$\lim_{x \to 0} \frac{\ln(1 + ax)}{bx} = \frac{a}{b}$ *$\lim_{x \to 0} \frac{\log_c(1 + ax)}{bx} = \frac{a}{b \ln c}$
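
The first of these is easy to check numerically:

```python
import math

# Numeric check of ln(1 + x)/x -> 1 as x -> 0.
# math.log1p computes ln(1 + x) accurately for tiny x.
x = 1e-8
value = math.log1p(x) / x
print(value)  # ~ 1
```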

## L'Hôpital's rule

This rule uses
derivative In mathematics, the derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. ...
s to find limits of the indeterminate forms 0/0 or ±∞/±∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions ''f''(''x'') and ''g''(''x''), defined over an
open interval In mathematics, a (real) interval is a set of real numbers that contains all real numbers lying between any two numbers of the set. For example, the set of numbers satisfying is an interval which contains , , and all numbers in between. Othe ...
''I'' containing the desired limit point ''c'', then if: # $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0,$ or $\lim_{x \to c} f(x) = \pm\infty$ and $\lim_{x \to c} g(x) = \pm\infty$, and # $f$ and $g$ are differentiable over $I \setminus \{c\}$, and # $g'(x) \neq 0$ for all $x \in I \setminus \{c\}$, and # $\lim_{x \to c} \frac{f'(x)}{g'(x)}$ exists, then: :$\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$. Normally, the first condition is the most important one. For example: $\lim_{x \to 0} \frac{\sin 2x}{\sin 3x} = \lim_{x \to 0} \frac{2\cos 2x}{3\cos 3x} = \frac{2 \cdot 1}{3 \cdot 1} = \frac{2}{3}.$
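
The example above (a 0/0 form, reconstructed here as sin 2''x''/sin 3''x'') can be sanity-checked numerically against the ratio of derivatives:

```python
import math

# L'Hopital check: sin(2x)/sin(3x) -> 2/3 as x -> 0,
# matching the derivative ratio 2cos(2x)/(3cos(3x)).
def ratio(x):
    return math.sin(2 * x) / math.sin(3 * x)

def derivative_ratio(x):
    return 2 * math.cos(2 * x) / (3 * math.cos(3 * x))

x = 1e-6
print(ratio(x), derivative_ratio(x), 2 / 3)  # all three agree closely
```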

## Summations and integrals

Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit. A short way to write the limit $\lim_{n \to \infty} \sum_{i=s}^n f(i)$ is $\sum_{i=s}^\infty f(i)$. Important examples of limits of sums such as these are
series Series may refer to: People with the name * Caroline Series (born 1951), English mathematician, daughter of George Series * George Series (1920–1995), English physicist Arts, entertainment, and media Music * Series, the ordered sets used i ...
. A short way to write the limit $\lim_{x \to \infty} \int_a^x f(t) \; dt$ is $\int_a^\infty f(t) \; dt$. A short way to write the limit $\lim_{x \to -\infty} \int_x^b f(t) \; dt$ is $\int_{-\infty}^b f(t) \; dt$.
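
As a concrete sketch of the summation shorthand, the geometric series $\sum_{i=0}^\infty (1/2)^i = 2$ is by definition the limit of its partial sums:

```python
# Partial sums tending to an infinite sum: sum_{i=0}^infty (1/2)^i = 2.
def partial_sum(n):
    return sum(0.5 ** i for i in range(n + 1))

for n in (5, 20, 60):
    print(n, partial_sum(n))   # approaches 2.0
```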
