Set Exponentiation
In mathematics, exponentiation, denoted b^n, is an operation involving two numbers: the ''base'', b, and the ''exponent'' or ''power'', n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n bases: b^n = \underbrace{b \times b \times \cdots \times b}_{n \text{ times}}. In particular, b^1 = b. The exponent is usually shown as a superscript to the right of the base as b^n or in computer code as b^n. This binary operation is often read as "b to the power n"; it may also be referred to as "b raised to the nth power", "the nth power of b", or, most briefly, "b to the n". The above definition of b^n immediately implies several properties, in particular the multiplication rule: (There are three common notations for multiplication: x\times y is most commonly used for explicit numbers and at a very elementary level; xy is most common when variables are used; x\cdot y is used for emphasizing that one talks of multiplication or when omitting the multiplication sign would be confusing.) \begin{align} b^n \times b^m & = \underbrace{b \times \cdots \times b}_{n \text{ times}} \times \underbrace{b \times \cdots \times b}_{m \text{ times}} \\[1ex] & = \underbrace{b \times \cdots \times b}_{n+m \text{ times}} \ =\ b^{n+m} . \end{align} That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero gives b^0 \times b^n = b^{0+n} = b^n, and, where b is non-zero, dividing both sides by b^n gives b^0 = b^n / b^n = 1. That is, the multiplication rule implies the definition b^0 = 1. A similar argument implies the definition for negative integer powers: b^{-n} = 1/b^n. That is, extending the multiplication rule gives b^{-n} \times b^n = b^{-n+n} = b^0 = 1; dividing both sides by b^n gives b^{-n} = 1/b^n. This also implies the definition for fractional powers: b^{n/m} = \sqrt[m]{b^n}. For example, b^{1/2} \times b^{1/2} = b^{1/2+1/2} = b^1 = b, meaning (b^{1/2})^2 = b, which is the definition of square root: b^{1/2} = \sqrt{b}. 
The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to define b^x for any positive real base b and any real number exponent x. More involved definitions allow complex base and exponent, as well as certain types of matrices as base or exponent. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.


Etymology

The term ''exponent'' originates from the Latin ''exponentem'', the present participle of ''exponere'', meaning "to put forth". The term ''power'' is a mistranslation of the ancient Greek δύναμις (''dúnamis'', here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios. The word ''exponent'' was coined in 1544 by Michael Stifel. In the 16th century, Robert Recorde used the terms "square", "cube", "zenzizenzic" (fourth power), "sursolid" (fifth), "zenzicube" (sixth), "second sursolid" (seventh), and "zenzizenzizenzic" (eighth). "Biquadrate" has also been used to refer to the fourth power.


History

In ''The Sand Reckoner'', Archimedes proved the law of exponents, 10^a \cdot 10^b = 10^{a+b}, necessary to manipulate powers of 10. He then used powers of 10 to estimate the number of grains of sand that can be contained in the universe. In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (''māl'', "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (''kaʿbah'', "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters ''mīm'' (m) and ''kāf'' (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi. Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi used Roman numerals for exponents in a way similar to that of Chuquet. In 1636, James Hume used in essence modern notation in ''L'algèbre de Viète''. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled ''La Géométrie''; there, the notation is introduced in Book I. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication; thus they would write polynomials, for example, as ax + bxx + cx^3 + d. Samuel Jeake introduced the term ''indices'' in 1696. The term ''involution'' was used synonymously with the term ''indices'', but had declined in usage and should not be confused with its more common meaning. 
In 1748, Leonhard Euler introduced variable exponents and, implicitly, non-integer exponents.


20th century

As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For example, Konrad Zuse introduced floating-point arithmetic in his 1938 computer Z1. One register contained a representation of the leading digits, and a second contained a representation of the exponent of 10. Earlier, Leonardo Torres Quevedo contributed ''Essays on Automation'' (1914), which had suggested the floating-point representation of numbers. The more flexible decimal floating-point representation was introduced in 1946 with a Bell Laboratories computer. Eventually educators and engineers adopted scientific notation of numbers, consistent with common reference to order of magnitude in a ratio scale. For instance, in 1961 the School Mathematics Study Group developed the notation in connection with units used in the metric system. Exponents also came to be used to describe units of measurement and quantity dimensions. For instance, since force is mass times acceleration, it is measured in kg⋅m/s^2. Using M for mass, L for length, and T for time, the expression M L T^{-2} is used in dimensional analysis to describe force.


Terminology

The expression b^2 = b \cdot b is called "the square of b" or "b squared", because the area of a square with side-length b is b^2. (It could also be called "b to the second power", but "the square of b" and "b squared" are more traditional.) Similarly, the expression b^3 = b \cdot b \cdot b is called "the cube of b" or "b cubed", because the volume of a cube with side-length b is b^3. When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 \cdot 3 \cdot 3 \cdot 3 \cdot 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the ''5th power of 3'', or ''3 raised to the 5th power''. The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can be simply read "3 to the 5th", or "3 to the 5".


Integer exponents

The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations.


Positive exponents

The definition of exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon as one has an associative multiplication: the base case is b^1 = b and the recurrence is b^{n+1} = b^n \cdot b. The associativity of multiplication implies that for any positive integers m and n, b^{m+n} = b^m \cdot b^n, and (b^m)^n = b^{mn}.
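The inductive definition above can be transcribed directly into code. This is an illustrative sketch of the recurrence, not how production libraries implement exponentiation:

```python
def power(b, n):
    """Compute b**n for a positive integer n by iterated multiplication."""
    if n == 1:
        return b                  # base case: b^1 = b
    return power(b, n - 1) * b    # recurrence: b^n = b^(n-1) * b

# The multiplication rule b^(m+n) = b^m * b^n follows by associativity:
assert power(2, 3 + 4) == power(2, 3) * power(2, 4)
assert power(3, 5) == 243
```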


Zero exponent

As mentioned earlier, a (nonzero) number raised to the power 0 is 1: b^0 = 1. This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula b^{m+n} = b^m \cdot b^n also holds for n = 0. The case of 0^0 is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 0^0, but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context.


Negative exponents

Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b: b^{-n} = \frac{1}{b^n}. Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity (\infty). This definition of exponentiation with negative exponents is the only one that allows extending the identity b^{m+n} = b^m \cdot b^n to negative exponents (consider the case m = -n). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element x is standardly denoted x^{-1}.


Identities and properties

The following identities, often called ''exponent rules'', hold for all integer exponents, provided that the base is non-zero: \begin{align} b^m \cdot b^n &= b^{m+n} \\ \left(b^m\right)^n &= b^{mn} \\ b^n \cdot c^n &= (b \cdot c)^n \end{align} Unlike addition and multiplication, exponentiation is not commutative: for example, 2^3 = 8, but reversing the operands gives the different value 3^2 = 9. Also unlike addition and multiplication, exponentiation is not associative: for example, (2^3)^2 = 8^2 = 64, whereas 2^{(3^2)} = 2^9 = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or ''right''-associative), not bottom-up (or ''left''-associative). That is, b^{p^q} = b^{\left(p^q\right)}, which, in general, is different from \left(b^p\right)^q = b^{pq} .
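The exponent rules, and the failure of commutativity and associativity, can be checked numerically. A minimal sketch using Python's `**` operator, whose right-associativity matches the superscript convention:

```python
b, m, n, c = 2, 3, 4, 5

# The three exponent rules, for integer exponents and nonzero base:
assert b**m * b**n == b**(m + n)    # powers add under multiplication
assert (b**m)**n == b**(m * n)      # powers multiply under iteration
assert b**n * c**n == (b * c)**n    # same exponent distributes over a product

# Exponentiation is neither commutative nor associative:
assert 2**3 == 8 and 3**2 == 9                  # 2^3 != 3^2
assert (2**3)**2 == 64 and 2**(3**2) == 512     # grouping matters
# Python's ** is right-associative, matching the top-down convention:
assert 2**3**2 == 2**(3**2) == 512
```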


Powers of a sum

The powers of a sum can normally be computed from the powers of the summands by the binomial formula (a+b)^n = \sum_{i=0}^n \binom{n}{i} a^i b^{n-i} = \sum_{i=0}^n \frac{n!}{i!\,(n-i)!} a^i b^{n-i}. However, this formula is true only if the summands commute (i.e. that ab = ba), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation for exponentiation with non-commuting bases, which is then called non-commutative exponentiation.


Combinatorial interpretation

For nonnegative integers n and m, the value of n^m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples for particular values of m and n are given in the following table:
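The combinatorial interpretation can be verified by enumeration: each function from an m-element set to an n-element set is an m-tuple of values from the codomain, and there are n^m such tuples. A minimal sketch:

```python
from itertools import product

# n^m counts the functions from an m-element set to an n-element set;
# each function is an m-tuple of values drawn from the n-element codomain.
def count_functions(m, n):
    codomain = range(n)
    return sum(1 for _ in product(codomain, repeat=m))

assert count_functions(3, 2) == 2**3 == 8
assert count_functions(2, 5) == 5**2 == 25
assert count_functions(0, 7) == 1   # the empty tuple: one function from the empty set
```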


Particular bases


Powers of ten

In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^{-4} = 0.0001. Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 (the speed of light in vacuum, in metres per second) can be written as 2.99792458 \times 10^8 and then approximated as 2.998 \times 10^8. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.


Powers of two

The first negative powers of 2 have special names: 2^{-1} is a ''half''; 2^{-2} is a ''quarter''. Powers of 2 appear in set theory, since a set with n members has a power set, the set of all of its subsets, which has 2^n members. Integer powers of 2 are important in computer science. The positive integer powers 2^n give the number of possible values for an n-bit integer binary number; for example, a byte may take 2^8 = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point.


Powers of one

Every power of one equals one: 1^n = 1.


Powers of zero

For a positive exponent n, the nth power of zero is zero: 0^n = 0. For a negative exponent, 0^{-n} = 1/0^n = 1/0 is undefined. In some contexts (e.g., combinatorics), the expression 0^0 (zero to the power of zero) is defined to be equal to 1; in others (e.g., analysis), it is often undefined.


Powers of negative one

Since a negative number times another negative number is positive, we have:
(-1)^n = \begin{cases} 1 & \text{if } n \text{ is even}, \\ -1 & \text{if } n \text{ is odd}. \end{cases}


Irrationality and transcendence

If b is a positive real algebraic number, and x is a rational number, then b^x is an algebraic number. This results from the theory of algebraic extensions. This remains true if b is any algebraic number, in which case, all values of b^x (as a multivalued function) are algebraic. If x is irrational (that is, ''not rational''), and both b and x are algebraic, the Gelfond–Schneider theorem asserts that all values of b^x are transcendental (that is, not algebraic), except if b equals 0 or 1. In other words, if x is irrational and b \not\in \{0, 1\}, then at least one of b, x, and b^x is transcendental.


Integer powers in algebra

The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication. (More generally, power associativity is sufficient for the definition.) The definition of x^0 requires further the existence of a multiplicative identity. An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by 1 is a monoid. In such a monoid, exponentiation of an element x is defined inductively by
* x^0 = 1,
* x^{n+1} = x x^n for every nonnegative integer n.
If n is a negative integer, x^n is defined only if x has a multiplicative inverse. In this case, the inverse of x is denoted x^{-1}, and x^n is defined as \left(x^{-1}\right)^{-n}. Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure, and m and n integers: \begin{align} x^0&=1\\ x^{m+n}&=x^m x^n\\ (x^m)^n&=x^{mn}\\ (xy)^n&=x^n y^n \quad \text{if } xy=yx. \end{align} These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, and square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure. When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. For example, if f is a real function whose values can be multiplied, f^n denotes the exponentiation with respect to multiplication, and f^{\circ n} may denote exponentiation with respect to function composition. That is, (f^n)(x)=(f(x))^n=f(x)\,f(x)\cdots f(x), and (f^{\circ n})(x)=f(f(\cdots f(f(x))\cdots)). Commonly, (f^n)(x) is denoted f(x)^n, while (f^{\circ n})(x) is denoted f^n(x).


In a group

A multiplicative group is a set with an associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse. So, if G is a group, x^n is defined for every x \in G and every integer n. The set of all powers of an element of a group forms a subgroup. A group (or subgroup) that consists of all powers of a specific element x is the cyclic group generated by x. If all the powers of x are distinct, the group is isomorphic to the additive group \Z of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then x^n = x^0 = 1, and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1). Orders of elements play a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the ''order'' of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups. Superscript notation is also used for conjugation; that is, g^h = h^{-1}gh, where g and h are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely (g^h)^k=g^{hk} and (gh)^k=g^kh^k.
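The order of an element can be computed by repeated multiplication until the identity is reached. A minimal sketch for the multiplicative group of integers modulo a prime p (the choice of p = 7 is illustrative):

```python
# The order of x in a finite group: the least n >= 1 with x^n = identity.
def order_mod(x, p):
    n, y = 1, x % p
    while y != 1:
        y = (y * x) % p
        n += 1
    return n

# In the group modulo 7 (6 elements), every order divides 6 (Lagrange):
assert order_mod(3, 7) == 6   # 3 generates the whole cyclic group
assert order_mod(2, 7) == 3   # 2^3 = 8 = 1 (mod 7)
assert all(6 % order_mod(x, 7) == 0 for x in range(1, 7))
```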


In a ring

In a ring, it may occur that some nonzero elements satisfy x^n=0 for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring. If the nilradical is reduced to the zero ideal (that is, if x\neq 0 implies x^n\neq 0 for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring. More generally, given an ideal I in a commutative ring R, the set of the elements of R that have a power in I is an ideal, called the radical of I. The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical. In a polynomial ring k[x_1, \ldots, x_n] over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz).


Matrices and linear operators

If A is a square matrix, then the product of A with itself n times is called the matrix power A^n. Also A^0 is defined to be the identity matrix, and if A is invertible, then A^{-n} = \left(A^{-1}\right)^n. Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system. This is the standard interpretation of a Markov chain, for example. Then A^2x is the state of the system after two time steps, and so forth: A^nx is the state of the system after n time steps. The matrix power A^n is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d/dx, which is a linear operator acting on functions f(x) to give a new function (d/dx)f(x) = f'(x). The nth power of the differentiation operator is the nth derivative: \left(\frac{d}{dx}\right)^n f(x) = \frac{d^n}{dx^n} f(x) = f^{(n)}(x). These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. 
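The Markov-chain interpretation above can be sketched with plain-list matrix helpers (the particular two-state transition matrix is an illustrative assumption, not from the text):

```python
# Matrix powers drive a discrete dynamical system: if A is the transition
# matrix, A^n applied to the state advances the system n time steps.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, n):
    # A^0 is the identity matrix; A^n = A^(n-1) * A otherwise.
    result = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        result = matmul(result, A)
    return result

# A toy two-state Markov chain: rows are the current state, columns the next.
A = [[0.9, 0.1],
     [0.5, 0.5]]
state = [[1.0, 0.0]]                    # start surely in state 0
after2 = matmul(state, matpow(A, 2))    # distribution after two time steps
assert abs(sum(after2[0]) - 1.0) < 1e-12   # still a probability vector
```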
The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.


Finite fields

A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite. A ''finite field'' is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form q = p^k, where p is a prime number, and k is a positive integer. For every such q, there are fields with q elements. The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted \mathbb F_q. One has x^q = x for every x \in \mathbb F_q. A primitive element in \mathbb F_q is an element g such that the set of the q-1 first powers of g equals the set of the nonzero elements of \mathbb F_q. There are \varphi(q-1) primitive elements in \mathbb F_q, where \varphi is Euler's totient function. In \mathbb F_q, the freshman's dream identity (x+y)^p = x^p + y^p is true for the exponent p. As x^p = x in \mathbb F_p, it follows that the map \begin{align} F\colon{} & \mathbb F_q \to \mathbb F_q \\ & x \mapsto x^p \end{align} is linear over \mathbb F_p, and is a field automorphism, called the Frobenius automorphism. If q = p^k, the field \mathbb F_q has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of \mathbb F_q is cyclic of order k, generated by the Frobenius automorphism. The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. 
It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in \mathbb F_q, then g^e can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from g^e if q is sufficiently large.
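A toy Diffie–Hellman exchange can be sketched with Python's three-argument pow, which performs fast modular exponentiation. The modulus and base below are illustrative toy parameters (the base is not claimed to be a primitive element), not production-grade ones:

```python
import secrets

# Toy Diffie-Hellman: exponentiation mod p is cheap, while recovering
# the exponent from g**e mod p (the discrete logarithm) is believed hard.
p = 2**61 - 1          # a Mersenne prime (toy-sized modulus, assumed here)
g = 5                  # illustrative base, assumed for this sketch

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)                      # Alice sends g^a mod p
B = pow(g, b, p)                      # Bob sends g^b mod p

# Both sides compute the same shared secret g^(ab) mod p:
assert pow(B, a, p) == pow(A, b, p) == pow(g, a * b, p)
```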


Powers of sets

The Cartesian product of two sets S and T is the set of the ordered pairs (x,y) such that x \in S and y \in T. This operation is neither properly commutative nor associative, but has these properties up to canonical isomorphisms, which allow identifying, for example, (x,(y,z)), ((x,y),z), and (x,y,z). This allows defining the nth power S^n of a set S as the set of all n-tuples (x_1, \ldots, x_n) of elements of S. When S is endowed with some structure, it is frequent that S^n is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example \R^n (where \R denotes the real numbers) denotes the Cartesian product of n copies of \R, as well as their direct product as vector spaces, topological spaces, rings, etc.


Sets as exponents

An n-tuple (x_1, \ldots, x_n) of elements of S can be considered as a function from \{1, \ldots, n\} to S. This generalizes to the following notation. Given two sets S and T, the set of all functions from T to S is denoted S^T. This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying): (S^T)^U \cong S^{T \times U}, S^{T \sqcup U} \cong S^T \times S^U, where \times denotes the Cartesian product, and \sqcup the disjoint union. One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, \R^\N denotes the vector space of the infinite sequences of real numbers, and \R^{(\N)} the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma). In this context, 2 can represent the set \{0, 1\}. So, 2^S denotes the power set of S, that is the set of the functions from S to \{0, 1\}, which can be identified with the set of the subsets of S, by mapping each function to the inverse image of 1. This fits in with the exponentiation of cardinal numbers, in the sense that |2^S| = 2^{|S|}, where |S| is the cardinality of S.
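The identification of 2^S with the power set can be made concrete by enumerating the functions from S to \{0, 1\} and mapping each to the inverse image of 1:

```python
from itertools import product

# 2^S: functions from S to {0, 1} correspond to subsets of S, by mapping
# each function to the inverse image of 1.
S = ['a', 'b', 'c']
functions = product([0, 1], repeat=len(S))   # one 0/1 value per element of S
subsets = [{x for x, bit in zip(S, bits) if bit == 1} for bits in functions]

assert len(subsets) == 2**len(S) == 8        # |2^S| = 2^|S|
assert set() in subsets and {'a', 'b', 'c'} in subsets
```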


In category theory

In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It results that the set of the functions from X to Y that is denoted Y^X in the preceding section can also be denoted \hom(X,Y). The isomorphism (S^T)^U \cong S^{T \times U} can be rewritten \hom(U, S^T) \cong \hom(T \times U, S). This means the functor "exponentiation to the power T" is a right adjoint to the functor "direct product with T". This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor X \to X^T is, if it exists, a right adjoint to the functor Y \to T \times Y. A category is called a ''Cartesian closed category'', if direct products exist, and the functor Y \to X \times Y has a right adjoint for every X.


Repeated exponentiation

Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7625597484987 (= 3^{3^3} = 3^{27}) respectively.
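Tetration iterates exponentiation the way exponentiation iterates multiplication; towers are evaluated top-down (right-associatively). A minimal sketch:

```python
def tetration(b, n):
    """Compute b tetrated to height n: a tower of n copies of b."""
    result = 1
    for _ in range(n):
        result = b ** result   # extend the tower from the top down
    return result

# Evaluated at 3: addition, multiplication, exponentiation, tetration give
# 3+3 = 6, 3*3 = 9, 3**3 = 27, and 3**(3**3) = 3**27:
assert tetration(3, 2) == 27
assert tetration(3, 3) == 3**27 == 7625597484987
```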


Limits of powers

Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0. The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit. More precisely, consider the function f(x,y) = x^y defined on D = \{(x,y) \in \R^2 : x > 0\}. Then D can be viewed as a subset of \overline\R^2 (that is, the set of all pairs (x,y) with x, y belonging to the extended real number line \overline\R = [-\infty, +\infty], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0,0), (+\infty, 0), (1, +\infty), and (1, -\infty). Accordingly, this allows one to define the powers x^y by continuity whenever 0 \le x \le +\infty, -\infty \le y \le +\infty, except for 0^0, (+\infty)^0, 1^{+\infty}, and 1^{-\infty}, which remain indeterminate forms. Under this definition by continuity, we obtain:
* x^{+\infty} = +\infty and x^{-\infty} = 0, when 1 < x \le +\infty.
* x^{+\infty} = 0 and x^{-\infty} = +\infty, when 0 \le x < 1.
* 0^y = 0 and (+\infty)^y = +\infty, when 0 < y \le +\infty.
* 0^y = +\infty and (+\infty)^y = 0, when -\infty \le y < 0.
These powers are obtained by taking limits of x^y for ''positive'' values of x. This method does not permit a definition of x^y when x < 0, since pairs (x,y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition obtained above for negative n problematic when n is odd, since in this case x^n \to +\infty as x tends to 0 through positive values, but not negative ones.


Efficient computation with integer exponents

Computing b^n using iterated multiplication requires n-1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^{100}, apply Horner's rule to the exponent 100 written in binary: 100 = 2^2 + 2^5 + 2^6 = 2^2(1 + 2^3(1 + 2)). Then compute the terms in order, reading Horner's rule from right to left; this series of steps only requires 8 multiplications instead of 99. In general, the number of multiplication operations required to compute b^n can be reduced to \sharp n + \lfloor \log_2 n \rfloor - 1 by using exponentiation by squaring, where \sharp n denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by using minimal addition-chain exponentiation. Finding the ''minimal'' sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement.
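Exponentiation by squaring can be sketched by scanning the binary digits of the exponent, squaring the base at each step and multiplying into the result whenever the current bit is 1:

```python
def fast_pow(b, n):
    """Exponentiation by squaring: about log2(n) squarings plus one
    extra multiplication per 1-bit of n, instead of n - 1 naive steps."""
    result = 1
    while n > 0:
        if n & 1:            # current binary digit of the exponent is 1
            result *= b
        b *= b               # square the base
        n >>= 1              # move to the next binary digit
    return result

assert fast_pow(2, 100) == 2**100
assert fast_pow(3, 0) == 1   # empty product convention: b^0 = 1
```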


Iterated functions

Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted g \circ f, and defined as (g \circ f)(x) = g(f(x)) for every x in the domain of f. If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the ''nth iterate'' of the function. Thus f^n denotes generally the nth iterate of f; for example, f^3(x) means f(f(f(x))). When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration ''before'' the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication ''after'' the parentheses. Thus f^2(x) = f(f(x)), and f(x)^2 = f(x) \cdot f(x). When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f^{\circ 3} = f \circ f \circ f, and f^3 = f \cdot f \cdot f. For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, \sin^2 x and \sin^2(x) both mean \sin(x) \cdot \sin(x) and not \sin(\sin(x)), which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors. In this context, the exponent -1 always denotes the inverse function, if it exists. So \sin^{-1} x = \sin^{-1}(x) = \arcsin x. For the multiplicative inverse, fractions are generally used, as in 1/\sin(x) = \frac{1}{\sin x}.
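The two kinds of exponentiation on functions can be contrasted in a few lines: the nth iterate composes f with itself n times, while the pointwise power multiplies the function's value by itself:

```python
# The nth iterate f(f(...f(x)...)) versus the nth pointwise power f(x)**n.
def iterate(f, n):
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

f = lambda x: 2 * x + 1
assert iterate(f, 3)(1) == f(f(f(1))) == 15   # iteration: f composed 3 times
assert f(1) ** 3 == 27                        # pointwise power: f(1) cubed
```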


In programming languages

Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^). The original 1963 version of ASCII included an uparrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages. The notations include:
* x ^ y: AWK, BASIC, J, MATLAB, Wolfram Language (Mathematica), R, Microsoft Excel, Analytica, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, and most computer algebra systems.
* x ** y: The Fortran character set did not include lowercase characters or punctuation symbols other than +-*/()&=.,' and so used ** for exponentiation (the initial version used a xx b instead). Many other languages followed suit: Ada, Z shell, KornShell, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, Groovy, JavaScript, OCaml, ooRexx, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floating-point exponents), Turing, and VHDL.
* x ↑ y: Algol Reference language, Commodore BASIC, TRS-80 Level II/III BASIC.
* x ^^ y: Haskell (for fractional base, integer exponents), D.
* x⋆y: APL.
In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c). This is because (a^b)^c is equal to a^(b*c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language. Other programming languages use functional notation:
* (expt x y): Common Lisp.
* pown x y: F# (for integer base, integer exponent).
Still others only provide exponentiation as part of standard libraries:
* pow(x, y): C, C++ (in math library).
* Math.Pow(x, y): C#.
* math:pow(X, Y): Erlang.
* Math.pow(x, y): Java.
* [Math]::Pow(x, y): PowerShell.
In some statically typed languages that prioritize type safety, such as Rust, exponentiation is performed via a multitude of methods:
* x.pow(y) for x and y as integers
* x.powf(y) for x and y as floating-point numbers
* x.powi(y) for x as a float and y as an integer
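Python illustrates several of the styles listed above in one language: the right-associative ** operator, the built-in pow() (with an optional modulus argument), and math.pow(), which always returns a float:

```python
import math

# Right-associativity of the ** operator: a**b**c means a**(b**c).
assert 2 ** 3 ** 2 == 512            # 2 ** (3 ** 2)
assert (2 ** 3) ** 2 == 64           # explicit left grouping differs

# Functional notation via the built-in pow(), including the modular form:
assert pow(2, 10) == 1024
assert pow(2, 10, 1000) == 24        # 1024 mod 1000

# Library function: math.pow() works in floating point.
assert math.pow(2, 10) == 1024.0
```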



