A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases it is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties and can be based upon a rigorous axiomatic setup.

In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the ''sample space'') to a measurable space. This allows consideration of the pushforward measure, which is called the ''distribution'' of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent.

It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a discrete set (such as a finite set) or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a ''random variable'' is taken to be automatically valued in the real numbers, with more general random quantities instead being called ''random elements''.

According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables".


Definition

A random variable X is a measurable function X \colon \Omega \to E from a sample space \Omega as a set of possible outcomes to a measurable space E. The technical axiomatic definition requires the sample space \Omega to be a sample space of a probability triple (\Omega, \mathcal{F}, \operatorname{P}) (see the measure-theoretic definition). A random variable is often denoted by capital roman letters such as X, Y, Z, T. The probability that X takes on a value in a measurable set S\subseteq E is written as
: \operatorname{P}(X \in S) = \operatorname{P}(\{\omega \in \Omega \mid X(\omega) \in S\})
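As a minimal illustration of this definition for a finite sample space, the sketch below (Python; the coin outcomes and probabilities are illustrative assumptions, not taken from the text above) represents a random variable as an ordinary function on the outcomes and computes P(X \in S) by summing the probabilities of the outcomes mapped into S.

```python
# A random variable on a finite sample space, represented explicitly
# as a function from outcomes to real values (illustrative example).

omega = ["H", "T"]                      # sample space of a coin flip
prob = {"H": 0.5, "T": 0.5}             # probability measure P on omega

def X(outcome):
    """Random variable: +1 for heads, -1 for tails."""
    return 1 if outcome == "H" else -1

def prob_X_in(S):
    """P(X in S) = P({omega : X(omega) in S})."""
    return sum(prob[w] for w in omega if X(w) in S)

print(prob_X_in({1}))        # 0.5
print(prob_X_in({-1, 1}))    # 1.0
```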


Standard case

In many cases, X is real-valued, i.e. E = \mathbb{R}. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form.

When the image (or range) of X is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. it can be described by a probability mass function that assigns a probability to each value in the image of X. If the image is uncountably infinite (usually an interval) then X is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous; a mixture distribution is one such counterexample. Such random variables cannot be described by a probability density or a probability mass function.

Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.
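To make the discrete/absolutely-continuous distinction concrete, here is a brief sketch (Python; the particular distributions are illustrative assumptions): a discrete random variable is summarized by a probability mass function, an absolutely continuous one by a probability density, and both have a cumulative distribution function.

```python
import math

# Discrete: a fair six-sided die described by its probability mass function.
pmf = {k: 1/6 for k in range(1, 7)}
cdf_discrete = lambda x: sum(p for k, p in pmf.items() if k <= x)
print(round(cdf_discrete(3.5), 3))     # 0.5 -- the CDF steps only at the points 1..6

# Absolutely continuous: standard exponential described by its density.
pdf = lambda t: math.exp(-t) if t >= 0 else 0.0
def cdf_continuous(x, n=10_000):
    # crude numerical integration of the density up to x
    if x <= 0:
        return 0.0
    h = x / n
    return sum(pdf(i * h) * h for i in range(n))
print(round(cdf_continuous(1.0), 3))   # ~0.632, i.e. 1 - e**-1
# P(X = c) is zero for every single point c of a continuous random variable.
```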


Extensions

The term "random variable" in statistics is traditionally limited to the
real-valued In mathematics, value may refer to several, strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number such as or an i ...
case (E=\mathbb). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and
variance In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbe ...
of a random variable, its cumulative distribution function, and the moments of its distribution. However, the definition above is valid for any
measurable space In mathematics, a measurable space or Borel space is a basic object in measure theory. It consists of a set and a σ-algebra, which defines the subsets that will be measured. Definition Consider a set X and a σ-algebra \mathcal A on X. Then the ...
E of values. Thus one can consider random elements of other sets E, such as random boolean values, categorical values,
complex numbers In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted , called the imaginary unit and satisfying the equation i^= -1; every complex number can be expressed in the form ...
,
vector Vector most often refers to: *Euclidean vector, a quantity with a magnitude and a direction *Vector (epidemiology), an agent that carries and transmits an infectious pathogen into another living organism Vector may also refer to: Mathematic ...
s,
matrices Matrix most commonly refers to: * ''The Matrix'' (franchise), an American media franchise ** ''The Matrix'', a 1999 science-fiction action film ** "The Matrix", a fictional setting, a virtual reality environment, within ''The Matrix'' (franchis ...
,
sequence In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called ''elements'', or ''terms''). The number of elements (possibly infinite) is calle ...
s,
tree In botany, a tree is a perennial plant with an elongated stem, or trunk, usually supporting branches and leaves. In some usages, the definition of a tree may be narrower, including only woody plants with secondary growth, plants that are ...
s, sets,
shape A shape or figure is a graphical representation of an object or its external boundary, outline, or external surface, as opposed to other properties such as color, texture, or material type. A plane shape or plane figure is constrained to lie ...
s, manifolds, and
function Function or functionality may refer to: Computing * Function key, a type of key on computer keyboards * Function model, a structured representation of processes in a system * Function object or functor or functionoid, a concept of object-oriente ...
s. One may then specifically refer to a ''random variable of type E'', or an ''E-valued random variable''. This more general concept of a
random element In probability theory, random element is a generalization of the concept of random variable to more complicated spaces than the simple real line. The concept was introduced by who commented that the “development of probability theory and expansi ...
is particularly useful in disciplines such as
graph theory In mathematics, graph theory is the study of ''graphs'', which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of '' vertices'' (also called ''nodes'' or ''points'') which are conn ...
,
machine learning Machine learning (ML) is a field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks. It is seen as a part of artificial intelligence. Machine ...
, natural language processing, and other fields in discrete mathematics and
computer science Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (includi ...
, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of E, using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space \Omega, which allows the different random variables to covary). For example: *A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are (1 \ 0 \ 0 \ 0 \ \cdots), (0 \ 1 \ 0 \ 0 \ \cdots), (0 \ 0 \ 1 \ 0 \ \cdots) and the position of the 1 indicates the word. *A random sentence of given length N may be represented as a vector of N random words. *A
random graph In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs ...
on N given vertices may be represented as a N \times N matrix of random variables, whose values specify the
adjacency matrix In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the special case of a finite simp ...
of the random graph. *A random function F may be represented as a collection of random variables F(x), giving the function's values at the various points x in the function's domain. The F(x) are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a
random vector In probability, and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value ...
is a random function of some index set such as 1,2,\ldots, n, and
random field In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as \mathbb^n). That is, it is a function f(x) that takes on a random value at each point x \in \mathbb^n(or some other ...
is a random function on any set (typically time, space, or a discrete set).
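A minimal sketch, assuming a toy three-word vocabulary and a three-vertex graph (both illustrative, not taken from the text), of how a random word and a random graph can be represented with real-valued random variables as described in the list above.

```python
import random

# Random word as a one-hot indicator vector over an assumed toy vocabulary.
vocab = ["red", "green", "blue"]
index = random.randrange(len(vocab))            # random integer index
one_hot = [1 if i == index else 0 for i in range(len(vocab))]
print(vocab[index], one_hot)                    # e.g. 'green' [0, 1, 0]

# Random graph on N vertices as an N x N symmetric 0/1 adjacency matrix,
# each edge present independently with probability 1/2.
N = 3
A = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        A[i][j] = A[j][i] = random.randint(0, 1)
print(A)
```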


Distribution functions

If a random variable X\colon \Omega \to \mathbb{R} defined on the probability space (\Omega, \mathcal{F}, \operatorname{P}) is given, we can ask questions like "How likely is it that the value of X is equal to 2?". This is the same as the probability of the event \{\omega : X(\omega) = 2\}, which is often written as P(X = 2) or p_X(2) for short. Recording all these probabilities of outputs of a random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various output values of X. Such a probability distribution, if X is real-valued, can always be captured by its cumulative distribution function
:F_X(x) = \operatorname{P}(X \le x)
and sometimes also using a probability density function, f_X. In measure-theoretic terms, we use the random variable X to "push-forward" the measure P on \Omega to a measure p_X on \mathbb{R}. The measure p_X is called the "(probability) distribution of X" or the "law of X". The density is f_X = dp_X/d\mu, the Radon–Nikodym derivative of p_X with respect to some reference measure \mu on \mathbb{R} (often, this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables).

The underlying probability space \Omega is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space \Omega altogether and just puts a measure on \mathbb{R} that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
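The "push-forward" step can be illustrated with a small finite example (Python; the die outcomes are an illustrative assumption): the distribution p_X is obtained by transporting the probabilities on \Omega through X, after which \Omega itself is no longer needed.

```python
from collections import defaultdict

# Probability space: a fair die, outcomes 1..6 each with probability 1/6.
P = {w: 1/6 for w in range(1, 7)}

# Random variable X(w) = w mod 2 (1 for odd rolls, 0 for even rolls).
X = lambda w: w % 2

# Pushforward measure p_X on the set of values of X:
# p_X(B) = P(X^{-1}(B)), here recorded value by value.
p_X = defaultdict(float)
for w, p in P.items():
    p_X[X(w)] += p

print(dict(p_X))   # {1: 0.5, 0: 0.5} -- the distribution ("law") of X
```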


Examples


Discrete random variable

In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm.

Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum \operatorname{P}(X=0) + \operatorname{P}(X=2) + \operatorname{P}(X=4) + \cdots (a numerical sketch of such a sum is given below).

In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.

If \{a_n\}, \{b_n\} are countable sets of real numbers, b_n >0 and \sum_n b_n=1, then F=\sum_n b_n \delta_{a_n}(x) is a discrete distribution function. Here \delta_t(x) = 0 for x < t, \delta_t(x) = 1 for x \ge t. Taking for instance an enumeration of all rational numbers as \{a_n\}, one gets a discrete function that is not necessarily a step function (piecewise constant).
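A minimal numerical sketch of summing a PMF over an infinite event set, assuming (purely for illustration) that the number of children follows a Poisson distribution with mean 2; that assumption is not taken from the text above.

```python
import math

# Hypothetical PMF for the number of children: Poisson with mean 2.
mean = 2.0
pmf = lambda k: math.exp(-mean) * mean**k / math.factorial(k)

# P(even number of children) = P(X=0) + P(X=2) + P(X=4) + ...
# Truncate the infinite sum once the remaining terms are negligible.
p_even = sum(pmf(k) for k in range(0, 200, 2))
print(round(p_even, 6))   # ~0.509158; the exact value is (1 + e**-4) / 2
```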


Coin toss

The possible outcomes for one coin toss can be described by the sample space \Omega = \{\text{heads}, \text{tails}\}. We can introduce a real-valued random variable Y that models a $1 payoff for a successful bet on heads as follows:
: Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\ 0, & \text{if } \omega = \text{tails}. \end{cases}
If the coin is a fair coin, ''Y'' has a probability mass function f_Y given by:
: f_Y(y) = \begin{cases} \tfrac{1}{2}, & \text{if } y=1,\\ \tfrac{1}{2}, & \text{if } y=0. \end{cases}


Dice roll

A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers ''n''1 and ''n''2 from \{1, 2, 3, 4, 5, 6\} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable ''X'' given by the function that maps the pair to the sum:
: X((n_1, n_2)) = n_1 + n_2
and (if the dice are fair) has a probability mass function ''f''''X'' given by:
: f_X(S) = \frac{\min(S-1,\, 13-S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}
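A brief check of this probability mass function by enumerating the 36 equally likely pairs (Python; a sketch, not part of the original text).

```python
from fractions import Fraction
from itertools import product

# Enumerate the sample space of two fair dice and push forward to the sum.
pmf = {}
for n1, n2 in product(range(1, 7), repeat=2):
    s = n1 + n2
    pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, 36)

# Compare with the closed form f_X(S) = min(S - 1, 13 - S) / 36.
for s in range(2, 13):
    assert pmf[s] == Fraction(min(s - 1, 13 - s), 36)
print(pmf[7])   # 1/6, the most likely total
```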


Continuous random variable

Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value ''c'' (formally, \forall c \in \mathbb{R}:\; \Pr(X = c) = 0) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part.

An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, ''X'' = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any ''range'' of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability ''density'' of ''X'' is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.

More formally, given any interval I = [a, b] = \{x \in \mathbb{R} : a \le x \le b\}, a random variable X_I \sim \operatorname{U}(I) = \operatorname{U}[a, b] is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of X_I falling in any subinterval [c, d] \subseteq [a, b] is proportional to the length of the subinterval, that is, if a \le c \le d \le b, one has
: \Pr\left( X_I \in [c, d]\right) = \frac{d - c}{b - a}
where the last equality results from the unitarity axiom of probability. The probability density function of a CURV X \sim \operatorname{U}[a, b] is given by the indicator function of its interval of support normalized by the interval's length:
: f_X(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b \\ 0, & \text{otherwise}. \end{cases}
Of particular interest is the uniform distribution on the unit interval [0, 1]. Samples of any desired probability distribution \operatorname{D} can be generated by calculating the quantile function of \operatorname{D} on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.


Mixed type

A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous. It can be realized as a mixture of a discrete random variable and a continuous random variable, in which case the CDF will be the weighted average of the CDFs of the component variables.

An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, ''X'' = −1; otherwise ''X'' = the value of the spinner as in the preceding example. There is a probability of 1/2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example.

Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).
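A short simulation sketch (Python) of this coin-plus-spinner construction, estimating the CDF of the mixed random variable ''X''; the jump of size 1/2 at −1 reflects its discrete part, the remainder its continuous part.

```python
import random

def sample_mixed():
    """Flip a fair coin; on tails return -1, on heads spin the [0, 360) spinner."""
    if random.random() < 0.5:
        return -1.0
    return random.uniform(0.0, 360.0)

samples = [sample_mixed() for _ in range(100_000)]
cdf = lambda x: sum(s <= x for s in samples) / len(samples)

print(round(cdf(-1.0), 2))   # ~0.50: jump of size 1/2 at x = -1 (discrete part)
print(round(cdf(180.0), 2))  # ~0.75: 1/2 + (1/2)*(180/360) (continuous part)
```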


Measure-theoretic definition

The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals.

The measure-theoretic definition is as follows. Let (\Omega, \mathcal{F}, P) be a probability space and (E, \mathcal{E}) a measurable space. Then an (E, \mathcal{E})-valued random variable is a measurable function X\colon \Omega \to E, which means that, for every subset B\in\mathcal{E}, its preimage is \mathcal{F}-measurable; X^{-1}(B)\in \mathcal{F}, where X^{-1}(B) = \{\omega : X(\omega)\in B\}. This definition enables us to measure any subset B\in \mathcal{E} in the target space by looking at its preimage, which by assumption is measurable.

In more intuitive terms, a member of \Omega is a possible outcome, a member of \mathcal{F} is a measurable subset of possible outcomes, the function P gives the probability of each such measurable subset, E represents the set of values that the random variable can take (such as the set of real numbers), and a member of \mathcal{E} is a "well-behaved" (measurable) subset of E (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability.

When E is a topological space, then the most common choice for the σ-algebra \mathcal{E} is the Borel σ-algebra \mathcal{B}(E), which is the σ-algebra generated by the collection of all open sets in E. In such case the (E, \mathcal{E})-valued random variable is called an E-valued random variable. Moreover, when the space E is the real line \mathbb{R}, then such a real-valued random variable is called simply a random variable.


Real-valued random variables

In this case the observation space is the set of real numbers. Recall, (\Omega, \mathcal{F}, P) is the probability space. For a real observation space, the function X\colon \Omega \rightarrow \mathbb{R} is a real-valued random variable if
:\{\omega : X(\omega) \le r\} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.
This definition is a special case of the above because the set \{(-\infty, r] : r \in \mathbb{R}\} generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that \{\omega : X(\omega) \le r\} = X^{-1}((-\infty, r]).


Moments

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted \operatorname{E}[X], and also called the first moment. In general, \operatorname{E}[f(X)] is not equal to f(\operatorname{E}[X]). Once the "average value" is known, one could then ask how far from this average value the values of X typically are, a question that is answered by the variance and standard deviation of a random variable. \operatorname{E}[X] can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of X.

Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables X, find a collection \{f_i\} of functions such that the expectation values \operatorname{E}[f_i(X)] fully characterise the distribution of the random variable X.

Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function f(X)=X of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable ''X'' that can take on the nominal values "red", "blue" or "green", the real-valued function [X = \text{green}] can be constructed; this uses the Iverson bracket, and has the value 1 if X has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.


Functions of random variables

A new random variable ''Y'' can be defined by applying a real Borel measurable function g\colon \mathbb{R} \rightarrow \mathbb{R} to the outcomes of a real-valued random variable X. That is, Y=g(X). The cumulative distribution function of Y is then
:F_Y(y) = \operatorname{P}(g(X) \le y).
If function g is invertible (i.e., h = g^{-1} exists, where h is g's inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain
:F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le h(y)) = F_X(h(y)), & \text{if } h = g^{-1} \text{ increasing}, \\ \operatorname{P}(X \ge h(y)) = 1 - F_X(h(y)), & \text{if } h = g^{-1} \text{ decreasing}. \end{cases}
With the same hypotheses of invertibility of g, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to y, in order to obtain
:f_Y(y) = f_X\bigl(h(y)\bigr) \left| \frac{dh(y)}{dy} \right|.
If there is no invertibility of g but each y admits at most a countable number of roots (i.e., a finite, or countably infinite, number of x_i such that y = g(x_i)) then the previous relation between the probability density functions can be generalized with
:f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|
where x_i = g_i^{-1}(y), according to the inverse function theorem. The formulas for densities do not demand g to be increasing.

In the measure-theoretic, axiomatic approach to probability, if X is a random variable on \Omega and g\colon \mathbb{R} \rightarrow \mathbb{R} is a Borel measurable function, then Y = g(X) is also a random variable on \Omega, since the composition of measurable functions is also measurable. (However, this is not necessarily true if g is Lebesgue measurable.) The same procedure that allowed one to go from a probability space (\Omega, P) to (\mathbb{R}, dF_X) can be used to obtain the distribution of Y.
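A small numerical check of the monotone change-of-variables relation (Python; the choice X ~ Uniform(0, 1) with g(x) = x^2 is purely illustrative): with h(y) = sqrt(y) the formula predicts F_Y(y) = F_X(h(y)) = sqrt(y) on (0, 1), which is compared against a Monte Carlo estimate of P(Y \le y).

```python
import random

# X ~ Uniform(0, 1) (illustrative); g(x) = x**2 is increasing on (0, 1),
# with inverse h(y) = sqrt(y).  The relation gives F_Y(y) = F_X(h(y)) = sqrt(y)
# and, by differentiation, f_Y(y) = 1 / (2 * sqrt(y)) on (0, 1).

samples = [random.random() ** 2 for _ in range(100_000)]   # samples of Y = g(X)

for y in (0.04, 0.25, 0.81):
    empirical = sum(s <= y for s in samples) / len(samples)
    predicted = y ** 0.5                                   # F_X(h(y)) = sqrt(y)
    print(y, round(empirical, 3), predicted)               # the two columns agree
```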


Example 1

Let X be a real-valued, continuous random variable and let Y = X^2.
:F_Y(y) = \operatorname{P}(X^2 \le y).
If y < 0, then P(X^2 \leq y) = 0, so
:F_Y(y) = 0\qquad\hbox{if}\quad y < 0.
If y \geq 0, then
:\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),
so
:F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})\qquad\hbox{if}\quad y \ge 0.


Example 2

Suppose X is a random variable with a cumulative distribution
: F_{X}(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}}
where \theta > 0 is a fixed parameter. Consider the random variable Y = \mathrm{log}(1 + e^{-X}). Then,
: F_{Y}(y) = P(Y \leq y) = P(\mathrm{log}(1 + e^{-X}) \leq y) = P(X \geq -\mathrm{log}(e^{y} - 1)).\,
The last expression can be calculated in terms of the cumulative distribution of X, so
: \begin{align} F_Y(y) & = 1 - F_X(-\log(e^y - 1)) \\ & = 1 - \frac{1}{(1 + e^{\log(e^y - 1)})^{\theta}} \\ & = 1 - \frac{1}{(e^y)^{\theta}} \\ & = 1 - e^{-y\theta}, \end{align}
which is the cumulative distribution function (CDF) of an exponential distribution.


Example 3

Suppose X is a random variable with a standard normal distribution, whose density is
: f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}.
Consider the random variable Y = X^2. We can find the density using the above formula for a change of variables:
:f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|.
In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,
:f_Y(y) = 2f_X(g^{-1}(y)) \left| \frac{dg^{-1}(y)}{dy} \right|.
The inverse transformation is
:x = g^{-1}(y) = \sqrt{y}
and its derivative is
:\frac{dg^{-1}(y)}{dy} = \frac{1}{2\sqrt{y}}.
Then,
: f_Y(y) = 2\frac{1}{\sqrt{2\pi}}e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}e^{-y/2}.
This is a chi-squared distribution with one degree of freedom.
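A quick Monte Carlo sanity check of this result (Python; a sketch only): squaring standard normal samples and comparing the empirical CDF of Y = X^2 with the chi-squared CDF with one degree of freedom, which for y > 0 equals F_Y(y) = 2\Phi(\sqrt{y}) - 1 = \operatorname{erf}(\sqrt{y/2}).

```python
import math
import random

# Empirical CDF of Y = X^2 where X is standard normal.
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(100_000)]

def chi2_1_cdf(y):
    """CDF of chi-squared with 1 degree of freedom: 2*Phi(sqrt(y)) - 1 = erf(sqrt(y/2))."""
    return math.erf(math.sqrt(y / 2.0))

for y in (0.5, 1.0, 4.0):
    empirical = sum(s <= y for s in samples) / len(samples)
    print(y, round(empirical, 3), round(chi2_1_cdf(y), 3))   # should agree closely
```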


Example 4

Suppose X is a random variable with a normal distribution, whose density is
: f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x-\mu)^2/(2\sigma^2)}.
Consider the random variable Y = X^2. We can find the density using the above formula for a change of variables:
:f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|.
In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). Unlike the previous example, however, there is no symmetry here and we have to compute the two distinct terms:
:f_Y(y) = f_X(g_1^{-1}(y))\left| \frac{dg_1^{-1}(y)}{dy} \right| + f_X(g_2^{-1}(y))\left| \frac{dg_2^{-1}(y)}{dy} \right|.
The inverse transformation is
:x = g_{1,2}^{-1}(y) = \pm \sqrt{y}
and its derivative is
:\frac{dg_{1,2}^{-1}(y)}{dy} = \pm \frac{1}{2\sqrt{y}}.
Then,
: f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \frac{1}{2\sqrt{y}} \left(e^{-(\sqrt{y}-\mu)^2/(2\sigma^2)} + e^{-(-\sqrt{y}-\mu)^2/(2\sigma^2)}\right).
This is a noncentral chi-squared distribution with one degree of freedom.


Some properties

* The probability distribution of the sum of two independent random variables is the convolution of each of their distributions (a short sketch follows this list).
* Probability distributions are not a vector space, since they are not closed under linear combinations (these do not preserve non-negativity or total integral 1), but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
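A minimal sketch (Python) of the first property for discrete distributions: the PMF of the sum of two independent fair dice is the convolution of the two individual PMFs, matching the two-dice example earlier.

```python
from fractions import Fraction

def convolve(pmf_a, pmf_b):
    """PMF of X + Y for independent discrete X, Y given by their PMFs."""
    out = {}
    for a, pa in pmf_a.items():
        for b, pb in pmf_b.items():
            out[a + b] = out.get(a + b, Fraction(0)) + pa * pb
    return out

die = {k: Fraction(1, 6) for k in range(1, 7)}
total = convolve(die, die)
print(total[7])   # 1/6, as in the dice-roll example above
```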


Equivalence of random variables

There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below.


Equality in distribution

If the sample space is a subset of the real line, random variables ''X'' and ''Y'' are ''equal in distribution'' (denoted X \stackrel{d}{=} Y) if they have the same distribution functions:
:\operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\text{for all }x.
To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.
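A small illustration (Python; the uniform example is an assumption chosen for demonstration): X ~ Uniform(0, 1) and Y = 1 − X are defined on the same probability space and are equal in distribution, yet P(X ≠ Y) = 1, so they are not almost surely equal in the sense of the next subsection.

```python
import random

# X ~ Uniform(0, 1) and Y = 1 - X have the same distribution function,
# even though as functions on the sample space they are almost never equal.
xs = [random.random() for _ in range(100_000)]
ys = [1.0 - x for x in xs]

ecdf = lambda data, t: sum(v <= t for v in data) / len(data)
for t in (0.25, 0.5, 0.75):
    print(t, round(ecdf(xs, t), 3), round(ecdf(ys, t), 3))  # nearly identical columns

print(sum(x != y for x, y in zip(xs, ys)) / len(xs))        # 1.0: X != Y almost surely
```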


Almost sure equality

Two random variables ''X'' and ''Y'' are ''equal almost surely'' (denoted X \; \stackrel{\text{a.s.}}{=} \; Y) if, and only if, the probability that they are different is zero:
:\operatorname{P}(X \neq Y) = 0.
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
:d_\infty(X,Y)=\operatorname{ess} \sup_\omega |X(\omega)-Y(\omega)|,
where "ess sup" represents the essential supremum in the sense of measure theory.


Equality

Finally, the two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their measurable space:
:X(\omega)=Y(\omega)\qquad\hbox{for all }\omega.
This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.


Convergence

A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem. There are various senses in which a sequence X_n of random variables can converge to a random variable X. These are explained in the article on convergence of random variables.


See also

* Aleatoricism
* Algebra of random variables
* Event (probability theory)
* Multivariate random variable
* Pairwise independent random variables
* Observable variable
* Random element
* Random function
* Random measure
* Random number generator produces a random value
* Random variate
* Random vector
* Randomness
* Stochastic process
* Relationships among probability distributions

