This article contains examples of Markov chains and Markov processes in action.
All examples are in the countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space.
Discrete-time
Board games played with dice
A game of snakes and ladders or any other game whose moves are determined entirely by dice is a Markov chain, indeed, an absorbing Markov chain. This is in contrast to card games such as
blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state, and the next roll of the dice. It doesn't depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states.
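To make the absorbing-chain view concrete, the following sketch builds the transition matrix of a toy dice game and computes the expected number of turns to finish via the fundamental matrix of the absorbing chain. The 10-square board, its single ladder and snake, and the stay-put overshoot rule are all invented for illustration.

```python
import numpy as np

# A toy board of squares 0..9; square 9 is the absorbing "finish".
# One ladder (2 -> 7) and one snake (8 -> 3); layout is made up here.
jumps = {2: 7, 8: 3}
N_SQ = 10

# Transition matrix: the next square depends only on the current square
# and the die roll, which is exactly what makes the game a Markov chain.
P = np.zeros((N_SQ, N_SQ))
for s in range(N_SQ - 1):
    for roll in range(1, 7):
        t = s + roll
        if t > N_SQ - 1:
            t = s                  # overshooting the finish: stay put
        t = jumps.get(t, t)        # slide along a snake or ladder
        P[s, t] += 1 / 6
P[N_SQ - 1, N_SQ - 1] = 1.0        # finish square is absorbing

# Expected turns to finish via the fundamental matrix N = (I - Q)^(-1),
# where Q is the transient-to-transient block. (Jump squares 2 and 8 are
# never actually occupied; keeping their rows is harmless.)
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(N_SQ - 1) - Q)
print("expected turns from the start:", N[0].sum())
```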
Random walk Markov chains
A center-biased random walk
Consider a random walk on the number line where, at each step, the position (call it ''x'') may change by +1 (to the right) or −1 (to the left) with probabilities:
: P_{\text{move left}} = \frac{1}{2} + \frac{1}{2}\left(\frac{x}{c+|x|}\right)
: P_{\text{move right}} = 1 - P_{\text{move left}}
(where ''c'' is a constant greater than 0)
For example, if the constant, ''c'', equals 1, the probabilities of a move to the left at positions ''x'' = −2, −1, 0, 1, 2 are given by
: \tfrac{1}{6}, \tfrac{1}{4}, \tfrac{1}{2}, \tfrac{3}{4}, \tfrac{5}{6}
respectively. The random walk has a centering effect that weakens as ''c'' increases.
Since the probabilities depend only on the current position (value of ''x'') and not on any prior positions, this biased random walk satisfies the definition of a Markov chain.
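A minimal simulation of this walk, using the probabilities above (the trajectory length and the choice ''c'' = 1 are arbitrary):

```python
import random

def step(x: int, c: float = 1.0) -> int:
    """One step of the center-biased walk: the move's distribution
    depends only on the current position x (the Markov property)."""
    p_left = 0.5 + 0.5 * (x / (c + abs(x)))
    return x - 1 if random.random() < p_left else x + 1

# Sample a trajectory; the centering bias keeps it near the origin.
x = 0
positions = []
for _ in range(10_000):
    x = step(x, c=1.0)
    positions.append(x)
print("mean position:", sum(positions) / len(positions))
print("max |position|:", max(abs(p) for p in positions))
```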
Gambling
Suppose that you start with $10, and you wager $1 on an unending, fair coin toss indefinitely, or until you lose all of your money. If X_n represents the number of dollars you have after ''n'' tosses, with X_0 = 10, then the sequence \{X_n : n \in \mathbb{N}\}
is a Markov process. If I know that you have $12 now, then it would be expected that with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the
Markov property, the memoryless property of a stochastic process.
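A short simulation of this gambler's-ruin process (the trial counts and the toss cap are arbitrary choices for illustration):

```python
import random

def ruin_fraction(start=10, trials=2_000, max_tosses=10_000):
    """Fraction of trials that hit $0 within max_tosses fair $1 wagers.
    The bankroll after each toss depends only on the current bankroll,
    never on the path taken to reach it (the Markov property)."""
    ruined = 0
    for _ in range(trials):
        money = start
        for _ in range(max_tosses):
            money += 1 if random.random() < 0.5 else -1
            if money == 0:
                ruined += 1
                break
    return ruined / trials

# For a fair coin, ruin is eventually certain, but hitting times are
# heavy-tailed, so a capped simulation underestimates the probability 1.
print("ruin fraction within the cap:", ruin_fraction())
```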
A simple weather model
The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the preceding day,
can be represented by a
transition matrix:
: P = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}
The matrix ''P'' represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day.
The columns can be labelled "sunny" and "rainy", and the rows can be labelled in the same order.

(''P'')_{''ij''} is the probability that, if a given day is of type ''i'', it will be followed by a day of type ''j''.
Notice that the rows of ''P'' sum to 1: this is because ''P'' is a
stochastic matrix.
Predicting the weather
The weather on day 0 (today) is known to be sunny. This is represented by an initial state vector in which the "sunny" entry is 100%, and the "rainy" entry is 0%:
: x^{(0)} = \begin{bmatrix} 1 & 0 \end{bmatrix}
The weather on day 1 (tomorrow) can be predicted by multiplying the state vector from day 0 by the transition matrix:
: x^{(1)} = x^{(0)} P = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix}
Thus, there is a 90% chance that day 1 will also be sunny.
The weather on day 2 (the day after tomorrow) can be predicted in the same way, from the state vector we computed for day 1:
: x^{(2)} = x^{(1)} P = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix}
or
: x^{(2)} = x^{(0)} P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}^2 = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix}
General rules for day ''n'' are:
: x^{(n)} = x^{(n-1)} P
: x^{(n)} = x^{(0)} P^n
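These rules are easy to check numerically; a minimal sketch using NumPy (variable names are my own) that propagates the state vector forward day by day:

```python
import numpy as np

# Transition matrix from the weather example; rows and columns are
# ordered [sunny, rainy].
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

x = np.array([1.0, 0.0])        # day 0 is known to be sunny

# Apply x^(n) = x^(n-1) P repeatedly; the output converges toward the
# steady state discussed in the next section.
for day in range(1, 6):
    x = x @ P
    print(f"day {day}: P(sunny) = {x[0]:.4f}, P(rainy) = {x[1]:.4f}")
```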
Steady state of the weather
In this example, predictions for the weather on more distant days change less and less on each subsequent day and tend towards a steady state vector. This vector represents the probabilities of sunny and rainy weather on all days, and is independent of the initial weather.
The steady state vector is defined as:
: q = \lim_{n \to \infty} x^{(n)}
but converges to a strictly positive vector only if ''P'' is a regular transition matrix (that is, there is at least one power ''P''^''n'' with all non-zero entries).
Since q is independent of initial conditions, it must be unchanged when transformed by ''P''. This makes it an eigenvector (with eigenvalue 1), which means it can be derived from ''P''.
In layman's terms, the steady-state vector is the vector that, when multiplied by ''P'', gives back exactly the same vector.
For the weather example, writing q = \begin{bmatrix} a & b \end{bmatrix}, we can use this to set up a matrix equation:
: \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} a & b \end{bmatrix}
and since q is a probability vector we know that
: a + b = 1
Solving this pair of simultaneous equations gives the steady state vector:
: q = \begin{bmatrix} a & b \end{bmatrix} = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix} \approx \begin{bmatrix} 0.833 & 0.167 \end{bmatrix}
In conclusion, in the long term about 83.3% of days are sunny. It is important to realize that not all Markov processes have a steady state vector. In particular, the transition matrix must be regular. Otherwise, the state vectors will oscillate over time without converging.
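One way to compute q numerically is to find the left eigenvector of ''P'' for eigenvalue 1 and normalize it to a probability vector; a minimal sketch using NumPy:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# q solves q P = q, so q is a left eigenvector of P with eigenvalue 1,
# i.e. an (ordinary) eigenvector of P.T. Normalize it to sum to 1.
vals, vecs = np.linalg.eig(P.T)
q = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
q = q / q.sum()
print("steady state:", q)                      # ~ [0.8333, 0.1667]

# Sanity check: every row of P^n converges to q for regular P.
print("row of P^50:", np.linalg.matrix_power(P, 50)[0])
```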
Stock market

A state diagram for a simple example is shown in the figure on the right, using a directed graph to picture the state transitions. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labeling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is
: P = \begin{bmatrix} 0.9 & 0.075 & 0.025 \\ 0.15 & 0.8 & 0.05 \\ 0.25 & 0.25 & 0.5 \end{bmatrix}
The distribution over states can be written as a
stochastic row vector ''x'' with the relation x^{(n+1)} = x^{(n)} P. So if at time ''n'' the system is in state x^{(n)}, then three time periods later, at time ''n'' + 3, the distribution is
: x^{(n+3)} = x^{(n)} P^3

In particular, if at time ''n'' the system is in state 2 (bear), then at time ''n'' + 3 the distribution is
: x^{(n+3)} = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} P^3 = \begin{bmatrix} 0.3575 & 0.56825 & 0.07425 \end{bmatrix}

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:
: \lim_{N \to \infty} P^N = \begin{bmatrix} 0.625 & 0.3125 & 0.0625 \\ 0.625 & 0.3125 & 0.0625 \\ 0.625 & 0.3125 & 0.0625 \end{bmatrix}
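A quick numerical check of these figures, sketched with NumPy; the mean-first-passage computation at the end answers the stagnant-to-bull question posed above and is my own framing, not from the figure:

```python
import numpy as np

# Transition matrix, states ordered [bull, bear, stagnant].
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# Distribution three weeks after a bear week.
x = np.array([0.0, 1.0, 0.0])
print("after 3 weeks:", x @ np.linalg.matrix_power(P, 3))
# -> [0.3575 0.56825 0.07425]

# Long-run fraction of weeks in each state: rows of P^N converge to q.
print("steady state:", np.linalg.matrix_power(P, 200)[0])
# -> [0.625 0.3125 0.0625]

# Mean first-passage time to the bull state (index 0): solve
#   m_i = 1 + sum_{j != 0} P[i, j] * m_j   for i in {bear, stagnant}.
A = np.eye(2) - P[1:, 1:]
m = np.linalg.solve(A, np.ones(2))
print("mean weeks from stagnant to bull:", m[1])   # ~ 5.14
```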
A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.
A finite-state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state ''y'' at time ''n'', then the probability that it moves to state ''x'' at time ''n'' + 1 depends only on the current state.
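As a minimal sketch of this, consider a made-up two-state machine driven by fair coin tosses; the resulting state sequence is a Markov chain, and the empirical transition frequencies approach the transition probabilities:

```python
import random

# A toy two-state machine on {0, 1} driven by i.i.d. coin tosses:
# input 'H' toggles the state, 'T' leaves it alone.
def fsm_step(state: int, symbol: str) -> int:
    return 1 - state if symbol == "H" else state

state = 0
counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
for _ in range(100_000):
    nxt = fsm_step(state, random.choice("HT"))
    counts[(state, nxt)] += 1
    state = nxt

# Each empirical transition probability should be close to 0.5.
for i in (0, 1):
    row = counts[(i, 0)] + counts[(i, 1)]
    print(f"P({i}->0) ~ {counts[(i, 0)] / row:.3f}, "
          f"P({i}->1) ~ {counts[(i, 1)] / row:.3f}")
```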
Continuous-time
A birth–death process
If one pops one hundred kernels of popcorn in an oven, each kernel popping at an independent
exponentially-distributed time, then this would be a
continuous-time Markov process. If X_t
denotes the number of kernels which have popped up to time ''t'', the problem can be defined as finding the number of kernels that will pop in some later time. The only thing one needs to know is the number of kernels that have popped prior to the time ''t''. It is not necessary to know ''when'' they popped, so knowing X_t for previous times ''t'' is not relevant.
The process described here is an approximation of a
Poisson point process – Poisson processes are also Markov processes.
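A minimal simulation of the popcorn process; the popping rate is an arbitrary assumption:

```python
import random

# 100 kernels, each popping at an independent exponential time.
RATE = 1.0                                     # pops per minute (assumed)
pop_times = sorted(random.expovariate(RATE) for _ in range(100))

def popped_by(t: float) -> int:
    """X_t: the number of kernels that have popped up to time t."""
    return sum(1 for s in pop_times if s <= t)

# Because exponential waiting times are memoryless, the future depends
# only on how many kernels remain unpopped at time t, not on when the
# earlier pops happened.
for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t = {t}: popped {popped_by(t)} of 100")
```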
See also
* Mark V. Shaney
* Interacting particle system
* Stochastic cellular automata
References
* S. P. Meyn and R. L. Tweedie, 2005. ''Markov Chains and Stochastic Stability''.
External links
* Monopoly as a Markov chain