A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which, for each state, the process changes state after an exponentially distributed holding time and then moves to a different state as specified by the probabilities of a
stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
An example of a CTMC with three states \{0,1,2\} is as follows: the process makes a transition after the amount of time specified by the holding time—an exponential random variable E_i, where i is its current state. Each random variable is independent and such that E_0 \sim \text{Exp}(6), E_1 \sim \text{Exp}(12) and E_2 \sim \text{Exp}(18). When a transition is to be made, the process moves according to the jump chain, a discrete-time Markov chain with stochastic matrix:
:\begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{3} & 0 & \frac{2}{3} \\ \frac{5}{6} & \frac{1}{6} & 0 \end{pmatrix}.
Equivalently, by the property of competing exponentials, this CTMC changes state from state i according to the minimum of two random variables E_{i,j}, which are independent and such that E_{i,j} \sim \text{Exp}(q_{ij}) for i \neq j, where the parameters are given by the Q-matrix Q = (q_{ij}):
:Q = \begin{pmatrix} -6 & 3 & 3 \\ 4 & -12 & 8 \\ 15 & 3 & -18 \end{pmatrix}.
Each non-diagonal entry q_{ij} can be computed as the probability that the jump chain moves from state i to state j, divided by the expected holding time of state i; for example, q_{01} = \tfrac{1/2}{1/6} = 3. The diagonal entries are chosen so that each row sums to 0.
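The jump-chain/holding-time description translates directly into a simulation. Here is a minimal Python sketch (function and variable names are ours, not from the source) that samples a trajectory of the three-state chain above by alternating exponential holding times with jump-chain moves:

```python
import numpy as np

rates = np.array([6.0, 12.0, 18.0])   # holding-time rates: E_i ~ Exp(rates[i])
jump = np.array([[0, 1/2, 1/2],       # stochastic matrix of the jump chain
                 [1/3, 0, 2/3],
                 [5/6, 1/6, 0]])

def simulate(t_max, state=0, rng=np.random.default_rng(0)):
    """Sample the piecewise-constant trajectory up to time t_max,
    returned as a list of (jump time, new state) pairs."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.exponential(1.0 / rates[state])  # exponential holding time
        if t >= t_max:
            return path
        state = rng.choice(3, p=jump[state])      # move according to the jump chain
        path.append((t, state))

print(simulate(1.0))
```

By the competing-exponentials property above, sampling an Exp(q_{ij}) clock for each j \neq i and jumping to whichever rings first would produce the same law.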
A CTMC satisfies the
Markov property
, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and of discrete-time Markov chains.
Definition
Let (\Omega, \mathcal F, \Pr) be a probability space, let S be a countable nonempty set, and let T = \mathbb R_{\geq 0} (T for "time"). Equip S with the discrete metric, so that we can make sense of right continuity of functions T \to S. A continuous-time Markov chain is defined by:
* A probability vector \lambda on S (which below we will interpret as the ''initial distribution'' of the Markov chain), and
* A rate matrix Q on S, that is, a function Q : S \times S \to \mathbb R such that
# Q_{ij} \geq 0 for all distinct i, j \in S,
# \sum_{j \in S \setminus \{i\}} Q_{ij} = -Q_{ii} for all i \in S.
(Even if S is infinite, this sum is ''a priori'' well defined (possibly equalling +\infty) because each term appearing in the sum is nonnegative. ''A posteriori'', we know the sum must also be finite (not equal to +\infty), since we're assuming it's equal to -Q_{ii} and we've assumed Q is real valued. Some authors instead use a definition that's word-for-word the same except for a modified stipulation Q : S \times S \to [-\infty, \infty), and say Q is ''stable'' or ''totally stable'' to mean Q : S \times S \to \mathbb R, i.e., every entry is real valued.)
Note that the row sums of Q are 0:
:\sum_{j \in S} Q_{ij} = 0 \quad \text{for all } i \in S,
or more succinctly, Q \mathbf{1} = 0. This situation contrasts with the situation for discrete-time Markov chains, where all row sums of the transition matrix equal unity.
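For a finite state space these two conditions are easy to check mechanically; the following is a minimal sketch (the helper name is ours):

```python
import numpy as np

def is_rate_matrix(Q, tol=1e-12):
    """Check the two defining conditions of a (finite) rate matrix:
    nonnegative off-diagonal entries and zero row sums."""
    Q = np.asarray(Q, dtype=float)
    off_diag = Q - np.diag(np.diag(Q))          # zero out the diagonal
    return bool(np.all(off_diag >= -tol) and np.all(np.abs(Q.sum(axis=1)) <= tol))

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])   # the three-state example above
print(is_rate_matrix(Q))             # True
```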
Now, let X : T \times \Omega \to S be such that, for each t \in T, the map X_t = X(t, \cdot) is (\mathcal F, 2^S)-measurable. There are three equivalent ways to define X being ''Markov with initial distribution \lambda and rate matrix Q'': via transition probabilities, via the jump chain and holding times, or infinitesimally.
As a prelude to a transition-probability definition, we first motivate the definition of a ''regular'' rate matrix. We will use the transition rate matrix Q to specify the dynamics of the Markov chain by means of generating a collection of ''transition matrices'' P(t) on S (t \in T), via the following theorem: there exists at least one solution (P(t))_{t \in T} of the system of Kolmogorov backward equations
:P'(t) = Q\,P(t), \qquad P(0) = I,
where I denotes the identity matrix indexed by S.
We say Q is ''regular'' to mean that we do have uniqueness for the above system, i.e., that there exists exactly one solution. We say Q is ''irregular'' to mean Q is not regular. If S is finite, then there is exactly one solution, namely P(t) = e^{tQ}, and hence Q is regular. Otherwise, S is infinite, and there exist irregular transition rate matrices on S. If Q is regular, then for the unique solution P, for each t \in T, P(t) will be a stochastic matrix. We will assume Q is regular from the beginning of the following subsection up through the end of this section, even though it is conventional to not include this assumption. (Note for the expert: thus we are not defining continuous-time Markov chains in general but only ''non-explosive'' continuous-time Markov chains.)
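For finite S, the unique solution P(t) = e^{tQ} and its defining properties can be spot-checked numerically. A sketch using scipy (the test point t and tolerances are arbitrary choices of ours):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])

t, dt = 0.3, 1e-6
P = expm(t * Q)                                             # P(t) = e^{tQ}
dP = (expm((t + dt) * Q) - expm((t - dt) * Q)) / (2 * dt)   # numerical P'(t)

print(np.allclose(dP, Q @ P, atol=1e-3))   # backward equation P'(t) = Q P(t)
print(np.allclose(P.sum(axis=1), 1.0))     # each P(t) is a stochastic matrix
```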
Transition-probability definition
Let (P(t))_{t \in T} be the (unique) solution of the above system of backward equations. (Uniqueness is guaranteed by our assumption that Q is regular.) We say X is ''Markov with initial distribution \lambda and rate matrix Q'' to mean: for any nonnegative integer n, for all t_0, \dots, t_{n+1} \in T such that t_0 < \dots < t_{n+1}, and for all i_0, \dots, i_{n+1} \in S,
:\Pr(X_{t_0} = i_0, \dots, X_{t_{n+1}} = i_{n+1}) = \lambda_{i_0} P(t_1 - t_0)_{i_0 i_1} \cdots P(t_{n+1} - t_n)_{i_n i_{n+1}},
and the trajectories of X are almost surely right continuous (with respect to the discrete metric on S): there exists a \Pr-null set N such that \{\omega \in \Omega : t \mapsto X(t, \omega) \text{ is not right continuous}\} \subseteq N.
Jump-chain/holding-time definition
Sequences associated to a right-continuous function
Let f : T \to S be right continuous (when we equip S with the discrete metric). Define
:h = h(f) = \left(\inf\left(\{u \in (0, \infty) : f(t+u) \neq f(t)\} \cup \{+\infty\}\right)\right)_{t \in T},
the function assigning to each time t the time until the first change of state after t (possibly +\infty). Let
:H = H(f) = (H_n)_{n \in \mathbb Z_{\geq 0}}, \qquad H_0 = h_0, \quad H_{n+1} = \begin{cases} h_{H_0 + \cdots + H_n} & \text{if } H_0 + \cdots + H_n < +\infty, \\ +\infty & \text{otherwise}, \end{cases}
be the ''holding-time sequence'' associated to f, choose s \in S, and let
:y = y(f) = \left(\begin{cases} f\left(\sum_{k<n} H_k\right) & \text{if } \sum_{k<n} H_k < +\infty, \\ s & \text{otherwise} \end{cases}\right)_{n \in \mathbb Z_{\geq 0}}
be "the ''state sequence''" associated to f.
Definition of the jump matrix Π
The jump matrix \Pi, alternatively written \Pi(Q) if we wish to emphasize the dependence on Q, is the matrix with entries
:\Pi_{ij} = \begin{cases} [i = j] & \text{if } i \in Z, \\ 0 & \text{if } i \notin Z \text{ and } j = i, \\ -\frac{Q_{ij}}{Q_{ii}} & \text{if } i \notin Z \text{ and } j \neq i, \end{cases}
where [i = j] is 1 if i = j and 0 otherwise, and Z = Z(Q) = \{i \in S : Q_{ii} = 0\} is the ''zero set'' of the function (Q_{ii})_{i \in S}.
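The case analysis above translates directly into code; here is a minimal sketch (helper name ours) computing \Pi(Q) for a finite rate matrix:

```python
import numpy as np

def jump_matrix(Q):
    """Jump matrix Pi(Q): absorbing states (Q_ii == 0) stay put;
    otherwise Pi_ij = -Q_ij / Q_ii off the diagonal and 0 on it."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    Pi = np.eye(n)                       # rows of absorbing states: Pi_ij = [i = j]
    for i in range(n):
        if Q[i, i] != 0:                 # i is not in the zero set Z
            Pi[i] = -Q[i] / Q[i, i]
            Pi[i, i] = 0.0
    return Pi

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])
print(jump_matrix(Q))   # recovers the stochastic matrix of the introductory example
```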
Jump-chain/holding-time property
We say X is ''Markov with initial distribution \lambda and rate matrix Q'' to mean: the trajectories of X are almost surely right continuous; letting f be a modification of X with (everywhere) right-continuous trajectories, \sum_{n \in \mathbb Z_{\geq 0}} H(f(\omega))_n = +\infty almost surely (note to experts: this condition says X is non-explosive); the state sequence y(f(\omega)) is a discrete-time Markov chain with initial distribution \lambda (jump-chain property) and transition matrix \Pi(Q); and for all n \in \mathbb Z_{\geq 0}, all B \in \mathcal B(\mathbb R_{>0}) and all i \in S with \Pr(y_n(f) = i) > 0,
:\Pr(H_n(f) \in B \mid y_n(f) = i) = \operatorname{Exp}(-Q_{ii})(B)
(the holding-time property).
Infinitesimal definition

We say X is ''Markov with initial distribution \lambda and rate matrix Q'' to mean: for all i \in S, \Pr(X(0) = i) = \lambda_i, and for all i, j \in S and all t \in T such that \Pr(X(t) = i) > 0, the following holds for small strictly positive values of h:
:\Pr(X(t+h) = j \mid X(t) = i) = [i = j] + Q_{ij}\,h + o(h),
where the little-o term o(h) may depend on i, j and t.
The above equation shows that Q_{ij} can be seen as measuring how quickly the transition from i to j happens for i \neq j, and how quickly the transition away from i happens for i = j.
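For a finite chain this infinitesimal description can be verified numerically, since P(h) = e^{hQ} = I + hQ + o(h); a sketch (step sizes are ours):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])

for h in (1e-2, 1e-4, 1e-6):
    # finite-difference generator estimate: (P(h) - I) / h -> Q as h -> 0
    approx = (expm(h * Q) - np.eye(3)) / h
    print(h, np.max(np.abs(approx - Q)))   # error shrinks like O(h)
```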
Properties
Communicating classes
Communicating classes, transience, recurrence and positive and null recurrence are defined identically to the corresponding notions for discrete-time Markov chains.
Transient behaviour
Write P(t) for the matrix with entries p_{ij}(t) = \Pr(X_t = j \mid X_0 = i). Then the matrix P(t) satisfies the forward equation, a
first-order differential equation
:
P'(t) = P(t) Q
where the prime denotes differentiation with respect to ''t''. The solution to this equation is given by a
matrix exponential
:
P(t) = e^{tQ}.
In a simple case such as a CTMC on the state space \{1, 2\}, the general Q matrix for such a process is the following 2 × 2 matrix with \alpha, \beta > 0
:
Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.
The above relation for the forward matrix can be solved explicitly in this case to give
:
P(t) = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} \\
\frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t}
\end{pmatrix}
However, direct solutions are complicated to compute for larger matrices. Instead, the fact that Q is the generator for a semigroup of matrices,
:P(t+s) = e^{(t+s)Q} = e^{tQ}\,e^{sQ} = P(t)\,P(s),
is used.
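Both the closed form above and the semigroup property can be confirmed numerically for a concrete choice of rates; a sketch (the values of \alpha, \beta and the time points are arbitrary choices of ours):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 2.0, 5.0          # arbitrary positive rates
Q = np.array([[-alpha, alpha],
              [beta, -beta]])

def P_closed(t):
    """Closed-form P(t) for the two-state chain derived above."""
    s = alpha + beta
    e = np.exp(-s * t)
    return np.array([[beta / s + alpha / s * e, alpha / s - alpha / s * e],
                     [beta / s - beta / s * e, alpha / s + beta / s * e]])

t, u = 0.7, 0.4
print(np.allclose(P_closed(t), expm(t * Q)))                      # matches e^{tQ}
print(np.allclose(expm((t + u) * Q), expm(t * Q) @ expm(u * Q)))  # semigroup property
```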
Stationary distribution
The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of ''t''. Observe that for the two-state process considered earlier with P(''t'') given by
:
P(t) = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} \\
\frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t}
\end{pmatrix}
as ''t'' → ∞ the distribution tends to
:
P_\pi = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta}
\end{pmatrix}
Observe that each row has the same distribution, since the limit does not depend on the starting state. The row vector \pi may be found by solving
:
\pi Q = 0.
with the additional constraint that
:
\sum_{i \in S} \pi_i = 1.
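Numerically, \pi can be computed as a normalized left null vector of Q. A minimal sketch (helper name ours), checked on the two-state chain, for which \pi = (\beta, \alpha)/(\alpha+\beta):

```python
import numpy as np
from scipy.linalg import null_space

def stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 via the null space of Q^T."""
    pi = null_space(np.asarray(Q, dtype=float).T)[:, 0]  # left null vector of Q
    return pi / pi.sum()                                  # normalize to sum to 1

alpha, beta = 2.0, 5.0
Q2 = np.array([[-alpha, alpha], [beta, -beta]])
print(stationary(Q2))   # [beta, alpha] / (alpha + beta) = [5/7, 2/7]
```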
Example 1

Consider a continuous-time Markov chain with three states and transition rate matrix
:Q = \begin{pmatrix}
-0.025 & 0.02 & 0.005 \\
0.3 & -0.5 & 0.2 \\
0.02 & 0.4 & -0.42
\end{pmatrix}.
The stationary distribution of this chain can be found by solving
\pi Q=0, subject to the constraint that elements must sum to 1 to obtain
:
\pi = \begin{pmatrix} 0.885 & 0.071 & 0.044 \end{pmatrix}.
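The same computation in code (along the lines of the sketch above) reproduces the stated distribution:

```python
import numpy as np
from scipy.linalg import null_space

Q = np.array([[-0.025, 0.02, 0.005],
              [0.3, -0.5, 0.2],
              [0.02, 0.4, -0.42]])

pi = null_space(Q.T)[:, 0]   # left null vector of Q
pi /= pi.sum()               # normalize so the elements sum to 1
print(np.round(pi, 3))       # [0.885 0.071 0.044]
```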
Example 2

Consider a continuous-time Markov chain modeling Pac-Man with state space \{1, 2, \ldots, 9\}. The player controls Pac-Man through a maze, eating pac-dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze shall be a small 3×3 grid, and the ghosts move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with probability zero are removed in the following transition rate matrix:
Q=\begin{pmatrix}
-1 & \frac{1}{2} & & \frac{1}{2} & & & & & \\
\frac{1}{4} & -1 & \frac{1}{4} & & \frac{1}{4} & & & \frac{1}{4} & \\
& \frac{1}{2} & -1 & & & \frac{1}{2} & & & \\
\frac{1}{3} & & & -1 & \frac{1}{3} & & \frac{1}{3} & & \\
& \frac{1}{4} & & \frac{1}{4} & -1 & \frac{1}{4} & & \frac{1}{4} & \\
& & \frac{1}{3} & & \frac{1}{3} & -1 & & & \frac{1}{3} \\
& & & \frac{1}{2} & & & -1 & \frac{1}{2} & \\
& \frac{1}{4} & & & \frac{1}{4} & & \frac{1}{4} & -1 & \frac{1}{4} \\
& & & & & \frac{1}{2} & & \frac{1}{2} & -1
\end{pmatrix}
This Markov chain is irreducible, because the ghosts can move from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving
\pi Q=0, subject to the constraint that elements must sum to 1. The solution of this linear equation subject to the constraint is
\pi=(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)\%.
The central state and the states 2 and 8 adjoining the secret passageway are visited most, and the corner states are visited least.
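As a check, the rate matrix can be built programmatically from the maze's adjacency structure and the stationary distribution recovered numerically (the construction below is our own sketch):

```python
import numpy as np
from scipy.linalg import null_space

# Build Q for the 3x3 maze: neighbors are horizontal/vertical moves,
# plus the secret passageway between states 2 and 8 (1-based labels).
edges = [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9),   # horizontal
         (1, 4), (4, 7), (2, 5), (5, 8), (3, 6), (6, 9),   # vertical
         (2, 8)]                                           # secret passageway
Q = np.zeros((9, 9))
for a, b in edges:
    Q[a - 1, b - 1] = Q[b - 1, a - 1] = 1.0
Q /= Q.sum(axis=1, keepdims=True)   # uniform move among available directions
Q -= np.eye(9)                      # total exit rate 1 from every state

pi = null_space(Q.T)[:, 0]
pi /= pi.sum()
print(np.round(100 * pi, 1))   # [ 7.7 15.4  7.7 11.5 15.4 11.5  7.7 15.4  7.7]
```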
Time reversal
For a CTMC X_t, the time-reversed process is defined to be \hat X_t = X_{T-t}. By
Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process.
Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
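As an illustration, the criterion can be tested on the three-state chain from the introduction by comparing the rate products around its one loop (a sketch):

```python
import numpy as np

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])

# Kolmogorov's criterion on the loop 0 -> 1 -> 2 -> 0: compare the products of
# transition rates around the loop in the two directions.
forward = Q[0, 1] * Q[1, 2] * Q[2, 0]    # 3 * 8 * 15 = 360
backward = Q[0, 2] * Q[2, 1] * Q[1, 0]   # 3 * 3 * 4  = 36
print(forward == backward)               # False: this chain is not reversible
```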
Embedded Markov chain
One method of finding the stationary probability distribution, \pi, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_{ij}, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by
:
s_{ij} = \begin{cases}
\frac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j \\
0 & \text{otherwise}.
\end{cases}
From this, ''S'' may be written as
:
S = I - \left( \operatorname{diag}(Q) \right)^{-1} Q
where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.
To find the stationary probability distribution vector \pi, we must next find \varphi such that
:\varphi S = \varphi,
with \varphi being a row vector, such that all elements in \varphi are greater than 0 and \|\varphi\|_1 = 1. From this, \pi may be found as
:\pi = \frac{-\varphi \left(\operatorname{diag}(Q)\right)^{-1}}{\left\|\varphi \left(\operatorname{diag}(Q)\right)^{-1}\right\|_1}.
(S may be periodic, even if Q is not. Once \pi is found, it must be normalized to a unit vector.)
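Putting these steps together for the three-state chain from the introduction, a sketch (scipy's null_space is used to find the fixed row vector \varphi):

```python
import numpy as np
from scipy.linalg import null_space

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])

D_inv = np.diag(1.0 / np.diag(Q))        # (diag(Q))^{-1}
S = np.eye(3) - D_inv @ Q                # EMC transition matrix S = I - diag(Q)^{-1} Q

phi = null_space(S.T - np.eye(3))[:, 0]  # fixed row vector: phi S = phi
phi /= phi.sum()                         # normalize so ||phi||_1 = 1

pi = -phi @ D_inv                        # pi proportional to -phi diag(Q)^{-1}
pi /= pi.sum()                           # normalize to a probability vector
print(pi, np.allclose(pi @ Q, 0))        # stationary: pi Q = 0
```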
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing ''X''(''t'') at intervals of δ units of time. The random variables ''X''(0), ''X''(δ), ''X''(2δ), ... give the sequence of states visited by the δ-skeleton.
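For a finite chain the one-step transition matrix of the δ-skeleton is P(δ) = e^{δQ}, as in this sketch (the value of δ is an arbitrary choice of ours):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-6.0, 3.0, 3.0],
              [4.0, -12.0, 8.0],
              [15.0, 3.0, -18.0]])

delta = 0.1
P_skeleton = expm(delta * Q)   # one-step transition matrix of the delta-skeleton
print(P_skeleton.sum(axis=1))  # rows sum to 1: a genuine stochastic matrix
```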
See also
* Kolmogorov equations (Markov jump process)