Harris Chain
In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times. Harris chains are regenerative processes and are named after Theodore Harris. The theory of Harris chains and Harris recurrence is useful for treating Markov chains on general (possibly uncountably infinite) state spaces.


Definition

Let \{X_n\} be a Markov chain on a general state space \Omega with stochastic kernel K. The kernel represents a generalized one-step transition probability law, so that P(X_{n+1}\in C\mid X_n=x)=K(x,C) for all states x\in\Omega and all measurable sets C\subseteq\Omega. The chain \{X_n\} is a ''Harris chain'' (Durrett 2005) if there exist A\subseteq\Omega, \varepsilon>0, and a probability measure \rho with \rho(\Omega)=1 such that

1. If \tau_A:=\inf\{n\ge 0: X_n\in A\}, then P(\tau_A<\infty\mid X_0=x)=1 for all x\in\Omega.
2. If x\in A and C\subseteq\Omega is measurable, then K(x,C)\geq \varepsilon\rho(C).

The first part of the definition ensures that the chain returns to the set A with probability 1, regardless of where it starts; it follows that the chain visits A infinitely often (with probability 1). The second part implies that once the Markov chain is in A, its next state can be generated with the help of an independent Bernoulli coin flip. To see this, first note that the parameter \varepsilon must be between 0 and 1 (this can be shown by applying the second part of the definition to the set C=\Omega). Now let x be a point in A and suppose X_n=x. To choose the next state X_{n+1}, independently flip a biased coin with success probability \varepsilon. If the coin flip is successful, choose the next state X_{n+1}\in\Omega according to the probability measure \rho. Otherwise (and if \varepsilon<1), choose the next state X_{n+1} according to the measure P(X_{n+1}\in C\mid X_n=x)=(K(x,C)-\varepsilon\rho(C))/(1-\varepsilon), defined for all measurable subsets C\subseteq\Omega.

Two random processes \{X_n\} and \{Y_n\} that have the same probability law and are Harris chains according to the above definition can be coupled as follows: suppose that X_n=x and Y_n=y, where x and y are points in A. Using the same coin flip to decide the next state of both processes, it follows that the next states are the same with probability at least \varepsilon.
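Since part 2 makes the one-step transition from A a mixture of \rho (with weight \varepsilon) and a residual kernel (with weight 1-\varepsilon), the coin-flip construction can be sketched directly. Below is a minimal Python illustration on a three-state space; the kernel K, the set A, \varepsilon, and \rho are hypothetical values chosen so that the minorization K(x,C)\ge\varepsilon\rho(C) holds on A:

import random

# Illustrative three-state chain; these numbers are assumptions for the sketch.
STATES = [0, 1, 2]
K = {0: [0.5, 0.3, 0.2],   # K[x][c] = P(X_{n+1} = c | X_n = x)
     1: [0.4, 0.4, 0.2],
     2: [0.3, 0.3, 0.4]}
A = {0, 1}                 # the set A from the definition
EPS = 0.6                  # K[x][c] >= EPS * RHO[c] for all x in A (check by hand)
RHO = [0.5, 0.5, 0.0]      # probability measure rho with total mass 1

def sample(weights):
    return random.choices(STATES, weights=weights)[0]

def next_state(x, coin=None):
    """One step of the chain, using the Bernoulli split when x is in A."""
    if x in A:
        if coin is None:
            coin = random.random() < EPS        # biased coin, success prob EPS
        if coin:
            return sample(RHO)                  # success: regenerate from rho
        # failure: draw from the residual measure (K(x, .) - EPS*rho) / (1 - EPS)
        return sample([(K[x][c] - EPS * RHO[c]) / (1 - EPS) for c in STATES])
    return sample(K[x])                         # outside A: ordinary transition

Feeding the same coin (and, on success, the same draw from \rho) to two copies of the chain whose current states both lie in A is exactly the coupling described above: the two chains coalesce with probability at least \varepsilon.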


Examples


Example 1: Countable state space

Let \Omega be a countable state space. The kernel K is defined by the one-step conditional transition probabilities P(X_{n+1}=y\mid X_n=x) for x,y\in\Omega. The measure \rho is a probability mass function on the states, so that \rho(x)\ge 0 for all x\in\Omega, and the sum of the \rho(x) probabilities is equal to one. Suppose the above definition is satisfied for a given set A\subseteq\Omega and a given parameter \varepsilon>0. Then P(X_{n+1}=c\mid X_n=x)\geq\varepsilon\rho(c) for all x\in A and all c\in\Omega.
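A concrete way to produce such an \varepsilon and \rho for a finite-state chain is to take the pointwise minimum of the transition rows indexed by A and normalize it. A small Python sketch (the transition matrix and the set A are illustrative assumptions):

# Construct a minorization P(x, .) >= eps * rho(.) for x in A by taking
# pointwise minima of the rows of an illustrative transition matrix.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
A = [0, 1]

mins = [min(P[x][c] for x in A) for c in range(len(P))]  # pointwise minimum over A
eps = sum(mins)                                          # total mass of that minimum
rho = [m / eps for m in mins]                            # normalize to a probability measure

# By construction P[x][c] >= eps * rho[c] for every x in A and every c,
# which is exactly part 2 of the definition.
print(eps, rho)  # here eps = 0.7 and rho = [2/7, 3/7, 2/7]

This works whenever the rows of P indexed by A put mass on at least one common state, so that eps > 0.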


Example 2: Chains with continuous densities

Let \{X_n\}, X_n\in\mathbb{R}^d, be a Markov chain with a kernel that is absolutely continuous with respect to Lebesgue measure:

K(x,dy)=K(x,y)\,dy

such that K(x,y) is a continuous function. Pick (x_0,y_0) such that K(x_0,y_0)>0, and let A and \Omega be open sets containing x_0 and y_0 respectively that are sufficiently small so that K(x,y)\geq\varepsilon>0 on A\times\Omega. Letting \rho(C)=|\Omega\cap C|/|\Omega|, where |\Omega| is the Lebesgue measure of \Omega, we have that (2) in the above definition holds. If (1) holds, then \{X_n\} is a Harris chain.
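As a numerical illustration, consider the autoregressive chain X_{n+1}=0.5X_n+\xi_n with standard normal noise \xi_n, whose kernel has a continuous Gaussian density in y; this chain, the point (x_0,y_0)=(0,0), and the two small intervals below are assumptions for the sketch:

import math

def k(x, y):
    """Transition density of X_{n+1} = 0.5*X_n + N(0,1) noise (Gaussian in y)."""
    return math.exp(-0.5 * (y - 0.5 * x) ** 2) / math.sqrt(2.0 * math.pi)

# Small intervals around x0 = 0 and y0 = 0 playing the roles of A and the
# open set Omega of this example (illustrative choices).
a_lo, a_hi = -0.5, 0.5     # the set A
o_lo, o_hi = -0.5, 0.5     # the small open set around y0

# Lower-bound the continuous density on A x Omega with a grid search.
xs = [a_lo + i * (a_hi - a_lo) / 100 for i in range(101)]
ys = [o_lo + j * (o_hi - o_lo) / 100 for j in range(101)]
density_floor = min(k(x, y) for x in xs for y in ys)

# With rho = normalized Lebesgue measure on Omega, we get
# K(x, C) >= eps * rho(C) for x in A, where eps = density_floor * |Omega|.
eps = density_floor * (o_hi - o_lo)
print(eps)  # roughly 0.30 > 0, so part (2) of the definition holds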


Reducibility and periodicity

In the following, R:=\inf\{n\ge 1: X_n\in A\}; i.e. R is the first time after time 0 that the process enters region A. Let \mu denote the initial distribution of the Markov chain, i.e. X_0\sim\mu.

Definition: If for all \mu, P(R<\infty\mid X_0\in A)=1, then the Harris chain is called ''recurrent''.

Definition: A recurrent Harris chain \{X_n\} is ''aperiodic'' if there exists N such that for all n\ge N and all \mu, P(X_n\in A\mid X_0\in A)>0.

Theorem: Let \{X_n\} be an aperiodic recurrent Harris chain with stationary distribution \pi. If P(X_0\in A)=1, then as n\rightarrow\infty, \|\mathcal{L}(X_n\mid X_0)-\pi\|_{TV}\rightarrow 0, where \|\cdot\|_{TV} denotes the total variation norm for signed measures defined on the same measurable space.
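A small numerical check of the theorem's conclusion: a finite chain with a strictly positive transition matrix is aperiodic and satisfies the Harris conditions, so iterating the kernel should drive the law of X_n to \pi in total variation. The matrix below is an illustrative assumption:

# Total variation distance ||L(X_n | X_0) - pi||_TV for an illustrative
# aperiodic three-state chain, with X_0 = 0 (so P(X_0 in A) = 1 for A = {0}).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

def step(dist):
    """One application of the kernel: dist'(c) = sum_x dist(x) * P[x][c]."""
    return [sum(dist[x] * P[x][c] for x in range(3)) for c in range(3)]

# Approximate the stationary distribution pi by iterating far out.
pi = [1.0, 0.0, 0.0]
for _ in range(1000):
    pi = step(pi)

law = [1.0, 0.0, 0.0]       # the law of X_0
for n in range(1, 11):
    law = step(law)
    tv = 0.5 * sum(abs(law[c] - pi[c]) for c in range(3))
    print(n, round(tv, 6))  # decreases toward 0, as the theorem asserts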


References

R. Durrett. ''Probability: Theory and Examples''. Thomson, 2005.