Section 10

Inequalities and Markov Chains (BH Chapter 10 & 11)

Statistical Inequalities
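Two workhorse results from BH Chapter 10 are Markov's inequality, P(X >= a) <= E(X)/a for a nonnegative r.v. X and a > 0, and Chebyshev's inequality, P(|X - mu| >= a) <= Var(X)/a^2. As a sketch, here is a numerical check of both bounds; the discrete distribution is purely illustrative:

```python
# Illustrative discrete distribution: value -> probability.
pmf = {0: 0.5, 1: 0.25, 2: 0.15, 4: 0.10}

mean = sum(x * p for x, p in pmf.items())
var = sum((x - mean) ** 2 * p for x, p in pmf.items())

def markov_bound_holds(a):
    # Markov: P(X >= a) <= E(X)/a for nonnegative X and a > 0.
    tail = sum(p for x, p in pmf.items() if x >= a)
    return tail <= mean / a

def chebyshev_bound_holds(a):
    # Chebyshev: P(|X - mu| >= a) <= Var(X)/a^2.
    tail = sum(p for x, p in pmf.items() if abs(x - mean) >= a)
    return tail <= var / a ** 2

print(all(markov_bound_holds(a) for a in (0.5, 1, 2, 3)))    # True
print(all(chebyshev_bound_holds(a) for a in (0.5, 1, 2)))    # True
```

Both bounds hold for every threshold tested, as the inequalities guarantee; they are often loose in practice.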

Markov Chains

Consider a sequence of random variables X_0, X_1, X_2, ... taking values in some state space. What makes such a sequence a Markov Chain is the Markov Property, which says that if you want to predict where the chain is at a future time, you only need to use the present state, and not any past information. In other words, given the present, the future and past are conditionally independent.

Mathematically, this says:

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)
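As a concrete sketch, simulating a chain needs only the current state to sample the next one; the two-state transition matrix below is an illustrative example:

```python
import random

random.seed(0)

# Illustrative transition matrix: Q[i][j] = P(next = j | current = i).
Q = [[0.9, 0.1], [0.5, 0.5]]

def next_state(i):
    # The next state is sampled using only the current state i --
    # the rest of the path is irrelevant. This is the Markov property.
    return random.choices(range(len(Q)), weights=Q[i])[0]

path = [0]
for _ in range(10):
    path.append(next_state(path[-1]))
print(path)
```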

State Properties

A state is either recurrent or transient.

  • If you start at a Recurrent State, then with probability 1 you will return to that state at some

    point in the future. You can leave, but you will always come back eventually.

  • Otherwise you are at a Transient State: there is some positive probability that, once you leave, you will never return.

A state is either periodic or aperiodic.

  • If you start at a Periodic State of period k, then the GCD of all possible numbers of steps in which the chain can return to that state is k (with k > 1).

  • Otherwise you are at an Aperiodic State: the GCD of all possible numbers of steps in which the chain can return to that state is 1.
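The period can be computed straight from this definition: take the GCD over the step counts n for which an n-step return to the state has positive probability. A minimal sketch (the two matrices are illustrative, and truncating at max_steps is an approximation that can only overestimate the true period):

```python
from math import gcd

def mat_mult(A, B):
    # Multiply two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(Q, state, max_steps=20):
    # GCD of all n <= max_steps with P(return to state in n steps) > 0.
    g = 0
    P = [row[:] for row in Q]
    for n in range(1, max_steps + 1):
        if n > 1:
            P = mat_mult(P, Q)       # P is now Q^n
        if P[state][state] > 0:
            g = gcd(g, n)
    return g

cycle = [[0, 1], [1, 0]]        # deterministic 2-cycle: returns in 2, 4, 6, ...
lazy = [[0.5, 0.5], [1, 0]]     # self-loop at state 0: can return in 1 step
print(period(cycle, 0), period(lazy, 0))   # prints: 2 1
```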

Transition Matrix
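The transition matrix Q collects the one-step probabilities q_ij = P(X_{n+1} = j | X_n = i), so each row sums to 1, and the (i, j) entry of the matrix power Q^n gives the n-step transition probability P(X_n = j | X_0 = i). A sketch with a hypothetical two-state matrix:

```python
def mat_mult(A, B):
    # Multiply two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(Q, n):
    # Q^n by repeated multiplication (fine for small examples).
    P = [row[:] for row in Q]
    for _ in range(n - 1):
        P = mat_mult(P, Q)
    return P

# Illustrative 2-state transition matrix; each row sums to 1.
Q = [[0.7, 0.3], [0.4, 0.6]]

Q5 = mat_pow(Q, 5)          # (i, j) entry = P(X_5 = j | X_0 = i)
print([round(x, 4) for x in Q5[0]])
print(abs(sum(Q5[0]) - 1) < 1e-9)    # rows of Q^n still sum to 1
```

For this matrix the rows of Q^n approach (4/7, 3/7) as n grows, which previews the stationary distribution below.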

Chain Properties

A chain is irreducible if you can get from any state to any other state. An irreducible chain with finitely many states must have all of its states recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period.
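Irreducibility is a statement about the directed graph of the chain: every state must be reachable from every other state through edges of positive transition probability, which can be checked with a graph search. A minimal sketch (the 3-state matrices are illustrative):

```python
def reachable(Q, start):
    # All states reachable from `start` via positive-probability edges.
    seen = {start}
    stack = [start]
    while stack:
        i = stack.pop()
        for j, p in enumerate(Q[i]):
            if p > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def is_irreducible(Q):
    # Irreducible iff every state reaches every state.
    n = len(Q)
    return all(reachable(Q, i) == set(range(n)) for i in range(n))

ring = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]               # 0 -> 1 -> 2 -> 0
trap = [[0.5, 0.5, 0], [0.5, 0.5, 0], [0, 0.5, 0.5]]   # state 2 unreachable from {0, 1}
print(is_irreducible(ring), is_irreducible(trap))      # prints: True False
```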

Stationary Distribution

  • If a Markov Chain is irreducible and has finitely many states, then it has a unique stationary distribution. In addition, all entries of this stationary distribution are non-zero (consistent with the fact that all states are recurrent).
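A stationary distribution s satisfies sQ = s (with entries summing to 1): if the chain starts distributed according to s, it stays distributed according to s forever. For an irreducible, aperiodic chain, one can find it numerically by iterating the update s <- sQ until it stops changing; the matrix below is an illustrative example:

```python
def step(s, Q):
    # One update s <- sQ (row-vector times matrix).
    n = len(Q)
    return [sum(s[i] * Q[i][j] for i in range(n)) for j in range(n)]

# Illustrative irreducible, aperiodic 2-state chain.
Q = [[0.7, 0.3], [0.4, 0.6]]

s = [1.0, 0.0]              # any starting distribution works
for _ in range(200):
    s = step(s, Q)

print([round(x, 4) for x in s])   # converges to (4/7, 3/7)
```

For this matrix the exact answer can be checked by hand: sQ = s gives 0.3 s_0 = 0.4 s_1, so s = (4/7, 3/7).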

Random Walk on Undirected Network
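For a random walk on a connected undirected network (at each step, move to a uniformly random neighbor), BH Chapter 11's key result is that the stationary probability of a node is proportional to its degree: s_i = d_i / sum_j d_j. A sketch verifying this on a small hypothetical graph:

```python
# Hypothetical undirected graph as an adjacency list (node -> neighbors).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

n = len(graph)
deg = {i: len(nbrs) for i, nbrs in graph.items()}
total = sum(deg.values())

# Random-walk transition matrix: move to a uniformly random neighbor.
Q = [[(1 / deg[i]) if j in graph[i] else 0.0 for j in range(n)]
     for i in range(n)]

# Candidate stationary distribution: s_i proportional to degree d_i.
s = [deg[i] / total for i in range(n)]

# Verify sQ = s entrywise.
sQ = [sum(s[i] * Q[i][j] for i in range(n)) for j in range(n)]
print(all(abs(a - b) < 1e-12 for a, b in zip(sQ, s)))   # True
```

The degrees here are (2, 3, 2, 1) with sum 8, so the stationary distribution is (2/8, 3/8, 2/8, 1/8) without solving any equations.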
