Section 10

Inequalities and Markov Chains (BH Chapter 10 & 11)

Statistical Inequalities

The main inequalities covered in BH Chapter 10:

  • Cauchy-Schwarz: $|E(XY)| \le \sqrt{E(X^2)E(Y^2)}$

  • Jensen: if $g$ is convex, then $E(g(X)) \ge g(E(X))$; if $g$ is concave, then $E(g(X)) \le g(E(X))$

  • Markov: $P(|X| \ge a) \le \frac{E|X|}{a}$ for any $a > 0$

  • Chebyshev: $P(|X - \mu| \ge a) \le \frac{\mathrm{Var}(X)}{a^2}$, where $\mu = E(X)$, for any $a > 0$

  • Chernoff: $P(X \ge a) \le \frac{E(e^{tX})}{e^{ta}}$ for any $a$ and any $t > 0$
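As a sanity check, here is a minimal sketch (assuming Python with numpy; the Exponential(1) sample is chosen purely for illustration) that compares empirical tail probabilities against the Markov and Chebyshev bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)  # for Exp(1): E|X| = 1, Var(X) = 1
a = 3.0

# Markov: P(|X| >= a) <= E|X| / a
print("P(X >= 3) ~", np.mean(x >= a), "<= Markov bound", np.mean(np.abs(x)) / a)

# Chebyshev: P(|X - mu| >= a) <= Var(X) / a^2
mu = x.mean()
print("P(|X - mu| >= 3) ~", np.mean(np.abs(x - mu) >= a),
      "<= Chebyshev bound", x.var() / a**2)
```

Both empirical probabilities come out well under their bounds, which is typical: these inequalities are loose but hold for any distribution with the required moments.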

Markov Chains

A Markov Chain is a walk along a discrete state space $\{1, 2, \dots, M\}$. We let $X_t$ denote which element of the state space the walk is on at time $t$. The Markov Chain is the sequence of random variables denoting where the walk is at all points in time, $\{X_0, X_1, X_2, \dots\}$. Each $X_t$ takes on values in the state space, so if $X_1 = 3$, then at time 1 we are at state 3.

What makes such a sequence of random variables a Markov Chain is the Markov Property, which says that to predict where the chain will be at a future time, you only need to use the present state, not any past information. In other words, given the present, the future and past are conditionally independent.

Mathematically, this says:

$$P(X_{n+1} = j \mid X_0 = i_0, X_1 = i_1, \dots, X_n = i_n) = P(X_{n+1} = j \mid X_n = i_n)$$

In words: given that my history of states has been $i_0, i_1, \dots, i_n$, the distribution of my next state doesn't depend on any of that history besides $i_n$, the most recent state.
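To make this concrete, here is a minimal sketch (assuming Python with numpy; the transition matrix is made up for illustration) that simulates a chain. The key point is that each new state is sampled using only the current state:

```python
import numpy as np

# Transition probabilities for a 3-state chain (each row sums to 1).
Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def simulate(Q, x0, n_steps, seed=0):
    """Walk n_steps starting from state x0."""
    rng = np.random.default_rng(seed)
    states = [x0]
    for _ in range(n_steps):
        # Markov Property: the next state depends only on states[-1],
        # not on any earlier history.
        states.append(rng.choice(len(Q), p=Q[states[-1]]))
    return states

print(simulate(Q, x0=0, n_steps=10))
```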

State Properties

A state is either recurrent or transient.

  • If you start at a Recurrent State, then with probability 1 you will return to that state at some point in the future. You can leave, but you'll always come back eventually.

  • Otherwise you are at a Transient State: there is some positive probability that once you leave, you will never return.

A state is either periodic or aperiodic.

  • If you start at a Periodic State of period $k$, then the GCD of all of the possible numbers of steps it could take to return to that state is $k$, which must be greater than 1.

  • Otherwise you are at an Aperiodic State: the GCD of all of the possible numbers of steps it could take to return is 1.

Transition Matrix

Element $q_{ij}$ of the square transition matrix $Q$ is the probability that the chain goes from state $i$ to state $j$ in one step, or more formally:

$$q_{ij} = P(X_{n+1} = j \mid X_n = i)$$

To find the probability that the chain goes from state $i$ to state $j$ in $m$ steps, take the $(i, j)$ entry of $Q^m$:

$$q^{(m)}_{ij} = P(X_{n+m} = j \mid X_n = i)$$

If $X_0$ is distributed according to the row-vector PMF $\vec{p}$ (i.e. $p_j = P(X_0 = j)$), then the marginal PMF of $X_n$ is $\vec{p}Q^n$.
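A minimal sketch of both facts, assuming Python with numpy and reusing the made-up $Q$ from above: the $m$-step probabilities are entries of $Q^m$ (via np.linalg.matrix_power), and the marginal PMF of $X_n$ is the row vector $\vec{p}Q^n$:

```python
import numpy as np

Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# P(X_{n+5} = j | X_n = i) for all i, j: the (i, j) entry of Q^5.
Q5 = np.linalg.matrix_power(Q, 5)
print("5-step transition from state 0 to state 2:", Q5[0, 2])

# If X_0 ~ p (a row vector), then X_5 ~ p Q^5.
p = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
print("marginal PMF of X_5:", p @ Q5)
```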

Chain Properties

A chain is irreducible if you can get from any state to any other state in some number of steps. An irreducible chain must have all of its states recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period. Both properties can be checked directly from $Q$, as in the sketch below.
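A minimal sketch of both checks, assuming Python with numpy (the helper names are illustrative): a finite chain is irreducible iff the boolean reachability matrix $(I + Q)^{M-1}$ is entrywise positive, and the period of state $i$ is the GCD of the walk lengths $n$ with $(Q^n)_{ii} > 0$, truncated here at a fixed horizon as an approximation:

```python
import numpy as np
from math import gcd

def is_irreducible(Q):
    """True iff every state can reach every other state."""
    M = len(Q)
    reach = np.linalg.matrix_power(np.eye(M) + (Q > 0), M - 1)
    return bool(np.all(reach > 0))

def period(Q, i, horizon=100):
    """GCD of return lengths n <= horizon with (Q^n)_{ii} > 0.

    Truncating at a horizon is a heuristic; the true period is the
    GCD over all possible return lengths."""
    g, Qn = 0, np.eye(len(Q))
    for n in range(1, horizon + 1):
        Qn = Qn @ Q
        if Qn[i, i] > 0:
            g = gcd(g, n)
    return g

# A deterministic 3-cycle: irreducible, and every state has period 3.
C = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(is_irreducible(C), [period(C, i) for i in range(3)])
```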

A chain is reversible with respect to $\vec{s}$ if $s_i q_{ij} = s_j q_{ji}$ for all $i, j$. A reversible chain running according to $\vec{s}$ is indistinguishable whether it is running forwards in time or backwards in time. Examples of reversible chains include random walks on undirected networks, or any chain with $q_{ij} = q_{ji}$, where the Markov chain would be stationary with respect to $\vec{s} = (\frac{1}{M}, \frac{1}{M}, \dots, \frac{1}{M})$.

Reversibility Condition Implies Stationarity - If you have a PMF $\vec{s}$ on a Markov chain with transition matrix $Q$, then $s_i q_{ij} = s_j q_{ji}$ for all $i, j$ implies that $\vec{s}$ is stationary.
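This is easy to verify numerically. A minimal sketch, assuming Python with numpy and a made-up symmetric $Q$ (so detailed balance holds with the uniform distribution), that checks $s_i q_{ij} = s_j q_{ji}$ and then confirms $\vec{s}Q = \vec{s}$:

```python
import numpy as np

# Symmetric transition matrix (q_ij = q_ji), rows sum to 1.
Q = np.array([[0.5, 0.2, 0.3],
              [0.2, 0.7, 0.1],
              [0.3, 0.1, 0.6]])
s = np.full(3, 1 / 3)   # uniform PMF

balance = np.allclose(s[:, None] * Q, s[None, :] * Q.T)  # s_i q_ij == s_j q_ji
stationary = np.allclose(s @ Q, s)                       # s Q == s
print(balance, stationary)  # True True
```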

Stationary Distribution

Let us say that the vector $\vec{p} = (p_1, p_2, \dots, p_M)$ is a valid PMF of where the Markov Chain is at a certain time. We will call this vector the stationary distribution, $\vec{s}$, if it satisfies $\vec{s}Q = \vec{s}$.

As a consequence, if $X_t$ has the stationary distribution, then all future $X_{t+1}, X_{t+2}, \dots$ also have the stationary distribution.

  • If a Markov Chain is irreducible, then it has a unique stationary distribution. In addition, all entries of this stationary distribution are positive (which also follows from the fact that all states are recurrent).

    • Example: In the Gambler's Ruin problem, which is not irreducible, the chain ultimately gets stuck with one's money at either $0$ or $N$ forever.

  • If a Markov Chain is irreducible and aperiodic, then it has a unique stationary distribution $\vec{s}$ and the chain converges to it: $\lim_{n \to \infty} P(X_n = i) = s_i$.

    • Example: Imagine a Markov chain which is just a deterministic cycle, and hence is periodic. Then, depending on where we start, $P(X_n = i)$ will be either 0 or 1 deterministically, and so won't converge to the stationary distribution, which is uniform across all nodes in the cycle.

  • If a Markov Chain is irreducible (so its stationary distribution $\vec{s}$ exists and is unique), then the expected number of steps to return to state $i$ starting from $i$ is $1/s_i$, where $s_i$ is the long-run probability of the chain being at state $i$. To find the stationary distribution, solve $\vec{s}Q = \vec{s}$, or equivalently $(Q^T - I)\vec{s}^T = 0$, as in the sketch below. The stationary distribution is uniform if the columns of $Q$ sum to 1.
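A minimal sketch, assuming Python with numpy: since $\vec{s}Q = \vec{s}$ is equivalent to $Q^T \vec{s}^T = \vec{s}^T$, the stationary distribution is the eigenvector of $Q^T$ with eigenvalue 1, normalized to sum to 1:

```python
import numpy as np

Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# sQ = s  <=>  Q^T s^T = s^T: find the eigenvector of Q^T with eigenvalue 1.
vals, vecs = np.linalg.eig(Q.T)
s = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
s = s / s.sum()   # normalize to a PMF (also fixes the eigenvector's sign)

print("stationary distribution:", s)
print("check sQ = s:", np.allclose(s @ Q, s))
print("expected return times 1/s_i:", 1 / s)
```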

Random Walk on Undirected Network

If you have a certain number of nodes with undirected edges between them, and at each step the chain picks one of the edges attached to its current node uniformly at random and moves along it, then this is a random walk on an undirected network. The stationary distribution can be easily calculated. Let $d_i$ be the degree of the $i$-th node, meaning the number of edges connected to that node. Then we have:

$$s_i = \frac{d_i}{\sum_j d_j}$$
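A minimal sketch, assuming Python with numpy and a made-up 4-node graph (a triangle with one pendant node): build the walk's transition matrix by normalizing each row of the adjacency matrix, then check that $s_i = d_i / \sum_j d_j$ is stationary:

```python
import numpy as np

# Adjacency matrix of an undirected graph: triangle {0, 1, 2} plus node 3
# attached to node 2. Symmetric, since edges are undirected.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = A.sum(axis=1)      # degrees d_i = [2, 2, 3, 1]
Q = A / d[:, None]     # from node i, pick one of its d_i edges uniformly
s = d / d.sum()        # claimed stationary distribution

print("s =", s)
print("stationary:", np.allclose(s @ Q, s))  # True
```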
