
Section 10

Inequalities and Markov Chains (BH Chapter 10 & 11)

Statistical Inequalities
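
The key inequalities from BH Chapter 10, stated here in their standard forms (see the textbook for the precise conditions and proofs):

  • Cauchy-Schwarz: for any r.v.s $X$ and $Y$ with finite variances, $|E(XY)| \le \sqrt{E(X^2)E(Y^2)}$.

  • Jensen: if $g$ is a convex function, then $E(g(X)) \ge g(E(X))$; if $g$ is concave, the inequality flips.

  • Markov: for any r.v. $X$ and constant $a > 0$, $P(|X| \ge a) \le \frac{E|X|}{a}$.

  • Chebyshev: if $X$ has mean $\mu$ and variance $\sigma^2$, then for any $a > 0$, $P(|X - \mu| \ge a) \le \frac{\sigma^2}{a^2}$.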

Markov Chains

A Markov Chain is a walk along a discrete state space $\{1, 2, \dots, M\}$. We let $X_t$ denote which element of the state space the walk is on at time $t$. The Markov Chain is the sequence of random variables denoting where the walk is at all points in time, $\{X_0, X_1, X_2, \dots\}$. Each $X_t$ takes on values in the state space, so if $X_1 = 3$, then at time 1 we are at state 3.

What makes such a sequence of random variables a Markov Chain is the Markov Property, which says that to predict where the chain will be at a future time, you only need to use the present state, not any past information. In other words, given the present, the future and the past are conditionally independent.

Mathematically, this says:

$$P(X_{n+1} = j \mid X_0 = i_0, X_1 = i_1, \dots, X_n = i_n) = P(X_{n+1} = j \mid X_n = i_n)$$

In words: given that the history of states has been $i_0, i_1, \dots, i_n$, the distribution of the next state depends on that history only through $i_n$, the most recent state.
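
To make this concrete, here is a minimal simulation sketch in Python/NumPy (the 3-state chain and its transition probabilities are made up for illustration). The Markov Property shows up in the sampling loop: the next state is drawn using only the current state, never the earlier history.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical 3-state chain: row i of next_dist is the conditional
# distribution of the next state given that the current state is i.
next_dist = np.array([[0.5, 0.3, 0.2],
                      [0.1, 0.6, 0.3],
                      [0.4, 0.4, 0.2]])

def simulate(n_steps, x0=0):
    """Simulate X_0, X_1, ..., X_n for the chain above."""
    path = [x0]
    for _ in range(n_steps):
        # Markov Property: the draw depends only on the current state.
        path.append(rng.choice(3, p=next_dist[path[-1]]))
    return path

print(simulate(10))
```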

State Properties

A state is either recurrent or transient.

  • If you start at a Recurrent State, then you will always return back to that state at some point in the future. You can leave, but you'll always come back eventually.

  • Otherwise you are at a Transient State: there is some positive probability that once you leave, you will never return.

A state is either periodic or aperiodic.

  • If you start at a Periodic State of period $k$, then the GCD of all of the possible numbers of steps it could take to return back is $k$ (which is $> 1$).

  • Otherwise you are at an Aperiodic State: the GCD of all of the possible numbers of steps it could take to return back is 1.
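
As a rough sketch of the period computation (this uses the transition matrix $Q$ defined in the next section, though only the pattern of zero versus nonzero entries matters): state $i$ can return to itself in exactly $m$ steps iff the $(i, i)$ entry of the $m$-step reachability matrix is nonzero, and the period is the GCD of all such $m$.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(Q, i, horizon=100):
    """GCD of all m <= horizon for which a return i -> i in exactly
    m steps is possible (assumes at least one return can happen)."""
    A = (np.asarray(Q) > 0).astype(int)  # which one-step moves are possible
    Am = np.eye(len(A), dtype=int)
    return_times = []
    for m in range(1, horizon + 1):
        Am = ((Am @ A) > 0).astype(int)  # m-step reachability
        if Am[i, i]:
            return_times.append(m)
    return reduce(gcd, return_times)

# A deterministic 3-cycle 0 -> 1 -> 2 -> 0: every return takes a
# multiple of 3 steps, so each state is periodic with period 3.
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]
print(period(cycle, 0))  # 3
```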

Transition Matrix

Element $q_{ij}$ of the square transition matrix $Q$ is the probability that the chain goes from state $i$ to state $j$ in one step, or more formally:

$$q_{ij} = P(X_{n+1} = j \mid X_n = i)$$

To find the probability that the chain goes from state $i$ to state $j$ in $m$ steps, take the $(i, j)$-th element of $Q^m$:

$$q^{(m)}_{ij} = P(X_{n+m} = j \mid X_n = i)$$

If $X_0$ is distributed according to the row vector PMF $\vec{p}$ (i.e., $p_j = P(X_0 = j)$), then the marginal PMF of $X_n$ is $\vec{p}\,Q^n$.
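
Both formulas translate directly into matrix powers; here is a quick NumPy sketch, reusing the made-up 3-state chain from above as $Q$:

```python
import numpy as np

Q = np.array([[0.5, 0.3, 0.2],   # the hypothetical chain from earlier
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# m-step transition probabilities: q^(m)_ij is the (i, j) entry of Q^m.
Qm = np.linalg.matrix_power(Q, 4)
print(Qm[0, 2])                   # P(X_{n+4} = 2 | X_n = 0)

# Marginal PMF: if X_0 has row-vector PMF p, then X_n has PMF p Q^n.
p = np.array([1.0, 0.0, 0.0])     # start deterministically at state 0
print(p @ np.linalg.matrix_power(Q, 10))  # PMF of X_10
```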

Chain Properties

A chain is irreducible if you can get from any state to any other state in some number of steps. An irreducible chain must have all of its states recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period.

A chain is reversible with respect to $\vec{s}$ if $s_i q_{ij} = s_j q_{ji}$ for all $i, j$. A reversible chain run from $\vec{s}$ is indistinguishable whether it is running forwards in time or backwards in time. Examples of reversible chains include random walks on undirected networks, and any chain with a symmetric transition matrix ($q_{ij} = q_{ji}$ for all $i, j$), which is stationary with respect to the uniform distribution $\vec{s} = \left(\frac{1}{M}, \frac{1}{M}, \dots, \frac{1}{M}\right)$.

Reversibility Condition Implies Stationarity: if $\vec{s}$ is a PMF and $s_i q_{ij} = s_j q_{ji}$ for all $i, j$ (where $Q$ is the chain's transition matrix), then $\vec{s}$ is stationary. This follows because $(\vec{s}Q)_j = \sum_i s_i q_{ij} = \sum_i s_j q_{ji} = s_j \sum_i q_{ji} = s_j$.
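
A small numerical check of this implication (the symmetric transition matrix below is invented for the example; since $q_{ij} = q_{ji}$, the chain is reversible with respect to the uniform PMF):

```python
import numpy as np

# Symmetric transition matrix, so detailed balance holds with uniform s.
Q = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
s = np.full(3, 1/3)

D = s[:, None] * Q          # D[i, j] = s_i * q_ij
assert np.allclose(D, D.T)  # reversibility: s_i q_ij = s_j q_ji
print(s @ Q)                # stationarity follows: s Q = s
```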

Stationary Distribution

Suppose the vector $\vec{p} = (p_1, p_2, \dots, p_M)$ is a valid PMF for where the Markov Chain is at a certain time. We call this vector the stationary distribution, $\vec{s}$, if it satisfies $\vec{s}Q = \vec{s}$.

As a consequence, if $X_t$ has the stationary distribution, then all future states $X_{t+1}, X_{t+2}, \dots$ also have the stationary distribution.

  • If a Markov Chain is irreducible, then it has a unique stationary distribution. In addition, all entries of this stationary distribution are non-zero (which could have been inferred from the fact that all states are recurrent).

    • Example: In the Gambler's Ruin problem, which is not irreducible, what ultimately happens to the chain can either be that one's money is always $0$ or always $N$.

  • If a Markov Chain is irreducible and aperiodic, then it has a unique stationary distribution $\vec{s}$ and the chain converges to the stationary distribution: $\lim_{n \to \infty} P(X_n = i) = s_i$.

    • Example: Imagine a Markov chain which is just a cycle, and hence is periodic. Then, depending on where we start, $P(X_n = i)$ will be either 0 or 1 deterministically, and surely won't converge to the stationary distribution, which is uniform across all nodes in the cycle.

  • If a Markov Chain is irreducible and aperiodic, so the stationary distribution exists and is unique, then the expected number of steps to return back to state $i$ starting from $i$ is $1/s_i$, where $s_i$ is the long-run probability of the chain being at state $i$. To solve for the stationary distribution, you can solve $\vec{s}Q = \vec{s}$, or equivalently $(Q^T - I)\vec{s}^T = 0$, together with the normalization $\sum_i s_i = 1$ (see the sketch below). The stationary distribution is uniform if the columns of $Q$ sum to 1.
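
A sketch of that linear solve, appending the normalization row $\sum_i s_i = 1$ to the system $(Q^T - I)\vec{s}^T = 0$ (again using the made-up chain from earlier):

```python
import numpy as np

Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
M = len(Q)

# Stack (Q^T - I) s^T = 0 with the normalization row sum(s) = 1.
A = np.vstack([Q.T - np.eye(M), np.ones(M)])
b = np.append(np.zeros(M), 1.0)
s, *_ = np.linalg.lstsq(A, b, rcond=None)

print(s)        # the stationary distribution
print(s @ Q)    # equals s again, up to floating-point error
print(1 / s)    # expected number of steps to return to each state
```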

Random Walk on Undirected Network

If you have a certain number of nodes with undirected edges between them, and at each step the chain picks one of the current node's edges uniformly at random and moves along it, then this is a random walk on an undirected network. The stationary distribution can be easily calculated. Let $d_i$ be the degree of the $i$-th node, meaning the number of edges connected to that node. Then we have:

$$s_i = \frac{d_i}{\sum_j d_j}$$
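
A short sketch confirming this on a small made-up network of four nodes:

```python
import numpy as np

# Adjacency matrix of a hypothetical undirected network with
# edges 0-1, 0-2, 1-2, and 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

d = A.sum(axis=1)   # degrees d_i -> [2, 2, 3, 1]
s = d / d.sum()     # stationary distribution d_i / sum_j d_j
print(s)            # [0.25, 0.25, 0.375, 0.125]

# Sanity check against the walk's transition matrix q_ij = A_ij / d_i
# (pick one of the current node's edges uniformly at random).
Q = A / d[:, None]
assert np.allclose(s @ Q, s)
```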
