Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables with mean $\mu$ and variance $\sigma^2$.

Q

Give a value of $n$ (a specific number) that will ensure that there is at least a 99% chance that the sample mean will be within 2 standard deviations of the true mean $\mu$.

A

Let $\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$ be the sample mean. Saying that there is at least a 99% chance that $|\bar{X}_n - \mu| < 2 \sigma$ is the same as saying that there is at most a 1% chance that $|\bar{X}_n - \mu| \geq 2 \sigma$. Thus, we want to find an $n$ such that:

$P\left( |\bar{X}_n - \mu | \geq 2 \sigma \right) \leq 0.01$

Applying Chebyshev's inequality, we get the following:

$P(|\bar{X}_n - \mu| \geq 2 \sigma) \leq \frac{\mathrm{Var}(\bar{X}_n)}{(2 \sigma)^2} = \frac{\sigma^2 / n}{4 \sigma^2} = \frac{1}{4n}$

Therefore, choosing $n = 25$ gives $\frac{1}{4n} = \frac{1}{100} = 0.01$, so the probability of deviating from $\mu$ by $2\sigma$ or more is at most 1%, as desired.
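As a sanity check, a short simulation sketch can estimate this probability empirically. Chebyshev's bound is distribution-free, so the standard normal used below is just one arbitrary i.i.d. choice with the required moments:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 25                  # sample size from the Chebyshev bound
trials = 100_000
mu, sigma = 0.0, 1.0    # illustrative choice; any distribution with these moments works

# Draw `trials` samples of size n and compute each sample mean
samples = rng.normal(mu, sigma, size=(trials, n))
xbar = samples.mean(axis=1)

# Empirical probability that the sample mean lies within 2*sigma of mu
p_within = np.mean(np.abs(xbar - mu) < 2 * sigma)
print(p_within)  # at least 0.99 (Chebyshev is a loose bound, so typically much higher)
```

In practice the observed probability is far above 99%, which reflects how conservative Chebyshev's inequality is.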

Q

Explain why a Gamma random variable with parameters $(n, \lambda)$ is approximately Normal when n is large.

A

Let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. $Expo(\lambda)$ and let $X_n = Y_1 + Y_2 + \cdots + Y_n$. Then $X_n \sim Gamma(n, \lambda)$, since a sum of $n$ i.i.d. $Expo(\lambda)$ random variables has the $Gamma(n, \lambda)$ distribution. By the central limit theorem, since $X_n$ is a sum of i.i.d. random variables with finite mean and variance, the standardized version of $X_n$ converges to a standard Normal distribution as $n \to \infty$; hence $X_n$ is approximately Normal when $n$ is large.
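This representation is easy to check numerically. The sketch below (with arbitrary illustrative parameters $n = 200$, $\lambda = 2$) builds $X_n$ by summing exponentials and compares the empirical moments to the $Gamma(n, \lambda)$ mean $n/\lambda$ and variance $n/\lambda^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

n, lam = 200, 2.0   # illustrative parameters
trials = 50_000

# X_n built as the sum of n i.i.d. Expo(lam) draws; numpy parameterizes
# the exponential by its scale 1/lam rather than its rate lam
x = rng.exponential(scale=1.0 / lam, size=(trials, n)).sum(axis=1)

# Compare empirical mean/variance with the Gamma(n, lam) values
print(x.mean(), n / lam)      # both near 100
print(x.var(), n / lam**2)    # both near 50
```

A histogram of `x` would also look visibly bell-shaped at this $n$, in line with the CLT argument above.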

Q

Let $X_n \sim Gamma(n, \lambda)$. Determine $a$ and $b$ such that $\frac{X_n - a}{b} \to N(0,1)$ as $n \to \infty$.

A

Since $X_n$ is a sum of $n$ i.i.d. $Expo(\lambda)$ random variables, each with mean $\frac{1}{\lambda}$ and variance $\frac{1}{\lambda^2}$, we have $E(X_n) = \frac{n}{\lambda}$ and $\mathrm{Var}(X_n) = \frac{n}{\lambda^2}$, so the central limit theorem tells us that

$X_n \approx N\left(\frac{n}{\lambda}, \frac{n}{\lambda^2} \right)$

for large $n$. To convert this to a standard normal ($N(0,1)$), all we need to do is subtract the mean and divide by the standard deviation. Thus, taking $a = \frac{n}{\lambda}$ and $b = \frac{\sqrt{n}}{\lambda}$,

$\frac{X_n - \frac{n}{\lambda}}{\frac{\sqrt{n}}{\lambda}} \to N(0,1)$

as $n \to \infty$.
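A simulation sketch (illustrative parameters $n = 500$, $\lambda = 1.5$) can confirm that standardizing with $a = n/\lambda$ and $b = \sqrt{n}/\lambda$ produces something close to $N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(2)

n, lam = 500, 1.5   # illustrative parameters
trials = 100_000

# Gamma(n, lam) draws; numpy's `scale` argument is 1/lam (rate-to-scale conversion)
x = rng.gamma(shape=n, scale=1.0 / lam, size=trials)

# Standardize with a = n/lam and b = sqrt(n)/lam
z = (x - n / lam) / (np.sqrt(n) / lam)

print(z.mean())              # near 0
print(z.std())               # near 1
print(np.mean(np.abs(z) < 1))  # near Phi(1) - Phi(-1), about 0.68
```

The empirical mean, standard deviation, and central coverage all match the standard normal closely at this $n$.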

Suppose $X_n$ is a two-state Markov chain with transition matrix

$Q =
\left(\begin{array}{cc}
1-\alpha & \alpha \\
\beta & 1-\beta
\end{array}\right)$

The rows and columns are indexed 0,1 such that $q_{0,0}=1-\alpha, q_{0,1}=\alpha, q_{1,0}=\beta, q_{1,1}=1-\beta$.

Q

Find the stationary distribution $\vec{s} = (s_0, s_1)$ of $X_n$ by solving $\vec{s} Q = \vec{s}$.

A

By solving $\vec{s} Q = \vec{s}$, we have that

$s_0 = s_0 (1 - \alpha) + s_1 \beta \textrm{ and } s_1 = s_0 \alpha + s_1 (1 - \beta)$

Both equations reduce to $s_0 \alpha = s_1 \beta$. Combining this with the normalization condition $s_0 + s_1 = 1$, it follows that

$\vec{s} = \left({\frac{\beta}{\alpha + \beta}, \frac{\alpha}{\alpha + \beta}}\right)$
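The sketch below (with arbitrary illustrative values $\alpha = 0.3$, $\beta = 0.5$) verifies $\vec{s} Q = \vec{s}$ numerically:

```python
import numpy as np

alpha, beta = 0.3, 0.5   # illustrative transition probabilities

Q = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Claimed stationary distribution (beta, alpha) / (alpha + beta)
s = np.array([beta, alpha]) / (alpha + beta)

print(s)      # [0.625 0.375]
print(s @ Q)  # equals s, confirming s Q = s
```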

Q

Show that this Markov Chain is reversible under the stationary distribution found in the previous question.

A

To show that the chain is reversible with respect to $\vec{s}$, we need to verify the detailed balance condition $s_i q_{ij} = s_j q_{ji}$ for all states $i, j$. The condition holds trivially when $i = j$, so for this two-state chain it suffices to show that $s_0 q_{01} = s_1 q_{10}$. We have that

$s_0 q_{01} = \frac{\alpha\beta}{\alpha + \beta} = s_1 q_{10}$

which satisfies the reversibility condition and, since reversibility with respect to $\vec{s}$ implies that $\vec{s}$ is stationary, also confirms the stationary distribution from the previous question.
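The detailed balance check is a one-liner to verify numerically (again with the illustrative values $\alpha = 0.3$, $\beta = 0.5$):

```python
alpha, beta = 0.3, 0.5   # illustrative transition probabilities

s0 = beta / (alpha + beta)
s1 = alpha / (alpha + beta)

# Detailed balance across the only pair of distinct states
lhs = s0 * alpha   # s_0 q_{01}
rhs = s1 * beta    # s_1 q_{10}
print(lhs, rhs)    # both equal alpha*beta/(alpha+beta) = 0.1875
```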

Q

Let $Z_n = (X_{n-1}, X_n)$. Is $Z_n$ a Markov chain? If so, what are the states and transition matrix?

A

Yes, $Z_n$ is a Markov chain: conditional on $Z_n = (X_{n-1}, X_n)$, $Z_{n+1} = (X_n, X_{n+1})$ is independent of $Z_{n-1}$. This is because the first component of $Z_{n+1}$ is determined by $Z_n$, and the second component $X_{n+1}$ depends only on $X_n$, since $X_n$ is a Markov chain.

The states are given as $\{ (0, 0), (0, 1), (1, 0), (1, 1) \}$. The transition matrix is given as

$Q=
\begin{array}{ccccc}
& (0,0) & (0,1) & (1,0) & (1,1) \\
(0,0) & 1-\alpha & \alpha & 0 & 0 \\
(0,1) & 0 & 0 & \beta & 1-\beta \\
(1,0) & 1-\alpha & \alpha & 0 & 0 \\
(1,1) & 0 & 0 & \beta & 1-\beta
\end{array}$
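A simulation sketch (illustrative $\alpha = 0.3$, $\beta = 0.5$) can confirm this transition matrix empirically by encoding each pair $(X_{n-1}, X_n)$ as a single state $2X_{n-1} + X_n \in \{0,1,2,3\}$:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 0.3, 0.5   # illustrative transition probabilities
steps = 200_000

# Simulate the two-state chain X_n
x = np.empty(steps, dtype=int)
x[0] = 0
for t in range(1, steps):
    p_one = alpha if x[t - 1] == 0 else 1 - beta   # P(X_t = 1 | X_{t-1})
    x[t] = rng.random() < p_one

# Z_n = (X_{n-1}, X_n), encoded as 2*X_{n-1} + X_n
z = 2 * x[:-1] + x[1:]

# Empirical transition matrix of Z
counts = np.zeros((4, 4))
for a, b in zip(z[:-1], z[1:]):
    counts[a, b] += 1
emp = counts / counts.sum(axis=1, keepdims=True)

# Rows approx (1-a, a, 0, 0), (0, 0, b, 1-b), (1-a, a, 0, 0), (0, 0, b, 1-b)
print(np.round(emp, 2))
```

The rows indexed by pairs ending in $0$ match row $0$ of the original $Q$, and the rows indexed by pairs ending in $1$ match row $1$, exactly as in the matrix above.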