Section 4

Expectations, Variance, and the Fundamental Bridge (BH Chapter 4)

The expected value (also called the expectation or mean) of a random variable can be thought of as the "weighted average" of its possible outcomes. Mathematically, if $x_1, x_2, x_3, \ldots$ are all of the possible values that $X$ can take, the expected value of $X$ is:

$$E(X) = \sum_i x_i P(X = x_i)$$
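
For example (a fair six-sided die, used here purely as an illustration): each of the values $1$ through $6$ has probability $1/6$, so

$$E(X) = \sum_{i=1}^{6} i \cdot \frac{1}{6} = \frac{21}{6} = 3.5$$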

Linearity of Expectation

The most important property of expected value is Linearity of Expectation. For any two random variables $X$ and $Y$ and any constants $a$, $b$, and $c$, the following holds:

$$E(aX + bY + c) = aE(X) + bE(Y) + c$$

The above is true regardless of whether $X$ and $Y$ are independent.
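
As a quick empirical sanity check, here is a minimal simulation sketch (assuming NumPy is available; the coefficients and the deliberately dependent choice $Y = 7 - X$ are illustrative, not from the original notes):

```python
import numpy as np

# Sketch: estimate both sides of E(aX + bY + c) = aE(X) + bE(Y) + c,
# using a Y that is completely dependent on X.
rng = np.random.default_rng(0)
x = rng.integers(1, 7, size=1_000_000)  # fair die rolls
y = 7 - x                               # perfectly dependent on X

a, b, c = 2, 3, 5
lhs = np.mean(a * x + b * y + c)            # estimates E(aX + bY + c)
rhs = a * np.mean(x) + b * np.mean(y) + c   # aE(X) + bE(Y) + c
print(lhs, rhs)  # both ≈ 2*3.5 + 3*3.5 + 5 = 22.5, despite the dependence
```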

Conditional Expectations

Conditional distributions are still distributions, so applying the definition of expectation to the conditional distribution gives:

$$E(X \mid A) = \sum_i x_i P(X = x_i \mid A)$$
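
Continuing the die illustration (not from the original notes): if $A$ is the event that the roll is even, each of $2, 4, 6$ has conditional probability $1/3$, so

$$E(X \mid A) = 2 \cdot \frac{1}{3} + 4 \cdot \frac{1}{3} + 6 \cdot \frac{1}{3} = 4$$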

Variance

Variance tells us how spread out the distribution of a random variable is. It is defined as

$$Var(X) = E\big[(X - E(X))^2\big] = E(X^2) - (E(X))^2$$
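
For the fair die (again an illustration): $E(X^2) = \frac{1 + 4 + 9 + 16 + 25 + 36}{6} = \frac{91}{6}$, so

$$Var(X) = \frac{91}{6} - (3.5)^2 = \frac{35}{12} \approx 2.92$$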

Properties of Variance

  • $Var(cX) = c^2 Var(X)$ for any constant $c$

  • $Var(X \pm Y) = Var(X) + Var(Y)$ if $X$ and $Y$ are independent (both properties are checked numerically in the sketch below)
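
A simulation sketch for both properties (assuming NumPy; the constant $c = 3$ and the die-roll distributions are illustrative choices):

```python
import numpy as np

# Sketch: check Var(cX) = c^2 Var(X) and Var(X ± Y) = Var(X) + Var(Y)
# for independent X and Y, using simulated fair die rolls.
rng = np.random.default_rng(0)
x = rng.integers(1, 7, size=1_000_000)
y = rng.integers(1, 7, size=1_000_000)  # independent of x

c = 3
print(np.var(c * x), c**2 * np.var(x))       # both ≈ 9 * 35/12 = 26.25
print(np.var(x + y), np.var(x) + np.var(y))  # both ≈ 2 * 35/12 ≈ 5.83
print(np.var(x - y), np.var(x) + np.var(y))  # same for the difference
```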

Indicator Random Variables

Indicator random variables are random variables whose value is $1$ when a particular event happens and $0$ when it does not. Let $I_A$ be the indicator random variable for the event $A$. Then, we have:

$$I_A = \begin{cases} 1 & A \text{ occurs} \\ 0 & A \text{ does not occur} \end{cases}$$

Suppose $P(A) = p$. Then $I_A \sim \text{Bern}(p)$, since $I_A$ equals $1$ with probability $p$ and $0$ with probability $1 - p$.

Properties of Indicators

  • $(I_A)^2 = I_A$, and more generally $(I_A)^k = I_A$ for any positive power $k$.

  • $I_{A^c} = 1 - I_A$

  • $I_{A \cap B} = I_A I_B$ is the indicator for the event $A \cap B$ (that is, $I_A I_B = 1$ if and only if both $A$ and $B$ occur, and $0$ otherwise)

  • $I_{A \cup B} = I_A + I_B - I_A I_B$ (these identities are verified numerically in the sketch below)
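
These identities can be checked directly by treating indicators as 0/1 arrays (a sketch assuming NumPy; the probabilities $0.3$ and $0.5$ are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: represent I_A and I_B as 0/1 arrays and verify the identities.
rng = np.random.default_rng(0)
n = 1_000_000
i_a = (rng.random(n) < 0.3).astype(int)  # indicator of an event A with P(A) = 0.3
i_b = (rng.random(n) < 0.5).astype(int)  # indicator of an independent event B

assert np.all(i_a**2 == i_a)                       # (I_A)^2 = I_A
assert np.all(1 - i_a == (i_a == 0).astype(int))   # I_{A^c} = 1 - I_A
i_union = i_a + i_b - i_a * i_b                    # I_{A ∪ B}
assert np.all(i_union == np.maximum(i_a, i_b))     # 1 exactly when A or B occurs
print("all indicator identities hold")
```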

The Fundamental Bridge

The fundamental bridge is the identity $E(I_A) = P(A)$. When we want to calculate the expected value of a complicated random variable (such as a count), we can often break it down into a sum of indicator random variables and then apply linearity of expectation. For example, if $X = I_1 + I_2 + \ldots + I_n$, where $I_j$ is the indicator of the event $A_j$, then:

$$E(X) = E(I_1) + E(I_2) + \ldots + E(I_n) = P(A_1) + P(A_2) + \ldots + P(A_n)$$
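
A classic application (the matching problem, included here as an illustrative example): let $X$ be the number of fixed points of a random permutation of $n$ items. With $I_j$ indicating that position $j$ is a fixed point, $E(I_j) = P(A_j) = 1/n$, so $E(X) = n \cdot \frac{1}{n} = 1$ for every $n$. A simulation sketch (assuming NumPy):

```python
import numpy as np

# Sketch: estimate E(X), the expected number of fixed points of a
# random permutation; the fundamental bridge predicts exactly 1.
rng = np.random.default_rng(0)
n, trials = 52, 100_000
counts = [np.sum(rng.permutation(n) == np.arange(n)) for _ in range(trials)]
print(np.mean(counts))  # close to 1, regardless of n
```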