5.1.5 Conditional Expectation (Revisited) and Conditional Variance

In Section 5.1.3, we briefly discussed conditional expectation. Here, we will discuss the properties of conditional expectation in more detail as they are quite useful in practice. We will also discuss conditional variance. An important concept here is that we interpret the conditional expectation as a random variable.

Conditional Expectation as a Function of a Random Variable:

Remember that the conditional expectation of X given that Y=y is given by
$$E[X|Y=y]=\sum_{x_i \in R_X} x_i P_{X|Y}(x_i|y).$$
Note that E[X|Y=y] depends on the value of y; as y changes, so does E[X|Y=y]. Thus, we can say E[X|Y=y] is a function of y, so let's write g(y)=E[X|Y=y].
Thus, we can think of g(y)=E[X|Y=y] as a function of the value of the random variable Y. We then write g(Y)=E[X|Y].
We use this notation to indicate that E[X|Y] is a random variable whose value equals g(y)=E[X|Y=y] when Y=y. Thus, if Y is a random variable with range $R_Y=\{y_1,y_2,\dots\}$, then E[X|Y] is also a random variable with
$$E[X|Y]=\begin{cases} E[X|Y=y_1] & \text{with probability } P(Y=y_1) \\ E[X|Y=y_2] & \text{with probability } P(Y=y_2) \\ \quad \vdots & \quad \vdots \end{cases}$$
Let's look at an example.
Example
Let X=aY+b. Then
$$E[X|Y=y]=E[aY+b|Y=y]=ay+b.$$
Here, we have g(y)=ay+b, and therefore,
$$E[X|Y]=aY+b,$$
which is a function of the random variable Y.


Since E[X|Y] is a random variable, we can find its PMF, CDF, variance, etc. Let's look at an example to better understand E[X|Y].

Example
Consider two random variables X and Y with the joint PMF given in Table 5.2. Let Z=E[X|Y].
  1. Find the marginal PMFs of X and Y.
  2. Find the conditional PMF of X given Y=0 and Y=1, i.e., find PX|Y(x|0) and PX|Y(x|1).
  3. Find the PMF of Z.
  4. Find EZ, and check that EZ=EX.
  5. Find Var(Z).

Table 5.2: Joint PMF of X and Y in Example 5.11

             Y=0      Y=1
   X=0       1/5      2/5
   X=1       2/5       0
  • Solution
      1. Using the table, we find
         $$P_X(0)=\frac{1}{5}+\frac{2}{5}=\frac{3}{5}, \quad P_X(1)=\frac{2}{5}+0=\frac{2}{5},$$
         $$P_Y(0)=\frac{1}{5}+\frac{2}{5}=\frac{3}{5}, \quad P_Y(1)=\frac{2}{5}+0=\frac{2}{5}.$$
        Thus, the marginal distributions of X and Y are both $Bernoulli\left(\frac{2}{5}\right)$. However, note that X and Y are not independent.
      2. We have
         $$P_{X|Y}(0|0)=\frac{P_{XY}(0,0)}{P_Y(0)}=\frac{1/5}{3/5}=\frac{1}{3}.$$
        Thus,
         $$P_{X|Y}(1|0)=1-\frac{1}{3}=\frac{2}{3}.$$
        We conclude that $X|Y=0 \; \sim \; Bernoulli\left(\frac{2}{3}\right)$.
        Similarly, we find
         $$P_{X|Y}(0|1)=1, \quad P_{X|Y}(1|1)=0.$$
        Thus, given Y=1, we always have X=0.
      3. We note that the random variable Y can take two values: 0 and 1. Thus, the random variable Z=E[X|Y] can take two values as it is a function of Y. Specifically,
         $$Z=E[X|Y]=\begin{cases} E[X|Y=0] & \text{if } Y=0 \\ E[X|Y=1] & \text{if } Y=1 \end{cases}$$
        Now, using the previous part, we have
         $$E[X|Y=0]=\frac{2}{3}, \quad E[X|Y=1]=0,$$
        and since $P(Y=0)=\frac{3}{5}$ and $P(Y=1)=\frac{2}{5}$, we conclude that
         $$Z=E[X|Y]=\begin{cases} \frac{2}{3} & \text{with probability } \frac{3}{5} \\ 0 & \text{with probability } \frac{2}{5} \end{cases}$$
        So we can write
         $$P_Z(z)=\begin{cases} \frac{3}{5} & \text{if } z=\frac{2}{3} \\ \frac{2}{5} & \text{if } z=0 \\ 0 & \text{otherwise} \end{cases}$$
      4. Now that we have found the PMF of Z, we can find its mean and variance. Specifically,
         $$E[Z]=\frac{2}{3} \cdot \frac{3}{5}+0 \cdot \frac{2}{5}=\frac{2}{5}.$$
        We also note that $E[X]=\frac{2}{5}$. Thus, here we have E[X]=E[Z]=E[E[X|Y]].
        In fact, as we will prove shortly, the above equality always holds. It is called the law of iterated expectations.
      5. To find Var(Z), we write
         $$Var(Z)=E[Z^2]-(E[Z])^2=E[Z^2]-\frac{4}{25},$$
        where
         $$E[Z^2]=\frac{4}{9} \cdot \frac{3}{5}+0 \cdot \frac{2}{5}=\frac{4}{15}.$$
        Thus,
         $$Var(Z)=\frac{4}{15}-\frac{4}{25}=\frac{8}{75}.$$
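
The numbers in this example are easy to double-check with a short computation. Below is a minimal Python sketch (not part of the original solution; the variable names are ours) that recomputes the PMF, mean, and variance of Z=E[X|Y] directly from the joint PMF in Table 5.2 using exact fractions.

```python
from fractions import Fraction as F

# Joint PMF from Table 5.2: pmf[(x, y)] = P(X=x, Y=y)
pmf = {(0, 0): F(1, 5), (0, 1): F(2, 5),
       (1, 0): F(2, 5), (1, 1): F(0)}

# Marginal PMF of Y
p_y = {y: sum(p for (_, yy), p in pmf.items() if yy == y) for y in (0, 1)}

# Z = E[X|Y] takes the value E[X|Y=y] with probability P(Y=y)
z_pmf = {}
for y in (0, 1):
    e_x_given_y = sum(x * p / p_y[y] for (x, yy), p in pmf.items() if yy == y)
    z_pmf[e_x_given_y] = z_pmf.get(e_x_given_y, F(0)) + p_y[y]

e_z = sum(z * p for z, p in z_pmf.items())
var_z = sum(z**2 * p for z, p in z_pmf.items()) - e_z**2

print(z_pmf)   # {2/3: 3/5, 0: 2/5}
print(e_z)     # 2/5, which equals E[X]
print(var_z)   # 8/75
```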


Example
Let X and Y be two random variables and g and h be two functions. Show that E[g(X)h(Y)|X]=g(X)E[h(Y)|X].
  • Solution
    • Note that E[g(X)h(Y)|X] is a random variable that is a function of X. In particular, if X=x, then E[g(X)h(Y)|X]=E[g(X)h(Y)|X=x]. Now, we can write
      $$E[g(X)h(Y)|X=x]=E[g(x)h(Y)|X=x]=g(x)E[h(Y)|X=x] \quad \text{(since $g(x)$ is a constant)}.$$
      Thinking of this as a function of the random variable X, it can be rewritten as E[g(X)h(Y)|X]=g(X)E[h(Y)|X]. This rule is sometimes called "taking out what is known." The idea is that, given X, g(X) is a known quantity, so it can be taken out of the conditional expectation.
    $$E[g(X)h(Y)|X]=g(X)E[h(Y)|X] \hspace{30pt} (5.6)$$
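
As a quick sanity check, Equation 5.6 can be verified by direct enumeration on any small discrete example. In the sketch below, the joint PMF and the functions g and h are made up purely for illustration; the check confirms that both sides of the identity agree for every value of x.

```python
from fractions import Fraction as F

# A made-up joint PMF: pmf[(x, y)] = P(X=x, Y=y)
pmf = {(1, 0): F(1, 6), (1, 2): F(2, 6), (2, 0): F(2, 6), (2, 2): F(1, 6)}

def g(x): return x**2      # an arbitrary function of X
def h(y): return y + 1     # an arbitrary function of Y

for x0 in {x for x, _ in pmf}:
    p_x0 = sum(p for (x, _), p in pmf.items() if x == x0)
    # Left side: E[g(X)h(Y) | X = x0]
    lhs = sum(g(x) * h(y) * p / p_x0 for (x, y), p in pmf.items() if x == x0)
    # Right side: g(x0) * E[h(Y) | X = x0]
    rhs = g(x0) * sum(h(y) * p / p_x0 for (x, y), p in pmf.items() if x == x0)
    assert lhs == rhs
print("E[g(X)h(Y)|X=x] = g(x) E[h(Y)|X=x] for every x in this example")
```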


Iterated Expectations:

Let us look again at the law of total probability for expectation. Assuming g(Y)=E[X|Y], we have
$$\begin{aligned}
E[X] &= \sum_{y_j \in R_Y} E[X|Y=y_j] P_Y(y_j) \\
&= \sum_{y_j \in R_Y} g(y_j) P_Y(y_j) \\
&= E[g(Y)] \quad \text{(by LOTUS, Equation 5.2)} \\
&= E[E[X|Y]].
\end{aligned}$$
Thus, we conclude
$$E[X]=E[E[X|Y]]. \hspace{30pt} (5.7)$$
This equation might look a little confusing at first, but it is just another way of writing the law of total expectation (Equation 5.4). To better understand it, let's solve Example 5.7 using this terminology. In that example, we want to find E[X]. We can write
$$\begin{aligned}
E[X] &= E[E[X|N]] \\
&= E[Np] \quad \text{(since $X|N \sim Binomial(N,p)$)} \\
&= pE[N] \\
&= p\lambda.
\end{aligned}$$
Equation 5.7 is called the law of iterated expectations. Since it is basically the same as Equation 5.4, it is also called the law of total expectation [3].
Law of Iterated Expectations: E[X]=E[E[X|Y]]
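
The law of iterated expectations is also easy to illustrate by simulation. The sketch below assumes, purely for illustration, that N ~ Poisson(λ) (so that E[N]=λ) and X|N ~ Binomial(N, p), matching the calculation above; the particular values of λ and p are arbitrary.

```python
import math
import random

lam, p, trials = 4.0, 0.3, 100_000
random.seed(0)

def poisson(lam):
    """Draw from Poisson(lam) via Knuth's multiplication method."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < L:
            return k
        k += 1

total = 0
for _ in range(trials):
    n = poisson(lam)                                     # outer randomness: N
    total += sum(random.random() < p for _ in range(n))  # X | N=n ~ Binomial(n, p)

print(total / trials)   # close to E[E[X|N]] = E[Np] = p * lam = 1.2
```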

Expectation for Independent Random Variables:

Note that if two random variables X and Y are independent, then the conditional PMF of X given Y will be the same as the marginal PMF of X, i.e., for any $x \in R_X$, we have $P_{X|Y}(x|y)=P_X(x)$.
Thus, for independent random variables, we have
$$E[X|Y=y]=\sum_{x \in R_X} x P_{X|Y}(x|y)=\sum_{x \in R_X} x P_X(x)=E[X].$$
Again, thinking of this as a random variable depending on Y, we obtain E[X|Y]=E[X], when X and Y are independent.
More generally, if X and Y are independent, then any function of X, say g(X), is also independent of Y; thus, E[g(X)|Y]=E[g(X)].
Remember that for independent random variables, $P_{XY}(x,y)=P_X(x)P_Y(y)$. From this, we can show that E[XY]=E[X]E[Y].

Lemma
If X and Y are independent, then E[XY]=E[X]E[Y]. Using LOTUS, we have
$$\begin{aligned}
E[XY] &= \sum_{x \in R_X} \sum_{y \in R_Y} xy \, P_{XY}(x,y) \\
&= \sum_{x \in R_X} \sum_{y \in R_Y} xy \, P_X(x) P_Y(y) \\
&= \left(\sum_{x \in R_X} x P_X(x)\right) \left(\sum_{y \in R_Y} y P_Y(y)\right) \\
&= E[X]E[Y].
\end{aligned}$$
Note that the converse is not true. That is, if the only thing that we know about X and Y is that E[XY]=E[X]E[Y], then X and Y may or may not be independent. Using essentially the same proof as above, we can show that if X and Y are independent, then E[g(X)h(Y)]=E[g(X)]E[h(Y)] for any functions $g:\mathbb{R} \to \mathbb{R}$ and $h:\mathbb{R} \to \mathbb{R}$.
If X and Y are independent random variables, then
  1. E[X|Y]=EX;
  2. E[g(X)|Y]=E[g(X)];
  3. E[XY]=EXEY;
  4. E[g(X)h(Y)]=E[g(X)]E[h(Y)].
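
The third property, for instance, can be confirmed by direct enumeration for any pair of independent discrete random variables. The marginal PMFs below are made up only to illustrate the computation.

```python
from fractions import Fraction as F

# Made-up marginal PMFs; independence means P_XY(x, y) = P_X(x) P_Y(y)
p_x = {0: F(1, 4), 1: F(3, 4)}
p_y = {1: F(1, 2), 3: F(1, 2)}

e_x = sum(x * p for x, p in p_x.items())
e_y = sum(y * p for y, p in p_y.items())
e_xy = sum(x * y * p_x[x] * p_y[y] for x in p_x for y in p_y)

print(e_xy, e_x * e_y)   # both equal 3/2
assert e_xy == e_x * e_y
```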

Conditional Variance:

Similar to the conditional expectation, we can define the conditional variance of X, Var(X|Y=y), which is the variance of X in the conditional space where we know Y=y. If we let $\mu_{X|Y}(y)=E[X|Y=y]$, then
$$\begin{aligned}
Var(X|Y=y) &= E\big[(X-\mu_{X|Y}(y))^2 \big| Y=y\big] \\
&= \sum_{x_i \in R_X} (x_i-\mu_{X|Y}(y))^2 P_{X|Y}(x_i|y) \\
&= E[X^2|Y=y]-\mu_{X|Y}(y)^2.
\end{aligned}$$
Note that Var(X|Y=y) is a function of y. Similar to our discussion on E[X|Y=y] and E[X|Y], we define Var(X|Y) as a function of the random variable Y. That is, Var(X|Y) is a random variable whose value equals Var(X|Y=y) whenever Y=y. Let us look at an example.
Example
Let X, Y, and Z=E[X|Y] be as in Example 5.11. Also, let V=Var(X|Y).
  1. Find the PMF of V.
  2. Find EV.
  3. Check that Var(X)=E(V)+Var(Z).
  • Solution
      In Example 5.11, we found out that $X, Y \sim Bernoulli\left(\frac{2}{5}\right)$. We also obtained
      $$X|Y=0 \; \sim \; Bernoulli\left(\frac{2}{3}\right), \quad P(X=0|Y=1)=1, \quad Var(Z)=\frac{8}{75}.$$
      1. To find the PMF of V, we note that V is a function of Y. Specifically,
         $$V=Var(X|Y)=\begin{cases} Var(X|Y=0) & \text{if } Y=0 \\ Var(X|Y=1) & \text{if } Y=1 \end{cases}$$
        Therefore,
         $$V=Var(X|Y)=\begin{cases} Var(X|Y=0) & \text{with probability } \frac{3}{5} \\ Var(X|Y=1) & \text{with probability } \frac{2}{5} \end{cases}$$
        Now, since $X|Y=0 \; \sim \; Bernoulli\left(\frac{2}{3}\right)$, we have
         $$Var(X|Y=0)=\frac{2}{3} \cdot \frac{1}{3}=\frac{2}{9},$$
        and since given Y=1, X=0, we have Var(X|Y=1)=0.
        Thus,
         $$V=Var(X|Y)=\begin{cases} \frac{2}{9} & \text{with probability } \frac{3}{5} \\ 0 & \text{with probability } \frac{2}{5} \end{cases}$$
        So we can write
         $$P_V(v)=\begin{cases} \frac{3}{5} & \text{if } v=\frac{2}{9} \\ \frac{2}{5} & \text{if } v=0 \\ 0 & \text{otherwise} \end{cases}$$
      2. To find E[V], we write
         $$E[V]=\frac{2}{9} \cdot \frac{3}{5}+0 \cdot \frac{2}{5}=\frac{2}{15}.$$
      3. To check that Var(X)=E(V)+Var(Z), we just note that
         $$Var(X)=\frac{2}{5} \cdot \frac{3}{5}=\frac{6}{25}, \quad E[V]=\frac{2}{15}, \quad Var(Z)=\frac{8}{75},$$
        and indeed $\frac{2}{15}+\frac{8}{75}=\frac{18}{75}=\frac{6}{25}=Var(X)$.
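
Continuing the numerical sketch from Example 5.11, the short Python fragment below (again illustrative, with our own variable names) computes E[Var(X|Y)], Var(E[X|Y]), and Var(X) from the joint PMF in Table 5.2 and confirms the decomposition checked in part 3.

```python
from fractions import Fraction as F

# Joint PMF from Table 5.2: pmf[(x, y)] = P(X=x, Y=y)
pmf = {(0, 0): F(1, 5), (0, 1): F(2, 5), (1, 0): F(2, 5), (1, 1): F(0)}
p_y = {y: sum(p for (_, yy), p in pmf.items() if yy == y) for y in (0, 1)}

def cond_moment(y, k):
    """E[X^k | Y=y], computed from the joint PMF."""
    return sum(x**k * p / p_y[y] for (x, yy), p in pmf.items() if yy == y)

cond_mean = {y: cond_moment(y, 1) for y in (0, 1)}
cond_var = {y: cond_moment(y, 2) - cond_mean[y]**2 for y in (0, 1)}

e_v = sum(cond_var[y] * p_y[y] for y in (0, 1))                   # E[Var(X|Y)]
e_z = sum(cond_mean[y] * p_y[y] for y in (0, 1))                  # E[E[X|Y]] = E[X]
var_z = sum(cond_mean[y]**2 * p_y[y] for y in (0, 1)) - e_z**2    # Var(E[X|Y])

e_x = sum(x * p for (x, _), p in pmf.items())
e_x2 = sum(x**2 * p for (x, _), p in pmf.items())
var_x = e_x2 - e_x**2

print(e_v, var_z, var_x)        # 2/15, 8/75, 6/25
assert var_x == e_v + var_z     # the law of total variance holds exactly here
```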


In the above example, we checked that Var(X)=E(V)+Var(Z), which says Var(X)=E(Var(X|Y))+Var(E[X|Y]).
It turns out this is true in general and it is called the law of total variance, or variance decomposition formula [3]. Let us first prove the law of total variance, and then we explain it intuitively. Note that if V=Var(X|Y) and Z=E[X|Y], then
$$V=E[X^2|Y]-(E[X|Y])^2=E[X^2|Y]-Z^2.$$
Thus,
$$E[V]=E[E[X^2|Y]]-E[Z^2]=E[X^2]-E[Z^2] \quad \text{(law of iterated expectations, Equation 5.7)} \hspace{20pt} (5.8)$$
Next, we have
$$Var(Z)=E[Z^2]-(E[Z])^2=E[Z^2]-(E[X])^2 \quad \text{(law of iterated expectations)} \hspace{20pt} (5.9)$$
Combining Equations 5.8 and 5.9, we obtain the law of total variance.

Law of Total Variance:

$$Var(X)=E[Var(X|Y)]+Var(E[X|Y]) \hspace{30pt} (5.10)$$

There are several ways that we can look at the law of total variance to get some intuition. Let us first note that all the terms in Equation 5.10 are nonnegative (since variance is always nonnegative). Thus, we conclude
$$Var(X) \geq E[Var(X|Y)]. \hspace{30pt} (5.11)$$

This states that when we condition on Y, the variance of X reduces on average. To describe this intuitively, we can say that the variance of a random variable is a measure of our uncertainty about that random variable. For example, if Var(X)=0, we do not have any uncertainty about X. Now, the above inequality simply states that if we obtain some extra information, i.e., if we know the value of Y, then our uncertainty about the value of the random variable X reduces on average. So, the above inequality makes sense. Now, how do we explain the whole law of total variance?

To describe the law of total variance intuitively, it is often useful to look at a population divided into several groups. In particular, suppose that we have this random experiment: We pick a person in the world at random and look at his/her height. Let's call the resulting value X. Define another random variable Y whose value depends on the country of the chosen person, where Y=1,2,3,...,n, and n is the number of countries in the world. Then, let's look at the two terms in the law of total variance.

Var(X)=E(Var(X|Y))+Var(E[X|Y]).
Note that Var(X|Y=i) is the variance of X within country i. Thus, E(Var(X|Y)) is the average of the within-country variances. On the other hand, E[X|Y=i] is the average height in country i, so Var(E[X|Y]) is the variance between the country averages. So, we can interpret the law of total variance in the following way: the variance of X can be decomposed into two parts, the first being the average of the variances within each individual country, and the second being the variance between the average heights of the countries.
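
This grouping picture is easy to reproduce in a small simulation. The sketch below invents three "countries" with made-up selection probabilities, mean heights, and spreads (all numbers are illustrative only), and checks that the overall variance splits into the average within-country variance plus the variance of the country averages.

```python
import random
import statistics as st

random.seed(1)

# Made-up countries: (probability of being picked, mean height in cm, std dev)
countries = [(0.5, 170.0, 6.0), (0.3, 162.0, 5.0), (0.2, 178.0, 7.0)]

# Draw (Y, X): first pick a country Y, then a height X from that country's distribution
samples = []
for _ in range(50_000):
    y = random.choices(range(len(countries)), weights=[w for w, _, _ in countries])[0]
    _, mu, sigma = countries[y]
    samples.append((y, random.gauss(mu, sigma)))

heights = [x for _, x in samples]
total_var = st.pvariance(heights)

groups = {y: [x for yy, x in samples if yy == y] for y in range(len(countries))}
weights = {y: len(g) / len(samples) for y, g in groups.items()}
within = sum(weights[y] * st.pvariance(groups[y]) for y in groups)                       # ~ E[Var(X|Y)]
between = sum(weights[y] * (st.mean(groups[y]) - st.mean(heights))**2 for y in groups)   # ~ Var(E[X|Y])

print(total_var, within + between)   # Var(X) = E[Var(X|Y)] + Var(E[X|Y])
```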

Example
Let N be the number of customers that visit a certain store in a given day. Suppose that we know E[N] and Var(N). Let $X_i$ be the amount that the $i$th customer spends. We assume the $X_i$'s are independent of each other and also independent of N. We further assume that they all have the same mean and variance,
$$E[X_i]=E[X], \quad Var(X_i)=Var(X).$$
Let Y be the store's total sales, i.e.,
$$Y=\sum_{i=1}^{N} X_i.$$
Find E[Y] and Var(Y).
  • Solution
    • To find E[Y], we cannot directly use the linearity of expectation because N is random. But, conditioned on N=n, we can use linearity and find E[Y|N=n]; so, we use the law of iterated expectations:
      $$\begin{aligned}
      E[Y] &= E[E[Y|N]] \quad \text{(law of iterated expectations)} \\
      &= E\left[E\left[\sum_{i=1}^{N} X_i \Big| N\right]\right] \\
      &= E\left[\sum_{i=1}^{N} E[X_i|N]\right] \quad \text{(linearity of expectation)} \\
      &= E\left[\sum_{i=1}^{N} E[X_i]\right] \quad \text{($X_i$'s and $N$ are independent)} \\
      &= E[NE[X]] \quad \text{(since $E[X_i]=E[X]$)} \\
      &= E[X]E[N] \quad \text{(since $E[X]$ is not random)}.
      \end{aligned}$$
      To find Var(Y), we use the law of total variance:
      $$\begin{aligned}
      Var(Y) &= E[Var(Y|N)]+Var(E[Y|N]) \\
      &= E[Var(Y|N)]+Var(NE[X]) \quad \text{(as found above)} \\
      &= E[Var(Y|N)]+(E[X])^2 Var(N) \hspace{20pt} (5.12)
      \end{aligned}$$
      To find E[Var(Y|N)], note that, given N=n, Y is a sum of n independent random variables. As we discussed before, for n independent random variables, the variance of the sum is equal to the sum of the variances. This fact is officially proved in Section 5.3 and also in Chapter 6, but we have occasionally used it as it simplifies the analysis. Thus, we can write
      $$Var(Y|N)=\sum_{i=1}^{N} Var(X_i|N)=\sum_{i=1}^{N} Var(X_i) \quad \text{(since the $X_i$'s are independent of $N$)} \; = \; N \, Var(X).$$
      Thus, we have
      $$E[Var(Y|N)]=E[N] \, Var(X). \hspace{20pt} (5.13)$$
      Combining Equations 5.12 and 5.13, we obtain
      $$Var(Y)=E[N] \, Var(X)+(E[X])^2 \, Var(N).$$
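
Since the result only involves E[N], Var(N), E[X], and Var(X), it can be sanity-checked by simulation with any concrete distributions whose moments we know. In the sketch below, the choices N ~ Binomial(40, 1/2) (so E[N]=20, Var(N)=10) and X_i exponential with mean 5 (so Var(X)=25) are made up for illustration only.

```python
import random

random.seed(2)

trials, m, q, rate = 50_000, 40, 0.5, 0.2   # N ~ Binomial(m, q); X_i ~ Exponential(rate)

totals = []
for _ in range(trials):
    n = sum(random.random() < q for _ in range(m))                    # draw N
    totals.append(sum(random.expovariate(rate) for _ in range(n)))    # Y = X_1 + ... + X_N

mean_y = sum(totals) / trials
var_y = sum((t - mean_y) ** 2 for t in totals) / trials

e_n, var_n = m * q, m * q * (1 - q)        # 20 and 10
e_x, var_x = 1 / rate, 1 / rate ** 2       # 5 and 25
print(mean_y, e_x * e_n)                         # E[Y] = E[X]E[N] = 100
print(var_y, e_n * var_x + e_x ** 2 * var_n)     # Var(Y) = E[N]Var(X) + (E[X])^2 Var(N) = 750
```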


