5.1.5 Conditional Expectation (Revisited) and Conditional Variance
Conditional Expectation as a Function of a Random Variable:
Remember that the conditional expectation of X given that Y=y is given by
$$E[X|Y=y]=\sum_{x_i \in R_X} x_i P_{X|Y}(x_i|y).$$
Note that E[X|Y=y] depends on the value of y; if we write g(y)=E[X|Y=y], then E[X|Y] is defined as the random variable g(Y).

Example
Let X=aY+b. Then E[X|Y=y]=E[aY+b|Y=y]=ay+b. Here, we have g(y)=ay+b, and therefore, E[X|Y]=aY+b.
Since E[X|Y] is a random variable, we can find its PMF, CDF, variance, etc. Let's look at an example to better understand E[X|Y].
Example Consider two random variables X and Y with joint PMF given in Table 5.2. Let Z=E[X|Y].
- Find the marginal PMFs of X and Y.
- Find the conditional PMF of X given Y=0 and Y=1, i.e., find PX|Y(x|0) and PX|Y(x|1).
- Find the PMF of Z.
- Find EZ, and check that EZ=EX.
- Find Var(Z).
Table 5.2: Joint PMF of X and Y in example 5.11
|     | Y=0 | Y=1 |
|-----|-----|-----|
| X=0 | 1/5 | 2/5 |
| X=1 | 2/5 | 0   |
- Solution
- Using the table, we find
$$P_X(0)=\frac{1}{5}+\frac{2}{5}=\frac{3}{5}, \quad P_X(1)=\frac{2}{5}+0=\frac{2}{5},$$
$$P_Y(0)=\frac{1}{5}+\frac{2}{5}=\frac{3}{5}, \quad P_Y(1)=\frac{2}{5}+0=\frac{2}{5}.$$
Thus, the marginal distributions of X and Y are both Bernoulli(2/5). However, note that X and Y are not independent.
- We have
$$P_{X|Y}(0|0)=\frac{P_{XY}(0,0)}{P_Y(0)}=\frac{1/5}{3/5}=\frac{1}{3}.$$
Thus,
$$P_{X|Y}(1|0)=1-\frac{1}{3}=\frac{2}{3}.$$
We conclude X|Y=0 ~ Bernoulli(2/3). Similarly, we find P_{X|Y}(0|1)=1 and P_{X|Y}(1|1)=0. Thus, given Y=1, we always have X=0.
- We note that the random variable Y can take two values: 0 and 1. Thus, the random variable Z=E[X|Y] can take two values as it is a function of Y. Specifically,
$$Z=E[X|Y]=\begin{cases} E[X|Y=0] & \textrm{if } Y=0 \\ E[X|Y=1] & \textrm{if } Y=1 \end{cases}$$
Now, using the previous part, we have E[X|Y=0]=2/3 and E[X|Y=1]=0, and since P(Y=0)=3/5 and P(Y=1)=2/5, we conclude that
$$Z=E[X|Y]=\begin{cases} \frac{2}{3} & \textrm{with probability } \frac{3}{5} \\ 0 & \textrm{with probability } \frac{2}{5} \end{cases}$$
So we can write
$$P_Z(z)=\begin{cases} \frac{3}{5} & \textrm{if } z=\frac{2}{3} \\ \frac{2}{5} & \textrm{if } z=0 \\ 0 & \textrm{otherwise} \end{cases}$$
- Now that we have found the PMF of Z, we can find its mean and variance. Specifically,
$$E[Z]=\frac{2}{3}\cdot\frac{3}{5}+0\cdot\frac{2}{5}=\frac{2}{5}.$$
We also note that EX=2/5. Thus, here we have
$$E[X]=E[Z]=E[E[X|Y]].$$
In fact, as we will prove shortly, the above equality always holds. It is called the law of iterated expectations.
- To find Var(Z), we write
$$\textrm{Var}(Z)=E[Z^2]-(EZ)^2=E[Z^2]-\frac{4}{25},$$
where
$$E[Z^2]=\frac{4}{9}\cdot\frac{3}{5}+0\cdot\frac{2}{5}=\frac{4}{15}.$$
Thus,
$$\textrm{Var}(Z)=\frac{4}{15}-\frac{4}{25}=\frac{8}{75}.$$
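The construction of Z=E[X|Y] is easy to mirror in code. The following is a minimal Python sketch (the variable names are ours) that recomputes g(y)=E[X|Y=y], EZ, and Var(Z) exactly from the joint PMF in Table 5.2, using exact fraction arithmetic:

```python
from fractions import Fraction as F

# Joint PMF from Table 5.2: p[(x, y)] = P(X=x, Y=y)
p = {(0, 0): F(1, 5), (0, 1): F(2, 5), (1, 0): F(2, 5), (1, 1): F(0)}

# Marginal PMF of Y
pY = {y: sum(p[(x, y)] for x in (0, 1)) for y in (0, 1)}

# g(y) = E[X|Y=y], computed from the conditional PMF of X given Y=y
g = {y: sum(x * p[(x, y)] for x in (0, 1)) / pY[y] for y in (0, 1)}

# Z = E[X|Y] takes the value g(y) with probability P(Y=y)
EZ = sum(g[y] * pY[y] for y in (0, 1))
EZ2 = sum(g[y] ** 2 * pY[y] for y in (0, 1))

print(g)             # g(0) = 2/3, g(1) = 0
print(EZ)            # 2/5, which equals E[X]
print(EZ2 - EZ**2)   # Var(Z) = 8/75
```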
Example
Let X and Y be two random variables, and let g and h be two functions. Show that E[g(X)h(Y)|X]=g(X)E[h(Y)|X].
- Solution
Note that E[g(X)h(Y)|X] is a random variable that is a function of X. In particular, if X=x, then E[g(X)h(Y)|X]=E[g(X)h(Y)|X=x]. Now, we can write
$$E[g(X)h(Y)|X=x]=E[g(x)h(Y)|X=x]=g(x)E[h(Y)|X=x] \quad (\textrm{since } g(x) \textrm{ is a constant}).$$
Thinking of this as a function of the random variable X, it can be rewritten as
$$E[g(X)h(Y)|X]=g(X)E[h(Y)|X]. \hspace{30pt} (5.6)$$
This rule is sometimes called "taking out what is known." The idea is that, given X, g(X) is a known quantity, so it can be taken out of the conditional expectation.
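The "taking out what is known" rule can be checked numerically on the joint PMF of Table 5.2. The sketch below uses arbitrary illustrative choices of g and h (not taken from the text) and verifies E[g(X)h(Y)|X=x]=g(x)E[h(Y)|X=x] for each value x:

```python
from fractions import Fraction as F

# Joint PMF from Table 5.2; g and h are arbitrary illustrative functions
p = {(0, 0): F(1, 5), (0, 1): F(2, 5), (1, 0): F(2, 5), (1, 1): F(0)}
g = lambda x: 3 * x + 1
h = lambda y: y ** 2 + 2

# Marginal PMF of X, needed for the conditional PMF given X=x
pX = {x: sum(p[(x, y)] for y in (0, 1)) for x in (0, 1)}

for x in (0, 1):
    lhs = sum(g(x) * h(y) * p[(x, y)] for y in (0, 1)) / pX[x]   # E[g(X)h(Y)|X=x]
    rhs = g(x) * sum(h(y) * p[(x, y)] for y in (0, 1)) / pX[x]   # g(x) E[h(Y)|X=x]
    assert lhs == rhs
    print(x, lhs, rhs)
```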
Iterated Expectations:
Let us look again at the law of total probability for expectation. Assuming g(Y)=E[X|Y], we have
$$E[X]=\sum_{y_j \in R_Y} E[X|Y=y_j]P_Y(y_j)=\sum_{y_j \in R_Y} g(y_j)P_Y(y_j)$$
$$=E[g(Y)] \quad (\textrm{by LOTUS, Equation 5.2})$$
$$=E[E[X|Y]].$$
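Here is a small Monte Carlo sketch of the law of iterated expectations. The distributions (Y Poisson, X|Y binomial) are an illustrative choice of ours, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: Y ~ Poisson(3) and X|Y ~ Binomial(Y, 1/2),
# so E[X|Y] = Y/2 and the law gives E[X] = E[Y]/2 = 1.5
n = 10**6
y = rng.poisson(3.0, size=n)
x = rng.binomial(y, 0.5)

print(x.mean())        # direct Monte Carlo estimate of E[X]
print((y / 2).mean())  # estimate of E[E[X|Y]]; both are close to 1.5
```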
Expectation for Independent Random Variables:
Note that if two random variables X and Y are independent, then the conditional PMF of X given Y will be the same as the marginal PMF of X, i.e., for any x ∈ R_X, we have
$$P_{X|Y}(x|y)=P_X(x).$$

Lemma
If X and Y are independent, then E[XY]=EXEY. To see this, using LOTUS we have
$$E[XY]=\sum_{x \in R_X}\sum_{y \in R_Y} xy P_{XY}(x,y)=\sum_{x \in R_X}\sum_{y \in R_Y} xy P_X(x)P_Y(y)$$
$$=\Big(\sum_{x \in R_X} x P_X(x)\Big)\Big(\sum_{y \in R_Y} y P_Y(y)\Big)=EXEY.$$
More generally, one can show in a similar way that if X and Y are independent, then the following hold (a quick numerical check appears after this list):
- E[X|Y]=EX;
- E[g(X)|Y]=E[g(X)];
- E[XY]=EXEY;
- E[g(X)h(Y)]=E[g(X)]E[h(Y)].
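As a sanity check, the following sketch verifies E[XY]=EXEY for a pair of independent random variables with small supports (the particular PMFs are illustrative choices of ours):

```python
from fractions import Fraction as F

# Two independent random variables with small supports (illustrative values)
pX = {0: F(1, 4), 1: F(3, 4)}
pY = {1: F(1, 2), 2: F(1, 2)}

EX = sum(x * px for x, px in pX.items())
EY = sum(y * py for y, py in pY.items())

# Under independence, P(X=x, Y=y) = P(X=x) P(Y=y)
EXY = sum(x * y * pX[x] * pY[y] for x in pX for y in pY)

assert EXY == EX * EY
print(EXY, EX * EY)  # both equal 9/8
```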
Conditional Variance:
Similar to the conditional expectation, we can define the conditional variance of X, Var(X|Y=y), which is the variance of X in the conditional space where we know Y=y. If we let μ_{X|Y}(y)=E[X|Y=y], then
$$\textrm{Var}(X|Y=y)=E[(X-\mu_{X|Y}(y))^2|Y=y]=\sum_{x_i \in R_X}(x_i-\mu_{X|Y}(y))^2 P_{X|Y}(x_i|y)$$
$$=E[X^2|Y=y]-\mu_{X|Y}(y)^2.$$
Note that, as with conditional expectation, Var(X|Y=y) is a function of y, so Var(X|Y) is a random variable: it takes the value Var(X|Y=y) when Y=y.

Example
Let X, Y, and Z=E[X|Y] be as in Example 5.11. Let also V=Var(X|Y).
- Find the PMF of V.
- Find EV.
- Check that Var(X)=E(V)+Var(Z).
- Solution
In Example 5.11, we found out that X,Y ~ Bernoulli(2/5). We also obtained
$$X|Y=0 \sim \textrm{Bernoulli}\Big(\frac{2}{3}\Big), \quad P(X=0|Y=1)=1, \quad \textrm{Var}(Z)=\frac{8}{75}.$$
- To find the PMF of V, we note that V is a function of Y. Specifically,
$$V=\textrm{Var}(X|Y)=\begin{cases} \textrm{Var}(X|Y=0) & \textrm{if } Y=0 \\ \textrm{Var}(X|Y=1) & \textrm{if } Y=1 \end{cases}$$
Therefore,
$$V=\textrm{Var}(X|Y)=\begin{cases} \textrm{Var}(X|Y=0) & \textrm{with probability } \frac{3}{5} \\ \textrm{Var}(X|Y=1) & \textrm{with probability } \frac{2}{5} \end{cases}$$
Now, since X|Y=0 ~ Bernoulli(2/3), we have
$$\textrm{Var}(X|Y=0)=\frac{2}{3}\cdot\frac{1}{3}=\frac{2}{9},$$
and since given Y=1 we have X=0, we get Var(X|Y=1)=0. Thus,
$$V=\textrm{Var}(X|Y)=\begin{cases} \frac{2}{9} & \textrm{with probability } \frac{3}{5} \\ 0 & \textrm{with probability } \frac{2}{5} \end{cases}$$
So we can write
$$P_V(v)=\begin{cases} \frac{3}{5} & \textrm{if } v=\frac{2}{9} \\ \frac{2}{5} & \textrm{if } v=0 \\ 0 & \textrm{otherwise} \end{cases}$$
- To find EV, we write
$$EV=\frac{2}{9}\cdot\frac{3}{5}+0\cdot\frac{2}{5}=\frac{2}{15}.$$
- To check that Var(X)=E(V)+Var(Z), we just note that
$$\textrm{Var}(X)=\frac{2}{5}\cdot\frac{3}{5}=\frac{6}{25}, \quad EV=\frac{2}{15}, \quad \textrm{Var}(Z)=\frac{8}{75},$$
and indeed
$$EV+\textrm{Var}(Z)=\frac{10}{75}+\frac{8}{75}=\frac{18}{75}=\frac{6}{25}=\textrm{Var}(X).$$
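This decomposition can also be checked in a few lines of Python, reusing the exact values computed in the two examples above (the variable names are ours):

```python
from fractions import Fraction as F

# From Example 5.11: P(Y=0) = 3/5, P(Y=1) = 2/5
pY = {0: F(3, 5), 1: F(2, 5)}
Z = {0: F(2, 3), 1: F(0)}   # Z = E[X|Y] as a function of Y
V = {0: F(2, 9), 1: F(0)}   # V = Var(X|Y) as a function of Y

EV = sum(V[y] * pY[y] for y in pY)                     # 2/15
EZ = sum(Z[y] * pY[y] for y in pY)                     # 2/5
VarZ = sum(Z[y] ** 2 * pY[y] for y in pY) - EZ ** 2    # 8/75

# Var(X) for X ~ Bernoulli(2/5) is (2/5)(3/5) = 6/25
print(EV + VarZ)  # 6/25, matching Var(X)
```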
In the above example, we checked that Var(X)=E(V)+Var(Z), which says
$$\textrm{Var}(X)=E(\textrm{Var}(X|Y))+\textrm{Var}(E[X|Y]).$$
Law of Total Variance:
$$\textrm{Var}(X)=E[\textrm{Var}(X|Y)]+\textrm{Var}(E[X|Y]) \hspace{30pt} (5.10)$$
There are several ways that we can look at the law of total variance to get some intuition. Let us first note that both terms on the right-hand side of Equation 5.10 are nonnegative (since variance is always nonnegative). Thus, we conclude
$$\textrm{Var}(X) \geq E(\textrm{Var}(X|Y)). \hspace{30pt} (5.11)$$
This states that, on average, conditioning on Y reduces the variance of X. To describe this intuitively, we can say that the variance of a random variable is a measure of our uncertainty about it. For example, if Var(X)=0, we have no uncertainty about X. The above inequality simply states that if we obtain some extra information, i.e., we learn the value of Y, our uncertainty about the value of the random variable X reduces on average. So the inequality makes sense. Now, how do we explain the whole law of total variance?
To describe the law of total variance intuitively, it is often useful to look at a population divided into several groups. In particular, suppose that we have this random experiment: We pick a person in the world at random and look at his/her height. Let's call the resulting value X. Define another random variable Y whose value depends on the country of the chosen person, where Y=1,2,3,...,n, and n is the number of countries in the world. Then, let's look at the two terms in the law of total variance.
$$\textrm{Var}(X)=E(\textrm{Var}(X|Y))+\textrm{Var}(E[X|Y]).$$
The first term, E(Var(X|Y)), is the average of the within-country variances: for each country, we compute the variance of height among the people of that country, and we then average these variances over the countries. The second term, Var(E[X|Y]), is the between-country variance: it measures how much the average heights of the different countries vary around the overall average height. The law of total variance says that the total variance of height is the sum of the average variance within countries and the variance between countries.
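To see this numerically, here is a simulation sketch of the height example with three hypothetical countries; the means, probabilities, and standard deviation below are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three "countries" with different mean heights (cm),
# the same within-country standard deviation, and given selection probabilities
mu = np.array([165.0, 172.0, 178.0])   # E[X|Y=j]
pY = np.array([0.5, 0.3, 0.2])         # P(Y=j)
sigma = 7.0                            # sqrt(Var(X|Y=j)), same for all j

n = 10**6
y = rng.choice(3, size=n, p=pY)        # country of the randomly chosen person
x = rng.normal(mu[y], sigma)           # that person's height

within = sigma**2                                    # E[Var(X|Y)]
between = np.sum(pY * mu**2) - np.sum(pY * mu)**2    # Var(E[X|Y])

print(x.var())           # total variance of height
print(within + between)  # law of total variance; the two agree closely
```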
Example
Let N be the number of customers that visit a certain store in a given day. Suppose that we know E[N] and Var(N). Let Xi be the amount that the ith customer spends. We assume the Xi's are independent of each other and also independent of N. We further assume they all have the same mean and variance,
$$EX_i=EX, \quad \textrm{Var}(X_i)=\textrm{Var}(X).$$
Let Y be the store's total sales for the day, that is,
$$Y=\sum_{i=1}^{N} X_i.$$
Find EY and Var(Y).
- Solution
To find EY, we cannot directly use the linearity of expectation because N is random. But, conditioned on N=n, we can use linearity and find E[Y|N=n]; so, we use the law of iterated expectations:
$$EY=E[E[Y|N]] \quad (\textrm{law of iterated expectations})$$
$$=E\Big[E\Big[\sum_{i=1}^{N} X_i \Big| N\Big]\Big]$$
$$=E\Big[\sum_{i=1}^{N} E[X_i|N]\Big] \quad (\textrm{linearity of expectation})$$
$$=E\Big[\sum_{i=1}^{N} E[X_i]\Big] \quad (X_i\textrm{'s and } N \textrm{ are independent})$$
$$=E[NE[X]] \quad (\textrm{since } EX_i=EX)$$
$$=E[X]E[N] \quad (\textrm{since } EX \textrm{ is not random}).$$
To find Var(Y), we use the law of total variance:
$$\textrm{Var}(Y)=E(\textrm{Var}(Y|N))+\textrm{Var}(E[Y|N])$$
$$=E(\textrm{Var}(Y|N))+\textrm{Var}(NEX) \quad (\textrm{as above})$$
$$=E(\textrm{Var}(Y|N))+(EX)^2\textrm{Var}(N). \hspace{30pt} (5.12)$$
To find E(Var(Y|N)), note that, given N=n, Y is a sum of n independent random variables. As we discussed before, for n independent random variables, the variance of the sum is equal to the sum of the variances. This fact is officially proved in Section 5.3 and also in Chapter 6, but we have occasionally used it as it simplifies the analysis. Thus, we can write
$$\textrm{Var}(Y|N)=\sum_{i=1}^{N}\textrm{Var}(X_i|N)=\sum_{i=1}^{N}\textrm{Var}(X_i) \quad (\textrm{since } X_i\textrm{'s are independent of } N)$$
$$=N\textrm{Var}(X).$$
Thus, we have
$$E(\textrm{Var}(Y|N))=EN\,\textrm{Var}(X). \hspace{30pt} (5.13)$$
Combining Equations 5.12 and 5.13, we obtain
$$\textrm{Var}(Y)=EN\,\textrm{Var}(X)+(EX)^2\textrm{Var}(N).$$
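Finally, the formulas EY=E[X]E[N] and Var(Y)=EN Var(X)+(EX)^2 Var(N) are easy to test by simulation. The sketch below uses an illustrative choice of ours, N Poisson and the Xi's exponential; neither distribution is specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: N ~ Poisson(20) customers per day, and each customer
# spends X_i ~ Exponential with mean 5, independent of N and of each other
lam, mean_x = 20.0, 5.0
n_days = 10**5
N = rng.poisson(lam, size=n_days)

# Total sales on each day: Y = X_1 + ... + X_N
Y = np.array([rng.exponential(mean_x, size=k).sum() for k in N])

EX, VarX = mean_x, mean_x**2   # for Exponential(mean 5): EX = 5, Var(X) = 25
EN, VarN = lam, lam            # for Poisson(20): EN = Var(N) = 20

print(Y.mean(), EX * EN)                    # both close to 100
print(Y.var(), EN * VarX + EX**2 * VarN)    # both close to 1000
```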