8.4.5 Likelihood Ratio Tests
So far we have focused on specific examples of hypothesis testing problems. Here, we would like to introduce a relatively general hypothesis testing procedure called the likelihood ratio test. Before doing so, let us quickly review the definition of the likelihood function, which was previously discussed in Section 8.2.3.
Review of the Likelihood Function:
Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$.
- If the $X_i$'s are discrete, then the likelihood function is defined as
\begin{align}
\nonumber L(x_1, x_2, \cdots, x_n; \theta)=P_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta).
\end{align}
- If the $X_i$'s are jointly continuous, then the likelihood function is defined as \begin{align} \nonumber L(x_1, x_2, \cdots, x_n; \theta)=f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta). \end{align}
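As a concrete illustration of the discrete case, the likelihood of an observed i.i.d. sample is just the product of the individual probabilities. The sketch below (a hypothetical $Bernoulli(\theta)$ model, chosen purely for illustration) evaluates $L(x_1, x_2, \cdots, x_n; \theta)$ at a given $\theta$:

```python
def likelihood(xs, theta):
    """Likelihood of an i.i.d. Bernoulli(theta) sample:
    L(x1, ..., xn; theta) = prod_i theta^xi * (1 - theta)^(1 - xi)."""
    L = 1.0
    for x in xs:
        L *= theta ** x * (1 - theta) ** (1 - x)
    return L

sample = [1, 0, 1, 1, 0]        # observed values x1, ..., x5
L = likelihood(sample, 0.6)     # 0.6^3 * 0.4^2 = 0.03456
```

Evaluating this function at different values of $\theta$ is exactly what the likelihood ratio test does for the two hypothesized parameter values.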
Likelihood Ratio Tests:
Consider a hypothesis testing problem in which both the null and the alternative hypotheses are simple. That is,
$\quad$ $H_0$: $\theta = \theta_0$,
$\quad$ $H_1$: $\theta = \theta_1$.
Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$. To decide between the two simple hypotheses, we define the likelihood ratio \begin{align}%\label{} \lambda(x_1, x_2, \cdots, x_n)=\frac{L(x_1, x_2, \cdots, x_n; \theta_0)}{L(x_1, x_2, \cdots, x_n; \theta_1)}. \end{align} To perform a likelihood ratio test (LRT), we choose a constant $c$. We reject $H_0$ if $\lambda \lt c$ and accept it if $\lambda \geq c$. The value of $c$ can be chosen based on the desired $\alpha$.
Example
Here, we look again at the radar problem (Example 8.23). More specifically, we observe the random variable $X$: \begin{align}%\label{} X&=\theta+W, \end{align} where $W \sim N(0, \sigma^2=\frac{1}{9})$. We need to decide between
$\quad$ $H_0$: $\theta = \theta_0=0$,
$\quad$ $H_1$: $\theta = \theta_1=1$.
- Solution
- If $\theta = \theta_0=0$, then $X \sim N(0, \sigma^2=\frac{1}{9})$. Therefore, \begin{align} \nonumber L(x; \theta_0)=f_{X}(x; \theta_0)=\frac{3}{\sqrt{2 \pi}} e^{-\frac{9x^2}{2}}. \end{align} On the other hand, if $\theta = \theta_1=1$, then $X \sim N(1, \sigma^2=\frac{1}{9})$. Therefore, \begin{align} \nonumber L(x; \theta_1)=f_{X}(x; \theta_1)=\frac{3}{\sqrt{2 \pi}} e^{-\frac{9(x-1)^2}{2}}. \end{align} Therefore, \begin{align}%\label{} \lambda(x)=\frac {L(x; \theta_0)}{L(x; \theta_1)}&=\exp \left\{-\frac{9x^2}{2}+ \frac{9(x-1)^2}{2} \right\}\\ &=\exp \left\{ \frac{9(1-2x)}{2} \right\}. \end{align} Thus, we accept $H_0$ if \begin{align}%\label{} \exp \left\{ \frac{9(1-2x)}{2} \right\} \geq c, \end{align} where $c$ is the threshold. Equivalently, we accept $H_0$ if \begin{align}%\label{} x \leq \frac{1}{2} \left(1-\frac{2}{9} \ln c\right). \end{align} Let us define $c'=\frac{1}{2} \left(1-\frac{2}{9} \ln c\right)$, where $c'$ is a new threshold. Remember that $x$ is the observed value of the random variable $X$. Thus, we can summarize the decision rule as follows. We accept $H_0$ if \begin{align}%\label{} X \leq c'. \end{align} How do we choose $c'$? We use the required $\alpha$. \begin{align} P(\textrm{type I error}) &= P(\textrm{Reject }H_0 \; | \; H_0) \\ &= P(X > c' \; | \; H_0)\\ &= P(X>c') \quad \big( \textrm{where }X \sim N\left(0, \frac{1}{9}\right) \big) \\ &=1-\Phi(3c'). \end{align} Letting $P(\textrm{type I error})=\alpha$, we obtain \begin{align} c' = \frac{1}{3} \Phi^{-1}(1-\alpha). \end{align} Letting $\alpha=0.05$, we obtain \begin{align} c' = \frac{1}{3} \Phi^{-1}(.95) =0.548. \end{align} As we see, in this case, the likelihood ratio test is exactly the same test that we obtained in Example 8.23.
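The numerical value of the threshold can be checked directly. The short sketch below (Python standard library only) recomputes $c' = \frac{1}{3} \Phi^{-1}(1-\alpha)$ for $\alpha=0.05$:

```python
from statistics import NormalDist  # standard normal CDF and quantile function

alpha = 0.05
# Under H0, X ~ N(0, 1/9), so P(X > c') = 1 - Phi(3 c') = alpha,
# which gives c' = (1/3) * Phi^{-1}(1 - alpha).
c_prime = NormalDist().inv_cdf(1 - alpha) / 3
print(round(c_prime, 3))  # 0.548
```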
How do we perform the likelihood ratio test if the hypotheses are not simple? Suppose that $\theta$ is an unknown parameter. Let $S$ be the set of possible values for $\theta$ and suppose that we can partition $S$ into two disjoint sets $S_0$ and $S_1$. Consider the following hypotheses:
$\quad$ $H_0$: $\theta \in S_0$,
$\quad$ $H_1$: $\theta \in S_1$.
Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$. Define \begin{align}%\label{} \lambda(x_1,x_2,\cdots, x_n)=\frac{\sup \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S_0 \}}{\sup \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S \}}. \end{align} To perform a likelihood ratio test (LRT), we choose a constant $c$ in $[0,1]$. We reject $H_0$ if $\lambda \lt c$ and accept it if $\lambda \geq c$. The value of $c$ can be chosen based on the desired $\alpha$.
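To sketch how this statistic looks in a specific model, assume (purely for illustration) that $X_i \sim N(\theta, \sigma^2)$ with $\sigma$ known, and test $H_0$: $\theta = \theta_0$ against $H_1$: $\theta \neq \theta_0$. Here $S_0=\{\theta_0\}$, so the numerator is the likelihood at $\theta_0$, while the supremum over all of $S$ is attained at the MLE $\hat{\theta}=\overline{X}$; the ratio then simplifies to $\exp\left\{-\frac{n(\overline{x}-\theta_0)^2}{2\sigma^2}\right\}$:

```python
import math

def lrt_lambda(xs, theta0, sigma):
    """Generalized LRT statistic for H0: theta = theta0 vs H1: theta != theta0,
    assuming X_i ~ N(theta, sigma^2) with known sigma.
    Numerator: likelihood at theta0; denominator: likelihood at the MLE
    (the sample mean). The ratio simplifies to
    exp(-n (xbar - theta0)^2 / (2 sigma^2))."""
    n = len(xs)
    xbar = sum(xs) / n
    return math.exp(-n * (xbar - theta0) ** 2 / (2 * sigma ** 2))

sample = [0.2, -0.1, 0.4, 0.1]   # hypothetical observations
lam = lrt_lambda(sample, theta0=0.0, sigma=1.0)
# lam always lies in (0, 1]; we reject H0 when lam < c for some c in [0, 1]
```

Note that $\lambda$ is small exactly when the sample mean is far from $\theta_0$, so rejecting for $\lambda \lt c$ matches the intuition that the data discredit $H_0$.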