# Consistent test

consistent statistical test

A statistical test that reliably distinguishes a hypothesis to be tested from an alternative as the number of observations increases to infinity.

Let $X _ {1} , \dots, X _ {n}$ be a sequence of independent identically-distributed random variables taking values in a sample space $( \mathfrak X , {\mathcal B} , {\mathsf P} _ \theta )$, $\theta \in \Theta$, and suppose one is testing the hypothesis $H _ {0}$: $\theta \in \Theta _ {0} \subset \Theta$ against the alternative $H _ {1}$: $\theta \in \Theta _ {1} = \Theta \setminus \Theta _ {0}$, with the probability of an error of the first kind (see Significance level) given in advance and equal to $\alpha$ ($0 < \alpha < 0.5$). Suppose that the first $n$ observations $X _ {1} , \dots, X _ {n}$ are used to construct a statistical test of level $\alpha$ for testing $H _ {0}$ against $H _ {1}$, and let $\beta _ {n} ( \theta )$, $\theta \in \Theta$, be its power function (cf. Power function of a test), which gives for every $\theta$ the probability that this test rejects $H _ {0}$ when the observations $X _ {i}$ are distributed according to the law ${\mathsf P} _ \theta$. Of course, $\beta _ {n} ( \theta ) \leq \alpha$ for all $\theta \in \Theta _ {0}$. By increasing the number of observations without limit it is possible to construct a sequence of statistical tests of a prescribed level $\alpha$ for testing $H _ {0}$ against $H _ {1}$; the corresponding sequence of power functions $\{ \beta _ {n} ( \theta ) \}$ satisfies the condition

$$\beta _ {n} ( \theta ) \leq \alpha \ \ \textrm{ for } \textrm{ any } n \ \textrm{ and } \textrm{ all } \ \theta \in \Theta _ {0} .$$

If under these conditions the sequence of power functions $\{ \beta _ {n} ( \theta ) \}$ is such that, for any fixed $\theta \in \Theta _ {1} = \Theta \setminus \Theta _ {0}$,

$$\lim\limits _ {n \rightarrow \infty } \ \beta _ {n} ( \theta ) = 1,$$

then one says that a consistent sequence of statistical tests of level $\alpha$ has been constructed for testing $H _ {0}$ against $H _ {1}$. With a certain amount of license, one says that a consistent test has been constructed. Since $\beta _ {n} ( \theta )$, $\theta \in \Theta _ {1}$ (the restriction of $\beta _ {n} ( \theta )$, $\theta \in \Theta = \Theta _ {0} \cup \Theta _ {1}$, to $\Theta _ {1}$), is the power of the statistical test constructed from the observations $X _ {1} , \dots, X _ {n}$, the consistency of a sequence of statistical tests can be expressed as follows: the corresponding powers $\beta _ {n} ( \theta )$, $\theta \in \Theta _ {1}$, converge pointwise on $\Theta _ {1}$ to the function identically equal to 1.
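As a numerical illustration (not from the original article), consider the one-sided Gauss test of $H _ {0}$: $\theta = 0$ against $H _ {1}$: $\theta > 0$ for $X _ {i} \sim N( \theta , 1)$, which rejects when $\sqrt n \, \overline{X} > z _ \alpha$. Its power function has the closed form $\beta _ {n} ( \theta ) = 1 - \Phi ( z _ \alpha - \theta \sqrt n )$, so consistency can be checked directly; the constant 1.6449 below is the upper $0.05$-quantile of $N(0, 1)$, and all numerical choices are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(theta, n, z_alpha=1.6449):
    """Power beta_n(theta) of the test that rejects H0: theta = 0
    in favour of H1: theta > 0 when sqrt(n) * mean(X) > z_alpha,
    for X_1, ..., X_n ~ N(theta, 1).  z_alpha = 1.6449 is the upper
    0.05-quantile of N(0, 1)."""
    return 1.0 - norm_cdf(z_alpha - theta * math.sqrt(n))

# The level is held at alpha = 0.05 for every n ...
for n in (25, 100, 400):
    assert abs(power(0.0, n) - 0.05) < 1e-3

# ... while for any fixed alternative, here theta = 0.2,
# the power increases to 1 as n grows.
print([round(power(0.2, n), 3) for n in (25, 100, 400, 10000)])
```

The monotone approach of the printed powers to 1, with the level pinned at $\alpha$, is exactly the defining property of a consistent sequence of tests.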

Example. Let $X _ {1} , \dots, X _ {n}$ be independent identically-distributed random variables whose distribution function belongs to the family $H = \{ F ( x) \}$ of all continuous distribution functions on $\mathbf R ^ {1}$, and let $p = ( p _ {1} , \dots, p _ {k} )$ be a vector of positive probabilities such that $p _ {1} + \dots + p _ {k} = 1$. Further, let $F _ {0} ( x)$ be any distribution function in $H$. Then $F _ {0} ( x)$ and $p$ uniquely determine a partition of the real axis into $k$ intervals $( x _ {0} ; x _ {1} ] , \dots, ( x _ {k - 1 } ; x _ {k} ]$, where

$$x _ {0} = - \infty ,\ \ x _ {k} = + \infty ,$$

$$x _ {i} = F _ {0} ^ { - 1 } ( p _ {1} + \dots + p _ {i} ) = \inf \{ x: F _ {0} ( x) \geq p _ {1} + \dots + p _ {i} \} ,$$

$$i = 1 , \dots, k - 1.$$
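For instance, with $F _ {0}$ the standard normal distribution function and equal cell probabilities $p _ {i} = 1/k$, the cut points $x _ {1} , \dots, x _ {k-1}$ are normal quantiles. A minimal sketch, using Python's `statistics.NormalDist` as an illustrative choice of $F _ {0} ^ {-1}$:

```python
from statistics import NormalDist

def cut_points(inv_cdf, p):
    """Interior end points x_1, ..., x_{k-1} of the partition,
    x_i = F0^{-1}(p_1 + ... + p_i); the outer end points
    x_0 = -inf and x_k = +inf are left implicit."""
    xs, cum = [], 0.0
    for p_i in p[:-1]:      # the full sum is 1, which would give x_k = +inf
        cum += p_i
        xs.append(inv_cdf(cum))
    return xs

# k = 4 equiprobable cells under F0 = N(0, 1):
# the cut points are the quartiles of the standard normal law.
p = [0.25, 0.25, 0.25, 0.25]
xs = cut_points(NormalDist().inv_cdf, p)
print(xs)
```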

In other words, the end points of the intervals are quantiles of the distribution function $F _ {0} ( x)$. These intervals determine a partition of $H$ into two disjoint sets $H _ {0}$ and $H _ {1}$ as follows: A distribution function $F$ of $H$ belongs to $H _ {0}$ if and only if

$$F ( x _ {i} ) - F ( x _ {i - 1 } ) = p _ {i} ,\ \ i = 1 , \dots, k,$$

and otherwise $F \in H _ {1}$. Now let $\nu _ {n} = ( \nu _ {n,1} , \dots, \nu _ {n,k} )$ be the vector of counts obtained by grouping the first $n$ random variables $X _ {1} , \dots, X _ {n}$ ($n > k$) into the intervals $( x _ {0} ; x _ {1} ] , \dots, ( x _ {k - 1 } ; x _ {k} ]$. Then to test the hypothesis $H _ {0}$ that the distribution function of the $X _ {i}$ belongs to the set $H _ {0}$ against the alternative $H _ {1}$ that it belongs to the set $H _ {1}$, one can use the "chi-squared" test based on the statistic

$$X _ {n} ^ {2} = \ \sum _ {i = 1 } ^ { k } \frac{( \nu _ {n,i} - np _ {i} ) ^ {2} }{np _ {i} } .$$

According to this test, with significance level $\alpha$ ($0 < \alpha < 0.5$), the hypothesis $H _ {0}$ must be rejected whenever $X _ {n} ^ {2} > \chi _ {k - 1 } ^ {2} ( \alpha )$, where $\chi _ {k - 1 } ^ {2} ( \alpha )$ is the upper $\alpha$-quantile of the "chi-squared" distribution with $k - 1$ degrees of freedom. From the general theory of tests of "chi-squared" type it follows that when $H _ {1}$ is correct,
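The statistic and rejection rule transcribe directly into code. In the sketch below the counts are invented for illustration, and the critical value 7.8147 is the upper $0.05$-quantile of the "chi-squared" distribution with $k - 1 = 3$ degrees of freedom:

```python
def chi2_stat(counts, p):
    """Pearson statistic X_n^2 = sum_i (nu_{n,i} - n p_i)^2 / (n p_i)."""
    n = sum(counts)
    return sum((nu - n * p_i) ** 2 / (n * p_i) for nu, p_i in zip(counts, p))

p = [0.25, 0.25, 0.25, 0.25]   # hypothesised cell probabilities (k = 4)
counts = [30, 20, 25, 25]      # observed counts, n = 100 (illustrative)
crit = 7.8147                  # upper 0.05-quantile of chi^2 with 3 d.f.

x2 = chi2_stat(counts, p)
print(x2, x2 > crit)           # reject H0 iff the statistic exceeds crit
# → 2.0 False
```

Here the expected count in each cell is $n p _ {i} = 25$, so $X _ {n} ^ {2} = ( 25 + 25 ) / 25 = 2$, well below the critical value, and $H _ {0}$ is not rejected.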

$$\lim\limits _ {n \rightarrow \infty } \ {\mathsf P} \{ X _ {n} ^ {2} > \chi _ {k - 1 } ^ {2} ( \alpha ) \mid H _ {1} \} = 1,$$

which shows the consistency of the "chi-squared" test for testing $H _ {0}$ against $H _ {1}$. But if one takes an arbitrary non-empty proper subset $H _ {0} ^ {*}$ of $H _ {0}$ and considers the problem of testing $H _ {0} ^ {*}$ against the alternative $H _ {0} ^ {**} = H _ {0} \setminus H _ {0} ^ {*}$, then it is clear that the sequence of "chi-squared" tests based on the statistics $X _ {n} ^ {2}$ is not consistent, since

$$\lim\limits _ {n \rightarrow \infty } \ {\mathsf P} \{ X _ {n} ^ {2} > \chi _ {k - 1 } ^ {2} ( \alpha ) \mid H _ {0} \} \leq \alpha < 1,$$

and, in particular,

$$\lim\limits _ {n \rightarrow \infty } \ {\mathsf P} \{ X _ {n} ^ {2} > \chi _ {k - 1 } ^ {2} ( \alpha ) \mid H _ {0} ^ {**} \} \leq \alpha < 1.$$
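Both limiting statements can be checked by simulation: under a fixed alternative the rejection frequency of the "chi-squared" test tends to 1 as $n$ grows, while under a distribution satisfying $H _ {0}$ it stays near $\alpha$. A seeded Monte Carlo sketch; the cell probabilities, sample sizes, replication count and the critical value 7.8147 (upper $0.05$-quantile of "chi-squared" with 3 degrees of freedom) are all illustrative choices:

```python
import random

def chi2_stat(counts, p):
    """Pearson statistic X_n^2."""
    n = sum(counts)
    return sum((nu - n * p_i) ** 2 / (n * p_i) for nu, p_i in zip(counts, p))

def rejection_rate(q, p, n, reps, crit, rng):
    """Fraction of `reps` samples of size n drawn with true cell
    probabilities q in which the chi-squared test of H0 (cell
    probabilities p) rejects at critical value crit."""
    rejects = 0
    for _ in range(reps):
        counts = [0] * len(q)
        for _ in range(n):              # draw one multinomial observation
            u, cum = rng.random(), 0.0
            for i, q_i in enumerate(q):
                cum += q_i
                if u < cum:
                    counts[i] += 1
                    break
            else:                       # guard against rounding in cum
                counts[-1] += 1
        if chi2_stat(counts, p) > crit:
            rejects += 1
    return rejects / reps

p = [0.25] * 4                 # hypothesised cell probabilities
q = [0.40, 0.20, 0.20, 0.20]   # a fixed alternative
crit = 7.8147                  # upper 0.05-quantile of chi^2 with 3 d.f.
rng = random.Random(0)

under_null = rejection_rate(p, p, 200, 200, crit, rng)  # stays near alpha
small_n = rejection_rate(q, p, 50, 200, crit, rng)
large_n = rejection_rate(q, p, 500, 200, crit, rng)     # tends to 1
print(under_null, small_n, large_n)
```

Against an alternative within $H _ {0}$ itself the first rate is the ceiling: no amount of data pushes the rejection probability past $\alpha$, which is exactly the failure of consistency noted above.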

#### References

- [1] S.S. Wilks, "Mathematical statistics", Wiley (1962)
- [2] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1959)
How to Cite This Entry:
Consistent test. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Consistent_test&oldid=46482

This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.