
Hotelling test



$T^2$-test

A test intended for testing the hypothesis $H_0$ according to which the true value of the unknown vector $\mu = (\mu_1, \dots, \mu_p)$ of mathematical expectation of a non-degenerate $p$-dimensional normal law $N(\mu, B)$, whose covariance matrix $B$ is also unknown, is the vector $\mu_0 = (\mu_{10}, \dots, \mu_{p0})$. Hotelling's test is based on the following result. Let $X_1, \dots, X_n$ be independent $p$-dimensional random vectors, $n - 1 \geq p$, subject to the non-degenerate normal law $N(\mu, B)$, and let

$$ T^2 = n (\overline{X} - \mu_0)^T S^{-1} (\overline{X} - \mu_0), $$

where

$$ \overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i $$

and

$$ S = \frac{1}{n - 1} \sum_{i=1}^{n} (X_i - \overline{X})(X_i - \overline{X})^T $$

are the usual estimators of the unknown parameters $\mu$ and $B$ (here $\overline{X}$ is the maximum-likelihood estimator of $\mu$, while $S$ is the unbiased sample covariance matrix; the maximum-likelihood estimator of $B$ uses the divisor $n$ rather than $n - 1$). Then the statistic

$$ F = \frac{n - p}{p (n - 1)} T^2 $$

has the non-central Fisher $F$-distribution with $p$ and $n - p$ degrees of freedom and non-centrality parameter

$$ n (\mu - \mu_0)^T B^{-1} (\mu - \mu_0); $$

the statistic $T^2$ itself has the Hotelling $T^2$-distribution. Consequently, to test the hypothesis $H_0$: $\mu = \mu_0$ against the alternative $H_1$: $\mu \neq \mu_0$, one computes the value of the statistic $F$ from realizations of the independent random vectors $X_1, \dots, X_n$ drawn from the non-degenerate $p$-dimensional normal law $N(\mu, B)$; under the hypothesis $H_0$ this statistic has the central $F$-distribution with $p$ and $n - p$ degrees of freedom. According to Hotelling's test with significance level $\alpha$, $H_0$ must be rejected if $F \geq F_\alpha(p, n - p)$, where $F_\alpha(p, n - p)$ is the upper $\alpha$-quantile (critical value) of the $F$-distribution with $p$ and $n - p$ degrees of freedom.
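For illustration, the following is a minimal numerical sketch of this procedure in Python, assuming NumPy and SciPy are available; the function name hotelling_test, the significance level and the simulated data are illustrative assumptions and not part of the article.

import numpy as np
from scipy.stats import f

def hotelling_test(X, mu0, alpha=0.05):
    # One-sample Hotelling T^2 test of H0: mu = mu0 (illustrative sketch).
    n, p = X.shape                        # n observations in dimension p, n - 1 >= p
    x_bar = X.mean(axis=0)                # sample mean vector
    S = np.cov(X, rowvar=False, ddof=1)   # sample covariance matrix with divisor n - 1
    d = x_bar - mu0
    T2 = n * d @ np.linalg.solve(S, d)    # T^2 = n (x_bar - mu0)^T S^{-1} (x_bar - mu0)
    F = (n - p) / (p * (n - 1)) * T2      # F statistic with p and n - p degrees of freedom
    crit = f.ppf(1.0 - alpha, p, n - p)   # upper alpha critical value F_alpha(p, n - p)
    return T2, F, f.sf(F, p, n - p), F >= crit   # T^2, F, p-value, rejection decision

# Example under H0 (data simulated from N(mu0, I)):
rng = np.random.default_rng(0)
mu0 = np.zeros(3)
X = rng.multivariate_normal(mu0, np.eye(3), size=50)
print(hotelling_test(X, mu0))

# The power against a fixed alternative mu can be obtained from the non-central
# F-distribution (scipy.stats.ncf) with non-centrality parameter
# n (mu - mu0)^T B^{-1} (mu - mu0), evaluated at the critical value above.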

The connection between Hotelling's test and the generalized likelihood-ratio test should be mentioned. Let

$$ L(\mu, B) = L(X_1, \dots, X_n; \mu, B) = \frac{| B^{-1} |^{n/2}}{(2\pi)^{np/2}} \exp \left\{ - \frac{1}{2} \sum_{i=1}^{n} (X_i - \mu)^T B^{-1} (X_i - \mu) \right\} $$

be the likelihood function computed from the sample $X_1, \dots, X_n$. The generalized likelihood-ratio test for testing the simple hypothesis $H_0$: $\mu = \mu_0$ against the compound alternative $H_1$: $\mu \neq \mu_0$ is constructed from the statistic

$$ \lambda = \lambda(X_1, \dots, X_n) = \frac{\sup_{B} L(\mu_0, B)}{\sup_{\mu, B} L(\mu, B)}. $$

The statistic $\lambda$ and the statistics $T^2$ and $F$ are related by:

$$ \lambda^{2/n} = \frac{n - 1}{T^2 + n - 1} = \frac{n - p}{pF + n - p}. $$
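This identity follows from a standard argument which the article does not spell out; a sketch is given here for convenience. For fixed $\mu$ the supremum of $L$ over $B$ is attained at $\hat{B}_\mu = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu)(X_i - \mu)^T$, and

$$ \sup_{B} L(\mu, B) = (2\pi e)^{-np/2} | \hat{B}_\mu |^{-n/2}, $$

so that $\lambda^{2/n} = | \hat{B} | / | \hat{B}_0 |$, where $\hat{B} = \hat{B}_{\overline{X}} = \frac{n-1}{n} S$ and $\hat{B}_0 = \hat{B}_{\mu_0} = \hat{B} + (\overline{X} - \mu_0)(\overline{X} - \mu_0)^T$. The determinant identity $| A + vv^T | = |A| (1 + v^T A^{-1} v)$ then gives

$$ \lambda^{2/n} = \frac{1}{1 + (\overline{X} - \mu_0)^T \hat{B}^{-1} (\overline{X} - \mu_0)} = \frac{1}{1 + T^2/(n - 1)} = \frac{n - 1}{T^2 + n - 1}. $$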

For testing the hypothesis $H_0$: $\mu = \mu_0$, Hotelling's test is uniformly most powerful among all tests that are invariant under similarity transformations (see Most-powerful test; Invariant test).
