Bahadur efficiency
The large-sample study of test statistics in a given hypothesis-testing problem is commonly based on the following concept of asymptotic Bahadur efficiency [a1], [a2] (cf. also Statistical hypotheses, verification of). Let $ \Theta _ {0} $
and $ \Theta _ {1} $
be the parametric sets corresponding to the null hypothesis and its alternative, respectively. Assume that large values of a test statistic (cf. Test statistics) $ T _ {n} = T _ {n} ( \mathbf x ) $
based on a random sample $ \mathbf x = ( x _ {1} , \dots, x _ {n} ) $
give evidence against the null hypothesis. For a fixed $ \theta \in \Theta _ {0} $
and a real number $ t $,
put $ F _ {n} ( t \mid \theta ) = {\mathsf P} _ \theta ( T _ {n} < t ) $
and let $ L _ {n} ( t \mid \theta ) = 1 - F _ {n} ( t \mid \theta ) $.
The random quantity $ L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta _ {0} ) $
is the $ {\mathsf P} $-value corresponding to the statistic $ T $
when $ \theta _ {0} $
is the true parametric value. For example, if $ L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta _ {0} ) < \alpha $,
the null hypothesis $ \Theta _ {0} = \{ \theta _ {0} \} $
is rejected at the significance level $ \alpha $.
If for $ \eta \in \Theta _ {1} $
with $ {\mathsf P} _ \eta $-probability one,
$$ {\lim\limits } _ {n \rightarrow \infty } 2n ^ {- 1 } { \mathop{\rm log} } L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta ) = - d ( \eta \mid \theta ) , $$
then $ d ( \eta \mid \theta ) $ is called the Bahadur exact slope of $ T $. The larger the Bahadur exact slope, the faster the rate of decay of the $ {\mathsf P} $-value under the alternative. It is known that for any $ T $, $ d ( \eta \mid \theta ) \leq 2K ( \eta, \theta ) $, where $ K ( \eta, \theta ) $ is the information number (the Kullback–Leibler information) corresponding to $ {\mathsf P} _ \eta $ and $ {\mathsf P} _ \theta $. A test statistic $ T $ is called Bahadur efficient at $ \eta $ if
$$ e _ {T} ( \eta ) = \inf _ \theta { \frac{1}{2} } d ( \eta \mid \theta ) = \inf _ \theta K ( \eta, \theta ) . $$
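For instance, if $ {\mathsf P} _ \theta $ is the normal law $ N ( \theta , 1 ) $ with mean $ \theta $, then the information number is

$$ K ( \eta , \theta ) = { \frac{( \eta - \theta ) ^ {2} }{2} } , $$

so that in this model no test statistic can have exact slope exceeding $ ( \eta - \theta ) ^ {2} $.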
The concept of Bahadur efficiency allows one to compare two (sequences of) test statistics $ T ^ {( 1 ) } $ and $ T ^ {( 2 ) } $ from the following perspective. Let $ N _ {i} $, $ i = 1,2 $, be the smallest sample size required to reject $ \Theta _ {0} $ by the test based on $ T ^ {( i ) } $ at the significance level $ \alpha $ on the basis of a random sample $ \mathbf x = ( x _ {1} , \dots ) $ when $ \eta $ is the true parametric value. The ratio $ { {N _ {2} } / {N _ {1} } } $ measures the relative efficiency of $ T ^ {( 1 ) } $ with respect to $ T ^ {( 2 ) } $. To eliminate the dependence on the arguments $ \alpha $, $ \mathbf x $ and $ \eta $, one usually considers the random variable which is the limit of this ratio as $ \alpha \rightarrow 0 $. In many situations this limit does not depend on $ \mathbf x $, and it represents the efficiency of $ T ^ {( 1 ) } $ against $ T ^ {( 2 ) } $ at $ \eta $ through the convenient formula
$$ {\lim\limits } _ {\alpha \rightarrow 0 } { \frac{N _ {2} }{N _ {1} } } = { \frac{d _ {1} ( \eta \mid \theta _ {0} ) }{d _ {2} ( \eta \mid \theta _ {0} ) } } , $$
where $ d _ {1} $ and $ d _ {2} $ are the corresponding Bahadur slopes.
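Heuristically, this can be seen as follows: by the definition of the exact slope, $ { \mathop{\rm log} } L _ {N _ {i} } \approx - N _ {i} d _ {i} ( \eta \mid \theta _ {0} ) /2 $ for the test based on $ T ^ {( i ) } $, and equating the attained $ {\mathsf P} $-value to $ \alpha $ yields

$$ N _ {i} \approx { \frac{2 { \mathop{\rm log} } ( 1/ \alpha ) }{d _ {i} ( \eta \mid \theta _ {0} ) } } \ \textrm{ as } \alpha \rightarrow 0 , $$

from which the displayed limit follows.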
To evaluate the exact slope, the following result ([a2], Thm. 7.2) is commonly used. Assume that for any $ \eta $, with $ {\mathsf P} _ \eta $-probability one, $ T _ {n} ( \mathbf x ) \rightarrow b ( \eta ) $ as $ n \rightarrow \infty $, and that the limit $ g _ \theta ( t ) = - {\lim\limits } 2n ^ {- 1 } { \mathop{\rm log} } L _ {n} ( t \mid \theta ) $ exists for $ t $ taking values in an open interval containing $ b ( \eta ) $ and is a continuous function there. Then the exact slope of $ T $ at $ ( \eta, \theta ) $ has the form $ d ( \eta \mid \theta ) = g _ \theta ( b ( \eta ) ) $. See [a4] for generalizations of this formula.
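For example, in the normal model above with $ \Theta _ {0} = \{ 0 \} $ and $ T _ {n} = n ^ {- 1 } ( x _ {1} + \dots + x _ {n} ) $, one has $ L _ {n} ( t \mid 0 ) = 1 - \Phi ( t \sqrt n ) $, where $ \Phi $ is the standard normal distribution function, so that

$$ g _ {0} ( t ) = - {\lim\limits } 2n ^ {- 1 } { \mathop{\rm log} } ( 1 - \Phi ( t \sqrt n ) ) = t ^ {2} , \quad t > 0 . $$

Since $ T _ {n} \rightarrow \eta $ with $ {\mathsf P} _ \eta $-probability one, the exact slope is $ d ( \eta \mid 0 ) = \eta ^ {2} = 2K ( \eta , 0 ) $, i.e., the sample mean is Bahadur efficient at every alternative $ \eta > 0 $.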
The exact Bahadur slopes of many classical tests have been found. See [a3].
References
[a1] R.R. Bahadur, "Rates of convergence of estimates and test statistics" Ann. Math. Stat. , 38 (1967) pp. 303–324
[a2] R.R. Bahadur, "Some limit theorems in statistics" , Regional Conf. Ser. Applied Math. , SIAM (1971)
[a3] Ya.Yu. Nikitin, "Asymptotic efficiency of nonparametric tests" , Cambridge Univ. Press (1995)
[a4] L.J. Gleser, "Large deviation indices and Bahadur exact slopes" Statistics & Decisions , 1 (1984) pp. 193–204
Bahadur efficiency. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bahadur_efficiency&oldid=45582