Similar statistic


From Encyclopedia of Mathematics
Jump to: navigation, search
(Importing text file)
 
m (tex encoded by computer)
Line 1: Line 1:
 +
<!--
 +
s0851401.png
 +
$#A+1 = 32 n = 0
 +
$#C+1 = 32 : ~/encyclopedia/old_files/data/S085/S.0805140 Similar statistic
 +
Automatically converted into TeX, above some diagnostics.
 +
Please remove this comment and the {{TEX|auto}} line below,
 +
if TeX found to be correct.
 +
-->
 +
 +
{{TEX|auto}}
 +
{{TEX|done}}
 +
 
A statistic having a fixed probability distribution under some compound hypothesis.
 
A statistic having a fixed probability distribution under some compound hypothesis.
  
Let the statistic <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851401.png" /> map the sample space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851402.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851403.png" />, into a measurable space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851404.png" /> and consider some compound hypothesis <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851405.png" />: <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851406.png" />. In that case, if for any event <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851407.png" /> the probability
+
Let the statistic $  T $
 +
map the sample space $  ( \mathfrak X , {\mathcal B} _ {\mathfrak X} , {\mathsf P} _  \theta  ) $,  
 +
$  \theta \in \Theta $,  
 +
into a measurable space $  ( \mathfrak A, {\mathcal B} _ {\mathfrak Y} ) $
 +
and consider some compound hypothesis $  H _ {0} $:  
 +
$  \theta \in \Theta _ {0} \subseteq \Theta $.  
 +
In that case, if for any event $  B \in {\mathcal B} _ {\mathfrak Y} $
 +
the probability
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851408.png" /></td> <td valign="top" style="width:5%;text-align:right;">(*)</td></tr></table>
+
$$ \tag{* }
 +
{\mathsf P} _  \theta  ( T  ^ {-} 1 ( B)) \  \textrm{ is }  \textrm{ independent } \
 +
\textrm{ of }  \theta  \textrm{ for }  \theta \in \Theta _ {0} ,
 +
$$
  
one says that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s0851409.png" /> is a similar statistic with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514010.png" />, or simply that it is a similar statistic. It is clear that condition (*) is equivalent to saying that the distribution of the statistic <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514011.png" /> does not vary when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514012.png" /> runs through <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514013.png" />. With this property in view, it is frequently said of a similar statistic that it is independent of the parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514014.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514015.png" />. Similar statistics play a large role in constructing similar tests, and also in solving statistical problems with nuisance parameters.
+
one says that $  T $
 +
is a similar statistic with respect to $  H _ {0} $,  
 +
or simply that it is a similar statistic. It is clear that condition (*) is equivalent to saying that the distribution of the statistic $  T $
 +
does not vary when $  \theta $
 +
runs through $  \Theta _ {0} $.  
 +
With this property in view, it is frequently said of a similar statistic that it is independent of the parameter $  \theta $,  
 +
$  \theta \in \Theta _ {0} $.  
 +
Similar statistics play a large role in constructing similar tests, and also in solving statistical problems with nuisance parameters.
  
Example 1. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514016.png" /> be independent random variables with identical normal distribution <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514017.png" /> with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514018.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514019.png" />. Then for any <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514020.png" /> the statistic
+
Example 1. Let $  X _ {1} \dots X _ {n} $
 +
be independent random variables with identical normal distribution $  N _ {1} ( a, \sigma  ^ {2} ) $
 +
with $  | a | < \infty $
 +
and  $  \sigma > 0 $.  
 +
Then for any $  \alpha > 0 $
 +
the statistic
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514021.png" /></td> </tr></table>
+
$$
 +
= \left ( {\sum _ { i= } 1 ^ { n }  ( X _ {i} - \overline{X}\; )  ^ {2} } \right ) ^
 +
{- \alpha } \sum _ { i= } 1 ^ { n }  ( X _ {i} - \overline{X}\; ) ^ {2 \alpha } ,
 +
$$
  
 
where
 
where
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514022.png" /></td> </tr></table>
+
$$
 +
\overline{X}\; =
 +
\frac{1}{n}
 +
\sum _ { i= } 1 ^ { n }  X _ {i} ,
 +
$$
  
is independent of the two-dimensional parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514023.png" />.
+
is independent of the two-dimensional parameter $  ( a, \sigma  ^ {2} ) $.
  
Example 2. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514024.png" /> be independent identically-distributed random variables whose distribution functions belong to the family <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514025.png" /> of all continuous distribution functions on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514026.png" />. If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514027.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514028.png" /> are empirical distribution functions constructed from the observations <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514029.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514030.png" />, respectively, then the Smirnov statistic
+
Example 2. Let $  X _ {1} \dots X _ {n+} m $
 +
be independent identically-distributed random variables whose distribution functions belong to the family $  {\mathcal F} = \{ F ( x) \} $
 +
of all continuous distribution functions on $  ( - \infty , + \infty ) $.  
 +
If $  F _ {n} ( x) $
 +
and $  F _ {m} ( x) $
 +
are empirical distribution functions constructed from the observations $  X _ {1} \dots X _ {n} $
 +
and $  X _ {n+} 1 \dots X _ {n+} m $,  
 +
respectively, then the Smirnov statistic
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514031.png" /></td> </tr></table>
+
$$
 +
S _ {n,m}  = \sup _ {| x| < \infty }  | F _ {n} ( x) - F _ {m} ( x) |
 +
$$
  
is similar with respect to the family <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085140/s08514032.png" />.
+
is similar with respect to the family $  {\mathcal F} $.
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  J.-L. Soler,  "Basic structures in mathematical statistics" , Moscow  (1972)  (In Russian; translated from French)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  Yu.V. Linnik,  "Statistical problems with nuisance parameters" , Amer. Math. Soc.  (1968)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  J.-R. Barra,  "Mathematical bases of statistics" , Acad. Press  (1981)  (Translated from French)</TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  J.-L. Soler,  "Basic structures in mathematical statistics" , Moscow  (1972)  (In Russian; translated from French)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  Yu.V. Linnik,  "Statistical problems with nuisance parameters" , Amer. Math. Soc.  (1968)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  J.-R. Barra,  "Mathematical bases of statistics" , Acad. Press  (1981)  (Translated from French)</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
 
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  E.L. Lehmann,  "Testing statistical hypotheses" , Wiley  (1986)</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  E.L. Lehmann,  "Testing statistical hypotheses" , Wiley  (1986)</TD></TR></table>

Revision as of 08:13, 6 June 2020


A statistic having a fixed probability distribution under some compound hypothesis.

Let the statistic $T$ map the sample space $(\mathfrak X, {\mathcal B}_{\mathfrak X}, {\mathsf P}_\theta)$, $\theta \in \Theta$, into a measurable space $(\mathfrak Y, {\mathcal B}_{\mathfrak Y})$, and consider some compound hypothesis $H_0$: $\theta \in \Theta_0 \subseteq \Theta$. If for any event $B \in {\mathcal B}_{\mathfrak Y}$ the probability

$$ \tag{*} {\mathsf P}_\theta ( T^{-1}(B) ) \textrm{ is independent of } \theta \textrm{ for } \theta \in \Theta_0, $$

one says that $T$ is a similar statistic with respect to $H_0$, or simply a similar statistic. Condition (*) is equivalent to saying that the distribution of the statistic $T$ does not vary as $\theta$ runs through $\Theta_0$. With this property in view, a similar statistic is frequently said to be independent of the parameter $\theta$, $\theta \in \Theta_0$. Similar statistics play a large role in constructing similar tests, and also in solving statistical problems with nuisance parameters.
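
The defining condition (*) can be checked by simulation in simple cases. Below is a minimal Monte Carlo sketch (in Python with NumPy; the sampling scheme and the statistic are illustrative choices, not taken from the article): for $X_1, X_2$ independent $N(\theta, 1)$ and the compound hypothesis $H_0$: $\theta \in \mathbf{R}$, the statistic $T = X_1 - X_2$ has the fixed $N(0, 2)$ distribution for every $\theta$, so it is similar with respect to $H_0$.

import numpy as np

rng = np.random.default_rng(0)
n_rep = 100_000  # Monte Carlo replications

def sample_t(theta):
    # Draw T = X_1 - X_2 from n_rep samples of size 2 from N(theta, 1).
    x = rng.normal(theta, 1.0, size=(n_rep, 2))
    return x[:, 0] - x[:, 1]

for theta in (-3.0, 0.0, 5.0):
    t = sample_t(theta)
    print("theta = %+.1f: mean = %+.3f, var = %.3f" % (theta, t.mean(), t.var()))

The printed moments are approximately $0$ and $2$ for every value of $\theta$, as condition (*) requires.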

Example 1. Let $X_1, \dots, X_n$ be independent random variables with the same normal distribution $N_1(a, \sigma^2)$, where $|a| < \infty$ and $\sigma > 0$. Then for any $\alpha > 0$ the statistic

$$ T = \left( \sum_{i=1}^{n} (X_i - \overline{X})^2 \right)^{-\alpha} \sum_{i=1}^{n} (X_i - \overline{X})^{2\alpha}, $$

where

$$ \overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i, $$

is independent of the two-dimensional parameter $(a, \sigma^2)$. Indeed, writing $X_i = a + \sigma Z_i$ with $Z_1, \dots, Z_n$ standard normal gives $X_i - \overline{X} = \sigma (Z_i - \overline{Z})$, so the factors $\sigma^{2\alpha}$ and $\sigma^{-2\alpha}$ cancel in $T$.
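
This invariance can be confirmed by a Monte Carlo check (a sketch in Python with NumPy; the parameter values are arbitrary, and the non-integer power $(X_i - \overline{X})^{2\alpha}$ is read as $((X_i - \overline{X})^2)^\alpha$): the empirical quartiles of $T$ coincide, up to simulation error, for different values of $(a, \sigma^2)$.

import numpy as np

rng = np.random.default_rng(1)
n, alpha, n_rep = 20, 1.5, 50_000

def sample_t(a, sigma):
    # n_rep samples of size n from N(a, sigma^2).
    x = rng.normal(a, sigma, size=(n_rep, n))
    d2 = (x - x.mean(axis=1, keepdims=True)) ** 2  # (X_i - Xbar)^2
    return d2.sum(axis=1) ** (-alpha) * (d2 ** alpha).sum(axis=1)

for a, sigma in ((0.0, 1.0), (10.0, 0.5), (-2.0, 3.0)):
    q = np.percentile(sample_t(a, sigma), [25, 50, 75])
    print("(a, sigma) = (%.1f, %.1f): quartiles =" % (a, sigma), q)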

Example 2. Let $X_1, \dots, X_{n+m}$ be independent identically-distributed random variables whose distribution functions belong to the family $\mathcal{F} = \{ F(x) \}$ of all continuous distribution functions on $(-\infty, +\infty)$. If $F_n(x)$ and $F_m(x)$ are the empirical distribution functions constructed from the observations $X_1, \dots, X_n$ and $X_{n+1}, \dots, X_{n+m}$, respectively, then the Smirnov statistic

$$ S_{n,m} = \sup_{|x| < \infty} | F_n(x) - F_m(x) | $$

is similar with respect to the family $\mathcal{F}$.
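
The distribution-freeness asserted in Example 2 can also be seen numerically: for continuous $F$ only the ranks of the pooled sample enter $S_{n,m}$. The following sketch (Python with NumPy; the exponential and Cauchy samples and the sample sizes are illustrative assumptions) draws both samples from two quite different continuous distributions and obtains matching empirical quantiles of $S_{n,m}$.

import numpy as np

rng = np.random.default_rng(2)
n, m, n_rep = 30, 40, 5_000

def smirnov(x, y):
    # S_{n,m} = sup_x |F_n(x) - F_m(x)|; the supremum of the two
    # right-continuous step functions is attained at a pooled data point.
    pts = np.concatenate([x, y])
    f_n = np.searchsorted(np.sort(x), pts, side="right") / len(x)
    f_m = np.searchsorted(np.sort(y), pts, side="right") / len(y)
    return np.abs(f_n - f_m).max()

for name, draw in (("exponential", lambda size: rng.exponential(size=size)),
                   ("Cauchy", lambda size: rng.standard_cauchy(size))):
    s = np.array([smirnov(draw(n), draw(m)) for _ in range(n_rep)])
    print(name, np.percentile(s, [50, 90, 99]))

Both rows agree up to simulation error, which is exactly the similarity of $S_{n,m}$ with respect to $\mathcal{F}$.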

References

[1] J.-L. Soler, "Basic structures in mathematical statistics", Moscow (1972) (in Russian; translated from French)
[2] Yu.V. Linnik, "Statistical problems with nuisance parameters", Amer. Math. Soc. (1968) (translated from Russian)
[3] J.-R. Barra, "Mathematical bases of statistics", Acad. Press (1981) (translated from French)

Comments

References

[a1] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1986)
How to Cite This Entry:
Similar statistic. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Similar_statistic&oldid=11699
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.