Neyman structure

A structure determined by a statistic that is independent of a sufficient statistic. The concept was introduced by J. Neyman (see [1]) in connection with the problem of constructing similar tests (cf. Similar test) in the theory of statistical hypothesis testing, and the term "Neyman structure" is used when referring to the structure of a statistical test if its critical function has Neyman structure. Suppose that, on the basis of a realization of a random variable $ X $ taking values in a sample space $ ( \mathfrak X , \mathfrak B , P _ \theta ) $, $ \theta \in \Theta $, it is required to test a composite hypothesis $ H _ {0} $: $ \theta \in \Theta _ {0} \subset \Theta $, and that for the family $ \{ {P _ \theta } : {\theta \in \Theta _ {0} } \} $ there exists a sufficient statistic $ T $ with distribution in the family $ \{ {P _ \theta ^ {T} } : {\theta \in \Theta _ {0} } \} $. Then a statistical test of level $ \alpha $ for testing $ H _ {0} $ is said to have Neyman structure if its critical function $ \phi $ satisfies the condition

$$ \tag{1 } {\mathsf E} \{ \phi ( X) \mid T = t \} = \alpha $$

almost everywhere with respect to the measure $ P _ \theta ^ {T} $, $ \theta \in \Theta _ {0} $ (by the sufficiency of $ T $, the conditional expectation in (1) does not depend on $ \theta \in \Theta _ {0} $). Evidently, if a statistical test has Neyman structure, then it is similar (cf. Similar test) relative to the family $ \{ {P _ \theta } : {\theta \in \Theta _ {0} } \} $, since

$$ {\mathsf E} _ \theta \{ \phi ( X) \} = \ {\mathsf E} _ \theta \{ {\mathsf E} \{ \phi ( X) \mid T = t \} \} = \alpha $$

for all $ \theta \in \Theta _ {0} $.

The validity of (1) essentially reduces the problem of testing the composite hypothesis $ H _ {0} $ to that of testing $ H _ {0} $ as a simple hypothesis for every fixed value $ t $ of the sufficient statistic $ T $.

Example. Suppose that two independent random variables $ X _ {1} $ and $ X _ {2} $ are subject to Poisson laws with unknown parameters $ \lambda _ {1} $ and $ \lambda _ {2} $ (cf. Poisson distribution), and that the hypothesis $ H _ {0} $: $ \lambda _ {1} = \lambda _ {2} $ is to be tested against the alternative $ H _ {1} $: $ \lambda _ {1} \neq \lambda _ {2} $. Thanks to the independence of $ X _ {1} $ and $ X _ {2} $, the statistic $ T = X _ {1} + X _ {2} $ is subject to the Poisson law with parameter $ \lambda _ {1} + \lambda _ {2} $, and the conditional distributions of $ X _ {1} $ and $ X _ {2} $ under the condition $ T = t $ are binomial with parameters $ t $, $ \lambda _ {1} / ( \lambda _ {1} + \lambda _ {2} ) $ and $ t $, $ \lambda _ {2} / ( \lambda _ {1} + \lambda _ {2} ) $, respectively, that is,

$$ \tag{2 } {\mathsf P} \{ X _ {i} = k \mid T = t \} = \ \left ( \begin{array}{c} t \\ k \end{array} \right ) \left ( \frac{\lambda _ {i} }{\lambda _ {1} + \lambda _ {2} } \right ) ^ {k} \left ( 1 - \frac{\lambda _ {i} }{\lambda _ {1} + \lambda _ {2} } \right ) ^ {t - k} , $$

$$ k = 0 , 1 ,\dots, t . $$
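The conditional law (2) is easy to check numerically. The following sketch (an illustration added here, not part of the original article; it assumes NumPy and SciPy, and the parameter values are arbitrary) simulates independent Poisson variables, conditions on $ T = t $, and compares the empirical frequencies of $ X _ {1} $ with the binomial probabilities in (2).

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
lam1, lam2 = 3.0, 5.0            # arbitrary illustrative parameters
n_sim, t = 200_000, 6            # condition on T = X1 + X2 = t

x1 = rng.poisson(lam1, n_sim)
x2 = rng.poisson(lam2, n_sim)
x1_given_t = x1[x1 + x2 == t]    # realizations of X1 conditional on T = t

p = lam1 / (lam1 + lam2)
for k in range(t + 1):
    empirical = np.mean(x1_given_t == k)
    theoretical = binom.pmf(k, t, p)   # formula (2) with i = 1
    print(f"k={k}: empirical {empirical:.4f}, binomial {theoretical:.4f}")
```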

When $ H _ {0} $ is valid, $ T $ is sufficient for the unknown common value $ \lambda = \lambda _ {1} = \lambda _ {2} $, and from (2) it follows that under $ H _ {0} $ the conditional distribution of $ X _ {1} $ for a fixed value of the sufficient statistic $ T = t $ is binomial with parameters $ t $ and $ 1 / 2 $; that is, under $ H _ {0} $,

$$ {\mathsf P} \{ X _ {1} = k \mid T = t \} = \ \left ( \begin{array}{c} t \\ k \end{array} \right ) \left ( { \frac{1}{2} } \right ) ^ {t} ,\ \ k = 0 , 1 ,\dots, t . $$

Thus, in this case the problem of testing the composite hypothesis $ H _ {0} $ reduces to that of testing the simple hypothesis $ H _ {0} ^ {t} $, according to which the conditional distribution of $ X _ {1} $ (for a fixed sum $ X _ {1} + X _ {2} = t $) is binomial with parameters $ t $ and $ 1 / 2 $. For testing $ H _ {0} ^ {t} $ one can use, for example, the sign test.
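A minimal sketch of such a conditional test, assuming NumPy and SciPy (the helper name `conditional_test` and the parameter values are chosen here only for illustration): given $ T = t $, it rejects $ H _ {0} $ when the two-sided binomial $ ( t , 1 / 2 ) $ p-value of $ X _ {1} $ falls below $ \alpha $. Because the conditional distribution is discrete, the non-randomized test below is conservative: its rejection rate stays at or below $ \alpha $ for every common value $ \lambda $, while exact equality in (1) would require randomization on the boundary of the critical region.

```python
import numpy as np
from scipy.stats import binom

def conditional_test(x1, t, alpha=0.05):
    """Two-sided exact conditional test of H0: lambda1 = lambda2.

    Given T = x1 + x2 = t, X1 is Binomial(t, 1/2) under H0, so H0 is
    rejected when the two-sided binomial p-value is below alpha.
    Hypothetical helper written for illustration only.
    """
    if t == 0:
        return False                       # nothing to condition on
    p_value = 2 * min(binom.cdf(x1, t, 0.5), binom.sf(x1 - 1, t, 0.5))
    return min(p_value, 1.0) < alpha

rng = np.random.default_rng(1)
alpha = 0.05
for lam in (0.5, 2.0, 10.0):               # different common values of lambda
    x1 = rng.poisson(lam, 50_000)
    x2 = rng.poisson(lam, 50_000)
    rate = np.mean([conditional_test(a, a + b, alpha) for a, b in zip(x1, x2)])
    print(f"lambda={lam}: rejection rate {rate:.4f} (nominal level {alpha})")
```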

The concept of a Neyman structure is of great significance in the problem of testing composite statistical hypotheses, since among the tests having Neyman structure there frequently is a most-powerful test. E. Lehmann and H. Scheffé have shown that a statistical test for testing a composite hypothesis $ H _ {0} $: $ \theta \in \Theta _ {0} $ has Neyman structure relative to a sufficient statistic $ T $ if and only if the family $ \{ {P _ \theta ^ {T} } : {\theta \in \Theta _ {0} } \} $ induced by $ T $ is boundedly complete. On the basis of the concept of a Neyman structure general methods have been worked out for the construction of similar tests. See Distributions, complete family of; Similar test.
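In one direction the argument is short: if the family $ \{ {P _ \theta ^ {T} } : {\theta \in \Theta _ {0} } \} $ is boundedly complete and $ \phi $ is a similar test of level $ \alpha $, put $ g ( t) = {\mathsf E} \{ \phi ( X) \mid T = t \} - \alpha $; then

$$ {\mathsf E} _ \theta \{ g ( T) \} = {\mathsf E} _ \theta \{ \phi ( X) \} - \alpha = 0 \ \ \textrm{ for all } \ \theta \in \Theta _ {0} , $$

and since $ | g | \leq 1 $, bounded completeness forces $ g ( t) = 0 $ almost everywhere with respect to every $ P _ \theta ^ {T} $, $ \theta \in \Theta _ {0} $, that is, $ \phi $ has Neyman structure. Conversely, a test with Neyman structure is similar, as shown above.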

References

[1] J. Neyman, "Current problems of mathematical statistics" , Proc. Internat. Congress Mathematicians (Amsterdam, 1954) , 1 , Noordhoff & North-Holland (1957) pp. 349–370
[2] E.L. Lehmann, "Testing statistical hypotheses" , Wiley (1986)
[3] Yu.V. Linnik, "Statistical problems with nuisance parameters" , Amer. Math. Soc. (1968) (Translated from Russian)
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.