# Neyman method of confidence intervals

One of the methods of confidence estimation, which makes it possible to obtain interval estimators (cf. Interval estimator) for unknown parameters of probability laws from results of observations. It was proposed and developed by J. Neyman (see [1], [2]). The essence of the method is as follows. Let $X_1, \dots, X_n$ be random variables whose joint distribution function $F(x, \theta)$ depends on a parameter $\theta \in \Theta \subset \mathbf{R}^1$, where $x = (x_1, \dots, x_n) \in \mathbf{R}^n$.
Suppose, next, that a statistic $T = T(X_1, \dots, X_n)$ with distribution function $G(t, \theta)$, $\theta \in \Theta$, is used as a point estimator of the parameter $\theta$. Then for any number $P$ in the interval $0.5 < P < 1$ one can define a system of two equations in $\theta$:

$$ \tag{*} G(T, \theta) = \left\{ \begin{array}{l} P, \\ 1 - P. \end{array} \right. $$

Under certain regularity conditions on $F(x, \theta)$, which are satisfied in almost all cases of practical interest, the system (*) has a unique solution

$$ \underline{\theta} = \underline{\theta}(T), \quad \overline{\theta} = \overline{\theta}(T), \quad \underline{\theta}, \overline{\theta} \in \Theta, $$

such that

$$ {\mathsf P} \{ \underline{\theta} < \theta < \overline{\theta} \mid \theta \} \geq 2P - 1. $$

The set $( \underline{\theta}, \overline{\theta} ) \subset \Theta$ is called the confidence interval (confidence estimator) for the unknown parameter $\theta$ with confidence probability $2P - 1$. The statistics $\underline{\theta}$ and $\overline{\theta}$ are called the lower and upper confidence bounds corresponding to the chosen confidence coefficient $P$. In turn, the number

$$ p = \inf_{\theta \in \Theta} {\mathsf P} \{ \underline{\theta} < \theta < \overline{\theta} \mid \theta \} $$

is called the confidence coefficient of the confidence interval $( \underline{\theta}, \overline{\theta} )$. Thus, Neyman's method of confidence intervals leads to interval estimators with confidence coefficient $p \geq 2P - 1$.
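When $G(t, \theta)$ is continuous and strictly monotone in $\theta$, the system (*) can be solved numerically by ordinary root-finding. A minimal Python sketch (the function names are illustrative, not from the source; the normal example at the end assumes a single observation $t$ from an $N(\theta, 1)$ law, so that $G(t, \theta) = \Phi(t - \theta)$):

```python
from statistics import NormalDist

def neyman_bounds(G, t, P, theta_lo, theta_hi, tol=1e-10):
    """Solve G(t, theta) = P and G(t, theta) = 1 - P for theta by bisection.

    Assumes G is continuous and strictly monotone in theta on
    [theta_lo, theta_hi]; returns the (lower, upper) confidence bounds,
    giving a confidence interval with confidence coefficient >= 2P - 1."""
    def solve(target):
        lo, hi = theta_lo, theta_hi
        decreasing = G(t, lo) > G(t, hi)   # direction of monotonicity in theta
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if (G(t, mid) > target) == decreasing:
                lo = mid                    # the root lies to the right of mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    a, b = solve(P), solve(1 - P)
    return min(a, b), max(a, b)

# Illustration: G(t, theta) = Phi(t - theta), decreasing in theta.
G = lambda t, theta: NormalDist().cdf(t - theta)
low, up = neyman_bounds(G, t=0.0, P=0.975, theta_lo=-10.0, theta_hi=10.0)
```

For this illustration the bounds come out as $t \mp \Phi^{-1}(P)$, in agreement with Example 1 below.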

Example 1. Suppose that independent random variables $X_1, \dots, X_n$ are subject to one and the same normal law $\Phi(x - \theta)$ whose mathematical expectation $\theta$ is unknown (cf. Normal distribution). Then the best estimator for $\theta$ is the sufficient statistic $\overline{X} = \sum_{i=1}^{n} X_i / n$, which is distributed according to the normal law $\Phi[\sqrt{n}(x - \theta)]$. Fixing $P$ in $0.5 < P < 1$ and solving the equations

$$ \Phi[\sqrt{n}(\overline{X} - \theta)] = P, \quad \Phi[\sqrt{n}(\overline{X} - \theta)] = 1 - P, $$

one finds the lower and upper confidence bounds

$$ \underline{\theta} = \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(P), \quad \overline{\theta} = \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(1 - P) $$

corresponding to the chosen confidence coefficient $ P $. Since

$$ \Phi^{-1}(y) + \Phi^{-1}(1 - y) \equiv 0, \quad y \in (0, 1), $$

the confidence interval for the unknown mathematical expectation $ \theta $ of the normal law $ \Phi ( x - \theta ) $ has the form

$$ \left( \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(P),\ \overline{X} + \frac{1}{\sqrt{n}} \Phi^{-1}(P) \right), $$

and its confidence coefficient is precisely $ 2P - 1 $.
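This interval is straightforward to compute. A short Python sketch (the function name is illustrative; the Python standard library's `statistics.NormalDist` supplies $\Phi^{-1}$):

```python
from statistics import NormalDist

def normal_mean_ci(sample, P=0.975):
    """Confidence interval for the mean theta of N(theta, 1) observations,
    with confidence coefficient 2P - 1 (0.95 for the default P)."""
    n = len(sample)
    xbar = sum(sample) / n                    # the sufficient statistic
    half = NormalDist().inv_cdf(P) / n**0.5   # Phi^{-1}(P) / sqrt(n)
    return xbar - half, xbar + half

low, up = normal_mean_ci([0.2, -0.1, 0.4, 0.3])
```

The interval is symmetric about $\overline{X}$, reflecting the identity $\Phi^{-1}(P) + \Phi^{-1}(1 - P) = 0$ used above.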

Example 2. Let $\mu$ be a random variable subject to the binomial law with parameters $n$ and $\theta$ (cf. Binomial distribution), that is, for any integer $m = 0, 1, \dots, n$,

$$ {\mathsf P} \{ \mu \leq m \mid n, \theta \} = \sum_{k=0}^{m} \binom{n}{k} \theta^k (1 - \theta)^{n-k} = I_{1-\theta}(n - m, m + 1), \quad 0 < \theta < 1, $$

where

$$ I_x(a, b) = \frac{1}{B(a, b)} \int\limits_0^x t^{a-1} (1 - t)^{b-1} \, dt $$

is the incomplete beta-function ($0 \leq x \leq 1$, $a > 0$, $b > 0$). If the "success" parameter $\theta$ is not known, then to determine the confidence bounds one has to solve, in accordance with Neyman's method of confidence intervals, the equations

$$ I_{1-\theta}(n - \mu, \mu + 1) = \left\{ \begin{array}{l} P, \\ 1 - P, \end{array} \right. $$

where $0.5 < P < 1$. The roots $\overline{\theta}$ and $\underline{\theta}$ of these equations, which are the upper and lower confidence bounds, respectively, corresponding to the confidence coefficient $P$, can be determined from tables of mathematical statistics [3]. The confidence coefficient of the resulting confidence interval $( \underline{\theta}, \overline{\theta} )$ is precisely $2P - 1$. Obviously, if an experiment gives $\mu = 0$, then $\underline{\theta} = 0$, and if $\mu = n$, then $\overline{\theta} = 1$.
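In place of tables, the two equations can be solved numerically, since the binomial distribution function is continuous and strictly decreasing in $\theta$. A Python sketch (the function names are illustrative; the boundary cases $\mu = 0$ and $\mu = n$ are handled by the conventions just stated; intervals of this kind are often called Clopper–Pearson intervals):

```python
from math import comb

def binom_cdf(m, n, theta):
    """P{mu <= m} for the binomial law, equal to I_{1-theta}(n - m, m + 1)."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k)
               for k in range(m + 1))

def binomial_bounds(mu, n, P=0.975, tol=1e-10):
    """Solve I_{1-theta}(n - mu, mu + 1) = P and = 1 - P for theta
    by bisection (the CDF is strictly decreasing in theta); the resulting
    interval has confidence coefficient 2P - 1."""
    def solve(target):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if binom_cdf(mu, n, mid) > target:
                lo = mid        # CDF still too large: root is to the right
            else:
                hi = mid
        return 0.5 * (lo + hi)

    low = 0.0 if mu == 0 else solve(P)       # lower bound (convention at mu = 0)
    up = 1.0 if mu == n else solve(1 - P)    # upper bound (convention at mu = n)
    return low, up

low, up = binomial_bounds(3, 10)   # e.g. 3 successes in 10 trials
```
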

Neyman's method of confidence intervals differs substantially from the Bayesian method (cf. Bayesian approach) and from the method based on Fisher's fiducial approach (cf. Fiducial distribution). In it the unknown parameter $\theta$ of the distribution function $F(x, \theta)$ is treated as a constant quantity, and the confidence interval $( \underline{\theta}(T), \overline{\theta}(T) )$ is constructed from an experiment in the course of which the value of the statistic $T$ is calculated. Consequently, according to Neyman's method, the probability that $\underline{\theta} < \theta < \overline{\theta}$ holds is the a priori probability that the confidence interval $( \underline{\theta}, \overline{\theta} )$ "covers" the unknown true value of the parameter $\theta$. In fact, Neyman's confidence method remains valid even if $\theta$ is a random variable, because the interval estimator is constructed from the outcome of an experiment and consequently does not depend on the a priori distribution of the parameter. Neyman's method has the advantage over the Bayesian and fiducial approaches of being independent of a priori information about the parameter $\theta$, and so, in contrast to Fisher's method, it is logically sound. In general, Neyman's method leads to a whole system of confidence intervals for the unknown parameter, and in this context there arises the problem of constructing an optimal interval estimator having, for example, the properties of being unbiased, accurate or similar; this problem can be solved within the framework of the theory of statistical hypothesis testing [5].

#### References

[1] J. Neyman, "On the problem of confidence intervals", Ann. Math. Stat., 6 (1935), pp. 111–116

[2] J. Neyman, "Outline of a theory of statistical estimation based on the classical theory of probability", Philos. Trans. Roy. Soc. London Ser. A, 236 (1937), pp. 333–380

[3] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics", Libr. Math. Tables, 46, Nauka (1983) (in Russian) (processed by L.S. Bark and E.S. Kedrova)

[4] L.N. Bol'shev, "On the construction of confidence limits", Theor. Probab. Appl., 10 (1965), pp. 173–177 (Teor. Veroyatnost. i Primenen., 10:1 (1965), pp. 187–192)

[5] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1986)

**How to Cite This Entry:**

Neyman method of confidence intervals. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Neyman_method_of_confidence_intervals&oldid=53128