Interval estimator

Latest revision as of 22:13, 5 June 2020


for the unknown true value of a scalar parameter of a probability distribution

An interval belonging to the set of admissible values of the parameter, with boundaries that are functions of the results of observations subject to the given distribution. Let $ X $ be a random variable taking values in a sample space $ ( \mathfrak X , \mathfrak B , {\mathsf P} _ \theta ) $, $ \theta \in \Theta $, $ \Theta $ an interval on the real axis, where the true value of $ \theta $ is unknown. An interval $ ( a _ {1} ( X) , a _ {2} ( X) ) \subseteq \Theta $ with boundaries that are functions of the random variable $ X $ being observed is called an interval estimator, or confidence interval, for $ \theta $; the number

$$ p = \inf _ {\theta \in \Theta } \ {\mathsf P} \{ a _ {1} ( X) < \theta < a _ {2} ( X) \mid \theta \} $$

is called the confidence coefficient of this confidence interval, and $ a _ {1} ( X) $ and $ a _ {2} ( X) $ are called the lower and upper confidence bounds, respectively. The concept of an interval estimator has been generalized to the case where it is required to estimate some function, or some value of it, depending on a parameter $ \theta $.
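The definitions above can be illustrated numerically. The following sketch in Python assumes, purely for illustration, that $ X = ( X _ {1} \dots X _ {n} ) $ is a sample from a normal distribution with unknown mean $ \theta $ and known variance; the resulting z-interval is a standard textbook construction, not part of this article:

```python
import math
import random

def z_interval(xs, sigma, z=1.959964):
    """Confidence interval (a1(X), a2(X)) for the mean theta of a normal
    sample with known standard deviation sigma; z is the 0.975-quantile
    of the standard normal, giving confidence coefficient 0.95."""
    n = len(xs)
    mean = sum(xs) / n
    half = z * sigma / math.sqrt(n)
    return mean - half, mean + half

# Empirical check of the confidence coefficient: over many repeated
# samples, the fraction of intervals containing the true theta should
# be close to 0.95, whatever theta is (here theta = 2.0, sigma = 1.0).
random.seed(0)
theta, sigma, n, reps = 2.0, 1.0, 25, 2000
covered = 0
for _ in range(reps):
    a1, a2 = z_interval([random.gauss(theta, sigma) for _ in range(n)], sigma)
    if a1 < theta < a2:
        covered += 1
print(covered / reps)  # close to 0.95
```

Note that the infimum over $ \theta $ in the definition is essential: the coefficient is the worst-case coverage probability over the whole parameter set, not the coverage at one convenient value of $ \theta $.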

Suppose that on a set $ T \subset \mathbf R ^ {1} $ a family of functions

$$ u ( \theta , \cdot ) = \ ( u _ {1} ( \theta , \cdot ) \dots u _ {k} ( \theta , \cdot ) ) : \ T \rightarrow \mathfrak U \subset \mathbf R ^ {k} , $$

$$ \theta = ( \theta _ {1} \dots \theta _ {m} ) \in \Theta \subset \mathbf R ^ {m} , $$

has been given, and suppose that it is required to estimate the function $ u ( \theta , \cdot ) $ corresponding to the unknown true value of $ \theta $ using the realization of a random vector $ X = ( X _ {1} \dots X _ {n} ) $ taking values in the sample space $ ( \mathfrak X , \mathfrak B , {\mathsf P} _ \theta ) $, $ \mathfrak X \subset \mathbf R ^ {m} $, $ \theta \in \Theta $. To each $ t \in T $ corresponds a set $ B ( t) $, which is the image of $ \Theta $ under $ u ( \cdot , t ) : \Theta \rightarrow B ( t) \subset \mathfrak U $. By definition, a set $ C ( X , t ) \subset B ( t) $ is called a confidence set for $ u ( \theta , t ) $ if $ u ( \theta , \cdot ) $ at $ t \in T $ has confidence probability

$$ {\mathsf P} \{ u ( \theta , t ) \in C ( X , t ) \ \mid \theta \} = p ( \theta , t ) $$

and confidence coefficient

$$ p ( t) = \ \inf _ {\theta \in \Theta } \ p ( \theta , t ) . $$

The totality of all confidence sets $ C ( X , t ) $ forms in $ \mathfrak U $ the confidence region $ C ( X) $ for $ u ( \theta , \cdot ) : T \rightarrow \mathfrak U $ with confidence probability

$$ {\mathsf P} \{ C ( X) \ni u ( \theta , \cdot ) : T \rightarrow \mathfrak U \mid \theta \} = \widetilde{p} ( \theta ) $$

and confidence coefficient

$$ p = \inf _ {\theta \in \Theta } \widetilde{p} ( \theta ) . $$

Sets of the type $ C ( X , t ) $ or $ C ( X) $ are called interval estimators for one value $ u ( \theta , t ) $ of a function $ u ( \theta , \cdot ) $ at a point and for the function $ u ( \theta , \cdot ) $, respectively.
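For a concrete (hypothetical) family, take $ u ( \theta , t ) = \theta t $. A pointwise confidence set $ C ( X , t ) $ can then be obtained by mapping an interval estimator for $ \theta $ through $ u ( \cdot , t ) $; the sketch below again assumes a normal sample with known variance, and both the family $ u $ and the z-interval are illustrative assumptions, not constructions from the article:

```python
import math
import random

def z_interval(xs, sigma, z=1.959964):
    """95% confidence interval (a1(X), a2(X)) for the mean theta of a
    normal sample with known standard deviation sigma."""
    n = len(xs)
    mean = sum(xs) / n
    half = z * sigma / math.sqrt(n)
    return mean - half, mean + half

def confidence_set(xs, sigma, t):
    """Pointwise confidence set C(X, t) for u(theta, t) = theta * t:
    the image of the interval for theta under u(., t)."""
    a1, a2 = z_interval(xs, sigma)
    return tuple(sorted((a1 * t, a2 * t)))  # endpoint order flips when t < 0

random.seed(1)
theta, sigma, n = 0.5, 1.0, 50
xs = [random.gauss(theta, sigma) for _ in range(n)]

# Every C(X, t) is the image of one and the same interval for theta, so
# for t != 0 the event "u(theta, t) in C(X, t)" coincides with the event
# a1(X) < theta < a2(X): the sets cover u(theta, t) at all t
# simultaneously, and the band C(X) inherits the confidence coefficient
# of the interval for theta.
hits = []
for t in (-1.0, 0.5, 2.0):
    lo, hi = confidence_set(xs, sigma, t)
    hits.append(lo < theta * t < hi)
print(hits)
```

In general the region $ C ( X) $ need not inherit the pointwise coefficient; it does here only because all the sets $ C ( X , t ) $ are driven by a single interval for the scalar $ \theta $.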

There are several approaches to the construction of interval estimators for independent parameters of a distribution. The best known are the Bayesian approach, based on Bayes' theorem; Fisher's method, based on the fiducial distribution (for Fisher's method, see [3]–[5]); the Neyman method of confidence intervals ([5], [8], [9]); and the method proposed by L.N. Bol'shev [6].

References

[1] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)
[2] R.A. Fisher, "Statistical methods and scientific inference" , Hafner (1973)
[3] S.N. Bernshtein, "On "fiducial" probabilities of Fisher" Izv. Akad. Nauk SSSR Ser. Mat. , 5 (1941) pp. 85–94 (In Russian) (English abstract)
[4] L.N. Bol'shev, "Criticism on "Bernshtein: On fiducial probabilities of Fisher" " , Collected works of S.N. Bernstein , 4 , Moscow (1964) pp. 566–569 (In Russian)
[5] J. Neyman, "Silver jubilee of my dispute with Fisher" J. Oper. Res. Soc. Japan , 3 : 4 (1961) pp. 145–154
[6] L.N. Bol'shev, "On the construction of confidence limits" Theor. Probab. Appl. , 10 (1965) pp. 173–177 Teor. Veroyatnost. i Primenen. , 10 : 1 (1965) pp. 187–192
[7] L.N. Bol'shev, E.A. Loginov, "Interval estimates in the presence of nuisance parameters" Theor. Probab. Appl. , 11 (1966) pp. 82–94 Teor. Veroyatnost. i Primenen. , 11 : 1 (1966) pp. 94–107
[8] J. Neyman, "Fiducial argument and the theory of confidence intervals" Biometrika , 32 : 2 (1941) pp. 128–150
[9] J. Neyman, "Outline of a theory of statistical estimation based on the classical theory of probability" Philos. Trans. Roy. Soc. London , 236 (1937) pp. 333–380

Comments

References

[a1] E.L. Lehmann, "Testing statistical hypotheses" , Wiley (1986)
How to Cite This Entry:
Interval estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Interval_estimator&oldid=17171
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article