A term used in mathematical statistics as a name for functions of the results of observations.
Let a random variable $X$ take values in the sample space $(\mathfrak X, \mathcal B, \mathsf P^X)$. Any $\mathcal B$-measurable mapping $T(\cdot)$ from $\mathfrak X$ into a measurable space $(\mathfrak Y, \mathcal A)$ is then called a statistic, and the probability distribution of the statistic $T$ is defined by the formula
$$ \mathsf P^T\{B\} = \mathsf P\{T(X) \in B\} = \mathsf P\{X \in T^{-1}(B)\} = \mathsf P^X\{T^{-1}(B)\} \qquad (\forall B \in \mathcal A). $$
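The defining relation above is just the pushforward of $\mathsf P^X$ under $T$. As a minimal numerical sketch (not part of the original article; the discrete sample space and the parity statistic below are hypothetical choices), assuming NumPy:

```python
import numpy as np

# Hypothetical discrete sample space: X takes the values 0..3 with these probabilities.
p_x = np.array([0.1, 0.2, 0.3, 0.4])   # P^X on {0, 1, 2, 3}
values = np.arange(4)

def T(x):
    return x % 2                       # the statistic T: parity of X

# Pushforward formula P^T{B} = P^X{T^{-1}(B)}: sum P^X over each preimage.
p_t = {t: float(p_x[T(values) == t].sum()) for t in (0, 1)}
print(p_t)  # {0: 0.4, 1: 0.6}  (0.4 = 0.1 + 0.3, 0.6 = 0.2 + 0.4)
```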
Examples.
1) Let $X_1, \dots, X_n$ be independent, identically distributed random variables with finite variance. The statistics
$$ \overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i \quad \textrm{and} \quad s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 $$
are then unbiased estimators for the mathematical expectation $\mathsf E X_1$ and the variance $\mathsf D X_1$, respectively.
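Unbiasedness can be checked by simulation: averaging $\overline X$ and $s^2$ over many independent samples should approach $\mathsf E X_1$ and $\mathsf D X_1$. A short sketch (assuming NumPy; the normal distribution and its parameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 200_000
# reps independent samples of size n with E X_1 = 2 and D X_1 = 9.
samples = rng.normal(loc=2.0, scale=3.0, size=(reps, n))

x_bar = samples.mean(axis=1)        # \bar X for each sample
s2 = samples.var(axis=1, ddof=1)    # s^2, i.e. the 1/(n-1) variant

print(x_bar.mean())  # ≈ 2.0, the mathematical expectation E X_1
print(s2.mean())     # ≈ 9.0, the variance D X_1
```

Note that `ddof=1` selects precisely the $1/(n-1)$ normalization that makes $s^2$ unbiased; the default `ddof=0` gives the biased $1/n$ version.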
2) The terms of the variational series (series of order statistics, cf. Order statistic)
$$ X_{(1)} \leq \dots \leq X_{(n)}, $$
constructed from the observations $X_1, \dots, X_n$, are statistics.
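In computational terms the variational series is simply the sorted sample; a one-line sketch (assuming NumPy):

```python
import numpy as np

x = np.array([3.1, -0.4, 2.2, 0.7])   # observations X_1, ..., X_n
variational_series = np.sort(x)       # X_(1) <= ... <= X_(n)
print(variational_series)             # [-0.4  0.7  2.2  3.1]
```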
3) Let the random variables $X_1, \dots, X_n$ form a stationary stochastic process with spectral density $f(\cdot)$. In this case the statistic
$$ I_n(\lambda) = \frac{1}{2\pi n} \left| \sum_{k=1}^{n} X_k e^{-ik\lambda} \right|^2, \qquad \lambda \in [-\pi, \pi], $$
called the periodogram, is an asymptotically-unbiased estimator for $f(\cdot)$ under certain regularity conditions on $f(\cdot)$, i.e.
$$ \lim_{n \rightarrow \infty} \mathsf E\, I_n(\lambda) = f(\lambda), \qquad \lambda \in [-\pi, \pi]. $$
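A direct transcription of the periodogram formula into code (a sketch assuming NumPy; the white-noise test signal is an arbitrary illustration, chosen because its spectral density is the constant $\mathsf D X_1 / 2\pi$):

```python
import numpy as np

def periodogram(x, lams):
    """I_n(lambda) = |sum_{k=1}^n X_k e^{-ik lambda}|^2 / (2 pi n)."""
    n = len(x)
    k = np.arange(1, n + 1)
    # Each row of `phases` holds e^{-ik lambda} for one lambda and k = 1..n.
    phases = np.exp(-1j * np.outer(lams, k))
    return np.abs(phases @ x) ** 2 / (2 * np.pi * n)

rng = np.random.default_rng(1)
x = rng.normal(size=1024)            # white noise, f(lambda) = 1/(2 pi)
lams = np.linspace(-np.pi, np.pi, 9)
print(periodogram(x, lams))          # values scattered around 1/(2 pi) ≈ 0.159
```

The individual values fluctuate around $f(\lambda)$ rather than converging to it: the periodogram is asymptotically unbiased but not a consistent estimator, which is why it is usually smoothed in practice.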
In the theory of estimation and of statistical hypothesis testing, great importance is attached to the concept of a sufficient statistic, which reduces the data without any loss of information about the (parametric) family of distributions under consideration.
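A standard concrete instance (added here as a hedged illustration, not from the original article): for i.i.d. Bernoulli($\theta$) observations the likelihood $\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}$ depends on the sample only through the sufficient statistic $T = \sum X_i$, so two samples with the same sum carry the same information about $\theta$:

```python
import numpy as np

def bernoulli_likelihood(x, theta):
    # theta^(sum x) * (1 - theta)^(n - sum x) for a 0/1 sample x
    s = x.sum()
    return theta ** s * (1 - theta) ** (len(x) - s)

x1 = np.array([1, 0, 1, 0, 0])   # sum = 2
x2 = np.array([0, 0, 0, 1, 1])   # same sum, different order
thetas = np.linspace(0.1, 0.9, 5)
print(np.allclose(bernoulli_likelihood(x1, thetas),
                  bernoulli_likelihood(x2, thetas)))  # True: likelihoods coincide
```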
References
[1] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1988)
[2] V.G. Voinov, M.S. Nikulin, "Unbiased estimates and their applications", Moscow (1989) (In Russian)
Statistics. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Statistics&oldid=14854