Errors, theory of

From Encyclopedia of Mathematics

The branch of [[Mathematical statistics|mathematical statistics]] devoted to the inference of accurate conclusions about the numerical values of approximately measured quantities, as well as about the errors in the measurements. Repeated measurements of one and the same constant quantity generally give different results, since every measurement contains a certain error. There are three basic types of error: systematic, gross and random. Systematic errors always either overestimate or underestimate the results of measurements and arise for specific reasons (incorrect set-up of measuring equipment, the effect of environment, etc.), which systematically affect the measurements and alter them in one direction. The estimation of systematic errors is achieved using methods which go beyond the confines of mathematical statistics (see [[Processing of observations|Processing of observations]]). Gross errors (often called outliers) arise from miscalculations, incorrect reading of the measuring equipment, etc. The results of measurements which contain gross errors differ greatly from other results of measurements and are therefore often easy to identify. Random errors arise from various causes which have an unforeseen effect on each of the measurements, both in overestimating and in underestimating results.

The theory of errors is only concerned with the study of gross and random errors. The basic problems of the theory of errors are to study the distribution laws of random errors, to seek estimates (see [[Statistical estimator|Statistical estimator]]) of unknown parameters using the results of measurements, to establish the errors in these estimates, and to identify gross errors. Let the values $Y_1, \dots, Y_n$ be obtained as a result of $n$ independent, equally accurate measurements of a certain unknown variable $\mu$. The differences

$$\delta_1 = Y_1 - \mu, \quad \dots, \quad \delta_n = Y_n - \mu,$$

are called the true errors. In terms of the probability theory of errors all $\delta_i$ are treated as random variables; independence of measurements is understood to be mutual independence of the random variables $\delta_1, \dots, \delta_n$. The equal accuracy of the measurements is treated broadly as an identical distribution: the true errors of equally accurate measurements are identically distributed random variables. The [[Mathematical expectation|mathematical expectation]] of the true errors $b = {\mathsf E}\delta_1 = \dots = {\mathsf E}\delta_n$ is then called the systematic error, while the differences $\delta_1 - b, \dots, \delta_n - b$ are called the random errors. Thus, the absence of a systematic error means that $b = 0$, and, in this situation, $\delta_1, \dots, \delta_n$ are random errors. The variable $1/(\sigma\sqrt{2})$, where $\sigma$ is the [[Standard deviation|standard deviation]], is called the measure of accuracy (when a systematic error occurs, the measure of accuracy is expressed by the relation $1/\sqrt{2(b^2 + \sigma^2)}$).

Equal accuracy of measurements is understood in a narrow sense to mean the equality of the measure of accuracy for all the results of the measurements. The incidence of gross errors signifies the disruption of equal accuracy (in both the broad and the narrow sense) for certain specific measurements. As an estimator of the unknown value $\mu$ one usually takes the arithmetic mean of the results of the measurements:

  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036240/e03624015.png" /></td> </tr></table>
+
$$
 +
\overline{Y}\; =
 +
\frac{1}{n}
 +
\sum _ { i= } 1 ^ { n }  Y _ {i} ,
 +
$$
  
while the differences $\Delta_1 = Y_1 - \overline{Y}, \dots, \Delta_n = Y_n - \overline{Y}$ are called the apparent errors. The choice of $\overline{Y}$ as an estimator for $\mu$ is based on the fact that for a sufficiently large number $n$ of equally accurate measurements with no systematic error, $\overline{Y}$, with probability arbitrarily close to one, differs by an arbitrarily small amount from the unknown variable $\mu$ (see [[Law of large numbers|Law of large numbers]]); $\overline{Y}$ is free of systematic errors (estimators with this property are called unbiased, cf. also [[Unbiased estimator|Unbiased estimator]]), and its variance is

$${\mathsf D}\overline{Y} = {\mathsf E}(\overline{Y} - \mu)^2 = \frac{\sigma^2}{n}.$$

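As a numerical illustration of the above, the following sketch simulates $n$ equally accurate measurements and computes the arithmetic mean and the apparent errors (the values of $\mu$, $\sigma$ and $n$ are purely illustrative, not taken from the article):

```python
import numpy as np

# Simulated example: n equally accurate measurements of an unknown mu
# (mu_true and sigma are illustrative values, unknown in practice)
rng = np.random.default_rng(0)
mu_true, sigma, n = 10.0, 0.5, 100
Y = mu_true + sigma * rng.normal(size=n)   # Y_i = mu + delta_i

Y_bar = Y.mean()        # arithmetic mean, the usual estimator of mu
Delta = Y - Y_bar       # apparent errors Delta_i = Y_i - Y_bar
```

By construction the apparent errors sum to zero, and here the standard error of $\overline{Y}$ is $\sigma/\sqrt{n} = 0.05$, so the mean typically lies within a few hundredths of $\mu$.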
Experience has shown that in practice the random errors $\delta_i$ are very often subject to almost normal distributions (the reasons for this are revealed in the so-called [[Limit theorems|limit theorems]] of probability theory). In this case the variable $\overline{Y}$ has an almost normal distribution with mathematical expectation $\mu$ and variance $\sigma^2/n$. If the distributions of the $\delta_i$ are exactly normal, then the variance of every other unbiased estimator for $\mu$, for example the median (cf. [[Median (in statistics)|Median (in statistics)]]), is not less than ${\mathsf D}\overline{Y}$. If the distribution of the $\delta_i$ is not normal, then the latter property need not hold (see the example under [[Rao–Cramér inequality|Rao–Cramér inequality]]).

If the variance $\sigma^2$ of separate measurements is previously unknown, then the variable

$$s^2 = \frac{1}{n-1} \sum_{i=1}^{n} \Delta_i^2$$

is used as an estimator for it (${\mathsf E}s^2 = \sigma^2$, i.e. $s^2$ is an unbiased estimator for $\sigma^2$). If the random errors $\delta_i$ have a normal distribution, then the relation

$$t = \frac{(\overline{Y} - \mu)\sqrt{n}}{s}$$

is subject to the [[Student distribution|Student distribution]] with $n-1$ degrees of freedom. This can be used to estimate the error of the approximate equality $\mu \approx \overline{Y}$ (see [[Least squares, method of|Least squares, method of]]).

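A common use of this fact is a confidence interval for $\mu$; a minimal sketch with SciPy (the measurement values are purely illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical equally accurate measurements; values are illustrative only
Y = np.array([10.1, 9.9, 10.3, 10.0, 9.8, 10.2])
n = len(Y)
Y_bar = Y.mean()
s = Y.std(ddof=1)          # unbiased-variance-based estimate of sigma

# 95% confidence interval for mu from the Student distribution, n-1 dof
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / np.sqrt(n)
lo, hi = Y_bar - half_width, Y_bar + half_width
```

The interval $[\overline{Y} - t\,s/\sqrt{n},\ \overline{Y} + t\,s/\sqrt{n}]$ then covers $\mu$ with the chosen probability under the normality assumption.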
The variable $(n-1)s^2/\sigma^2$ has under the same assumptions a [[Chi-squared distribution|"chi-squared" distribution]] with $n-1$ degrees of freedom. This enables one to estimate the error of the approximate equality $\sigma \approx s$. It can be demonstrated that the relative error will not exceed the number $q$ with probability

  
$$\omega = F(z_2, n-1) - F(z_1, n-1),$$

where $F(z, n-1)$ is the $\chi^2$-distribution function,

  
$$z_1 = \frac{\sqrt{n-1}}{1+q}, \qquad z_2 = \frac{\sqrt{n-1}}{1-q}.$$

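The probability $\omega$ can be evaluated numerically. The sketch below makes the assumption that $F(z, n-1)$ denotes the distribution function of $\sqrt{\chi^2_{n-1}}$ (the chi distribution), which matches the chi-scale arguments $z_1, z_2$:

```python
import numpy as np
from scipy import stats

def relative_error_prob(q, n):
    """Probability that the relative error of sigma ~ s does not exceed q.
    Assumes F(z, n-1) is the cdf of the chi distribution with n-1 dof
    (an interpretation of the article's F, stated here as an assumption)."""
    z1 = np.sqrt(n - 1) / (1 + q)
    z2 = np.sqrt(n - 1) / (1 - q)
    return stats.chi.cdf(z2, df=n - 1) - stats.chi.cdf(z1, df=n - 1)
```

As expected, the probability grows both with the tolerated relative error $q$ and with the number of measurements $n$.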
If certain measurements contain gross errors, then the above rules for estimating $\mu$ and $\sigma$ will give distorted results. It is therefore very important to be able to differentiate measurements which contain gross errors from those which are subject only to random errors $\delta_i$. For the case where the $\delta_i$ are independent and have an identical normal distribution, a comprehensive method for identifying measurements which contain gross errors was proposed by N.V. Smirnov [[#References|[3]]].
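Smirnov's criterion uses the exact distribution of the maximum deviation; as a rough, generic illustration only (not Smirnov's actual test), one can flag observations whose apparent error is large relative to the sample standard deviation (the threshold value here is an arbitrary choice):

```python
import numpy as np

def flag_gross_errors(Y, threshold=2.5):
    """Crude screen for gross errors: flag observations whose apparent
    error exceeds `threshold` sample standard deviations.  A generic
    illustration, not Smirnov's criterion."""
    Y = np.asarray(Y, dtype=float)
    Delta = Y - Y.mean()          # apparent errors
    s = Y.std(ddof=1)             # sample standard deviation
    return np.abs(Delta) > threshold * s
```

Note that a single gross error inflates $s$ itself, which can mask the outlier; this is one motivation for the robust methods discussed in the comments below.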
  
 
====References====

<table><TR><TD valign="top">[1]</TD> <TD valign="top">  Yu.V. Linnik,  "Methode der kleinsten Quadrate in moderner Darstellung" , Deutsch. Verlag Wissenschaft.  (1961)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  L.N. Bol'shev,  N.V. Smirnov,  "Tables of mathematical statistics" , ''Libr. math. tables'' , '''46''' , Nauka  (1983)  (In Russian)  (Processed by L.S. Bark and E.S. Kedrova)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  N.V. Smirnov,  "On the estimation of the maximum term in a series of observations"  ''Dokl. Akad. Nauk SSSR'' , '''33''' :  5  (1941)  pp. 346–349  (In Russian)</TD></TR></table>
 
 
  
 
====Comments====

Modern developments in the treatment of errors include robust estimation and outlier detection and treatment.
  
The intuitive definition of an outlier is: an observation which deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism (than the one under observation). This includes errors such as can arise, for instance, because data were copied incorrectly. Outliers are dangerous for many statistical procedures. One way to deal with outliers is to use outlier tests to accept or reject the hypothesis that an observation $x^*$ belongs to a sample from a given random variable. The observations for which this hypothesis is rejected are then removed. Other methods for dealing with outliers include censoring, the use of robust methods and Winsorization (which is robust), whose underlying idea is to move all excessively outlying observations in some systematic way to a position near the more central observations. One mechanism which may cause (apparent) outliers is when the data come from a heavy-tailed distribution. Another one is when the data come from two distributions: a basic one which yields "good" observations and a second, contaminating, distribution. Which treatment of outliers is appropriate depends of course heavily on the mechanism generating the outliers. A selection of references on outliers and methods for treating them is [[#References|[a1]]]–.
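A minimal sketch of Winsorization (the choice of $k$ and the data values are illustrative):

```python
import numpy as np

def winsorize(Y, k=1):
    """Replace the k smallest values by the (k+1)-th smallest and the
    k largest by the (k+1)-th largest -- a simple form of Winsorization.
    Returns the sorted, Winsorized sample."""
    Y = np.sort(np.asarray(Y, dtype=float))
    Y[:k] = Y[k]          # pull the lower tail in
    Y[-k:] = Y[-k - 1]    # pull the upper tail in
    return Y
```

For example, `winsorize([1, 2, 3, 4, 100], k=1)` moves the extreme value 100 to 4 and the value 1 to 2, rather than discarding them as trimming would.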
  
Robust statistics in a loose sense tries to deal with the fact that many often-made assumptions, such as normality, linearity and independence, are at best approximations to the real situation. Thus, one looks for tests, statistical procedures, etc., which are, for instance, insensitive to the assumption that the underlying distribution is normal. Let the statistical model being used be, e.g., a parametrized family of distributions $F(\theta)$ conceived of as lying in a larger space of distributions $S$. One main aspect of robust statistics is then the study of the effects of deformations of $F(\theta)$ in $S$ on the various statistical procedures being used. Similar concerns have motivated the study of deformations in other parts of mathematics, for instance deformations of dynamical systems. More generally, robust statistics is concerned with statistical concepts which describe the behaviour of statistical procedures not only under parametric models but also in neighbourhoods of such models.
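The effect of a contaminating distribution on a non-robust versus a robust estimator can be seen in a small sketch (all data values are illustrative): the median moves far less than the mean when a single contaminated observation is added.

```python
import numpy as np

# Contamination model: most observations from the basic distribution,
# one from a contaminating distribution (values illustrative)
good = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.1]
contaminated = good + [25.0]   # one "bad" observation

mean_shift = abs(np.mean(contaminated) - np.mean(good))
median_shift = abs(np.median(contaminated) - np.median(good))
```

Here the mean is dragged noticeably toward the contaminating value, while the median barely changes; this insensitivity is the behaviour robust procedures aim for.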
  
 
Robust statistics is by now a large and active field. A selection of books on the topic is [[#References|[a5]]]–[[#References|[a7]]].

Revision as of 19:37, 5 June 2020


The branch of mathematical statistics devoted to the inference of accurate conclusions about the numerical values of approximately measured quantities, as well as on the errors in the measurements. Repeated measurements of one and the same constant quantity generally give different results, since every measurement contains a certain error. There are three basic types of error: systematic, gross and random. Systematic errors always either overestimate or underestimate the results of measurements and arise for specific reasons (incorrect set-up of measuring equipment, the effect of environment, etc.), which systematically affect the measurements and alter them in one direction. The estimation of systematic errors is achieved using methods which go beyond the confines of mathematical statistics (see Processing of observations). Gross errors (often called outliers) arise from miscalculations, incorrect reading of the measuring equipment, etc. The results of measurements which contain gross errors differ greatly from other results of measurements and are therefore often easy to identify. Random errors arise from various reasons which have an unforeseen effect on each of the measurements, both in overestimating and in underestimating results.

The theory of errors is only concerned with the study of gross and random errors. The basic problems of the theory of errors are to study the distribution laws of random errors, to seek estimates (see Statistical estimator) of unknown parameters using the results of measurements, to establish the errors in these estimates, and to identify gross errors. Let the values $ Y _ {1} \dots Y _ {n} $ be obtained as a result of $ n $ independent, equally accurate measurements of a certain unknown variable $ \mu $. The differences

$$ \delta _ {1} = Y _ {1} - \mu \dots \delta _ {n} = Y _ {n} - \mu , $$

are called the true errors. In terms of the probability theory of errors all $ \delta _ {i} $ are treated as random variables; independence of measurements is understood to be mutual independence of the random variables $ \delta _ {1} \dots \delta _ {n} $. The equal accuracy of the measurements is treated broadly as an identical distribution: The true errors of equally accurate measurements are identically distributed random variables. The mathematical expectation of the true errors $ b = {\mathsf E} \delta _ {i} = \dots = {\mathsf E} \delta _ {n} $ is then called the systematic error, while the differences $ \delta _ {1} - b \dots \delta _ {n} - b $ are called the random errors. Thus, the absence of a systematic error means that $ b = 0 $, and, in this situation, $ \delta _ {1} \dots \delta _ {n} $ are random errors. The variable $ 1/ \sigma \sqrt 2 $, where $ \sigma $ is the standard deviation, is called the measure of accuracy (when a systematic error occurs, the measure of accuracy is expressed by the relation $ 1/ \sqrt {2( b ^ {2} + \sigma ^ {2} ) } $). Equal accuracy of measurements is understood in a narrow sense to mean the equality of the measure of accuracy for all the results of the measurements. The incidence of gross errors signifies the disruption of equal accuracy (both in the broad and narrow sense) for certain specific measurements. As an estimator of the unknown value $ \mu $ one usually takes the arithmetic mean from the results of the measurements:

$$ \overline{Y}\; = \frac{1}{n} \sum _ { i= } 1 ^ { n } Y _ {i} , $$

while the differences $ \Delta _ {1} = Y _ {1} - \overline{Y}\; \dots \Delta _ {n} = Y _ {n} - \overline{Y}\; $ are called the apparent errors. The choice of $ \overline{Y}\; $ as an estimator for $ \mu $ is based on the fact that for any sufficiently large number $ n $ of equally accurate measurements with no systematic error, $ \overline{Y}\; $, with probability arbitrarily close to one, differs by an arbitrarily small amount from the unknown variable $ \mu $( see Law of large numbers); $ \overline{Y}\; $ is free of systematic errors (estimators with this property are called unbiased, cf. also Unbiased estimator), and its variance is

$$ {\mathsf D} \overline{Y}\; = {\mathsf E} ( \overline{Y}\; - \mu ) ^ {2} = \frac{\sigma ^ {2} }{n} . $$
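The unbiasedness of $ \overline{Y}\; $ and the relation $ {\mathsf D} \overline{Y}\; = \sigma ^ {2} /n $ can be illustrated by simulation. The following sketch is an editorial illustration, not part of the original article; the values of $ \mu $, $ \sigma $, $ n $ and the number of trials are arbitrary. It draws repeated equally accurate samples with normal random errors and no systematic error ( $ b = 0 $):

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 25, 20000

# Each trial: n equally accurate measurements Y_i = mu + delta_i,
# with delta_i normal, mean 0 (no systematic error), std. deviation sigma.
means = []
for _ in range(trials):
    sample = [mu + random.gauss(0.0, sigma) for _ in range(n)]
    means.append(statistics.fmean(sample))

# The average of the sample means should be close to mu (unbiasedness) ...
bias = statistics.fmean(means) - mu
# ... and their variance close to sigma^2 / n.
var_of_mean = statistics.pvariance(means)

print(round(bias, 4), round(var_of_mean, 4), sigma ** 2 / n)
```

With these parameters the empirical variance of $ \overline{Y}\; $ comes out near $ 4/25 = 0.16 $, in agreement with the formula.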

Experience has shown that in practice the random errors $ \delta _ {i} $ are very often subject to almost normal distributions (the reasons for this are revealed in the so-called limit theorems of probability theory). In this case the variable $ \overline{Y}\; $ has an almost normal distribution with mathematical expectation $ \mu $ and variance $ \sigma ^ {2} /n $. If the distributions of $ \delta _ {i} $ are exactly normal, then the variance of every other unbiased estimator for $ \mu $, for example the median (cf. Median (in statistics)), is not less than $ {\mathsf D} \overline{Y}\; $. If the distribution of $ \delta _ {i} $ is not normal, then the latter property need not hold (see the example under Rao–Cramér inequality).

If the variance $ \sigma ^ {2} $ of separate measurements is previously unknown, then the variable

$$ s ^ {2} = \frac{1}{n-1} \sum _ {i= 1} ^ { n } \Delta _ {i} ^ {2} $$

is used as an estimator for it ( $ {\mathsf E} s ^ {2} = \sigma ^ {2} $, i.e. $ s ^ {2} $ is an unbiased estimator for $ \sigma ^ {2} $). If the random errors $ \delta _ {i} $ have a normal distribution, then the relation

$$ t = \frac{( \overline{Y}\; - \mu ) \sqrt n }{s} $$

is subject to the Student distribution with $ n- 1 $ degrees of freedom. This can be used to estimate the error of the approximate equality $ \mu \approx \overline{Y}\; $ (see Least squares, method of).
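As an editorial illustration (not from the original article), the Student relation yields a confidence interval $ \overline{Y}\; \pm t _ {n-1} s/ \sqrt n $ for $ \mu $. The data below are hypothetical, and the critical value $ 2.262 $ is the tabulated $ 0.975 $-quantile of the Student distribution with $ 9 $ degrees of freedom:

```python
import math
import statistics

# Hypothetical data: n = 10 equally accurate measurements of one quantity.
Y = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.0]
n = len(Y)

Ybar = statistics.fmean(Y)   # arithmetic mean, estimator of mu
s = statistics.stdev(Y)      # sample standard deviation (divisor n - 1)

# 0.975-quantile of Student's t with n - 1 = 9 degrees of freedom (tables).
t_crit = 2.262

# 95% confidence interval for mu based on t = (Ybar - mu) sqrt(n) / s.
half_width = t_crit * s / math.sqrt(n)
lo, hi = Ybar - half_width, Ybar + half_width
print(round(lo, 3), round(hi, 3))
```

The interval quantifies the error of the approximation $ \mu \approx \overline{Y}\; $ for this sample.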

The variable $ ( n- 1) s ^ {2} / \sigma ^ {2} $ has under the same assumptions a "chi-squared" distribution with $ n- 1 $ degrees of freedom. This enables one to estimate the error of the approximate equality $ \sigma \approx s $. It can be demonstrated that the relative error will not exceed the number $ q $ with probability

$$ \omega = F( z _ {2} , n- 1) - F( z _ {1} , n- 1), $$

where $ F( z, n- 1) $ is the $ \chi ^ {2} $-distribution function,

$$ z _ {1} = \frac{\sqrt {n- 1} }{1+ q } ,\ \ z _ {2} = \frac{\sqrt {n- 1} }{1- q } . $$
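The probability $ \omega $ can also be checked by direct simulation; the sketch below is an editorial illustration (not from the original article), with arbitrary choices of $ n $, $ \sigma $, $ q $ and trial count. It estimates the probability that the relative error of the approximation $ \sigma \approx s $, measured as $ | \sigma - s | /s $, does not exceed $ q $ for normal errors:

```python
import random
import statistics

random.seed(1)
n, sigma, q, trials = 10, 1.0, 0.5, 20000

hits = 0
for _ in range(trials):
    # n independent normal random errors with standard deviation sigma.
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    s = statistics.stdev(sample)      # estimate of sigma with divisor n - 1
    if abs(sigma - s) / s <= q:       # relative error at most q
        hits += 1

omega_mc = hits / trials
print(round(omega_mc, 3))
```

For $ n = 10 $, $ q = 0.5 $ the event is equivalent to $ ( n- 1) s ^ {2} / \sigma ^ {2} $ falling between $ z _ {1} ^ {2} = 4 $ and $ z _ {2} ^ {2} = 36 $, so the simulated frequency should be close to $ 0.91 $.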

If certain measurements contain gross errors, then the above rules for estimating $ \mu $ and $ \sigma $ will give distorted results. It is therefore very important to be able to differentiate measurements which contain gross errors from those which are subject only to random errors $ \delta _ {i} $. For the case where $ \delta _ {i} $ are independent and have an identical normal distribution, a comprehensive method for identifying measurements which contain gross errors was proposed by N.V. Smirnov [3].
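A crude illustration of the idea, though not Smirnov's actual test (which uses exact critical values for the maximal studentized deviation), is to flag the observation with the largest value of $ | Y _ {i} - \overline{Y}\; | /s $ when it exceeds a chosen threshold. The data and the threshold $ 2.2 $ below are arbitrary editorial choices:

```python
import statistics

# Hypothetical measurements; the value 13.5 plays the role of a gross error.
Y = [10.1, 9.9, 10.0, 10.2, 9.8, 13.5, 10.0, 10.1]

Ybar = statistics.fmean(Y)
s = statistics.stdev(Y)

# Studentized deviations |Y_i - Ybar| / s; flag the largest if it is
# above the (arbitrary) threshold 2.2.
deviations = [abs(y - Ybar) / s for y in Y]
suspect = max(range(len(Y)), key=lambda i: deviations[i])

if deviations[suspect] > 2.2:
    print("possible gross error:", Y[suspect])
```

Note that for small $ n $ the maximal studentized deviation cannot exceed $ ( n- 1)/ \sqrt n $, so thresholds must be adapted to the sample size; Smirnov's criterion does this exactly.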

References

[1] Yu.V. Linnik, "Methode der kleinsten Quadrate in moderner Darstellung" , Deutsch. Verlag Wissenschaft. (1961) (Translated from Russian)
[2] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics" , Libr. math. tables , 46 , Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova)
[3] N.V. Smirnov, "On the estimation of the maximum term in a series of observations" Dokl. Akad. Nauk SSSR , 33 : 5 (1941) pp. 346–349 (In Russian)

Comments

Modern developments in the treatment of errors include robust estimation and outlier detection and treatment.

The intuitive definition of an outlier is: an observation which deviates so much from the other observations as to arouse the suspicion that it was generated by a different mechanism (than the one under observation). This includes errors which can arise, for instance, because data were copied incorrectly. Outliers are dangerous for many statistical procedures. One way to deal with them is to use outlier tests to accept or reject the hypothesis that an observation $ x ^ {*} $ belongs to the sample; the observations for which this hypothesis is rejected are then removed. Other methods for dealing with outliers include censoring, the use of robust methods and Winsorization (which is itself robust); the underlying idea of the latter is to move all excessively outlying observations in some systematic way to a position near the more central observations. One mechanism which may cause (apparent) outliers is data coming from a heavy-tailed distribution. Another is data coming from two distributions: a basic one which yields "good" observations and a second, contaminating distribution. Which treatment of outliers is appropriate depends, of course, heavily on the mechanism generating them. A selection of references on outliers and methods for treating them is [a1]–.
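Symmetric Winsorization can be sketched as follows (an editorial illustration, not from the article; the function name, data and the choice $ k = 1 $ are arbitrary): the $ k $ smallest observations are replaced by the $ ( k+ 1) $-st smallest, and the $ k $ largest by the $ ( k+ 1) $-st largest, pulling outlying values toward the centre:

```python
def winsorize(data, k):
    """Clip the k smallest and k largest values to the nearest
    remaining order statistics, preserving the original order."""
    xs = sorted(data)
    lo, hi = xs[k], xs[-k - 1]
    return [min(max(x, lo), hi) for x in data]

# Hypothetical sample with two outlying values, 35.0 and -2.0.
sample = [9.9, 10.1, 10.0, 35.0, 10.2, 9.8, -2.0, 10.0]
print(winsorize(sample, 1))
```

Unlike outright rejection, Winsorization keeps the sample size fixed, which simplifies the use of standard formulas afterwards.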

Robust statistics in a loose sense tries to deal with the fact that many often-made assumptions such as normality, linearity, independence $ \dots $ are at best approximations to the real situation. Thus, one looks for tests, statistical procedures $ \dots $ which are, for instance, insensitive to the assumption that the underlying distribution is normal. Let the statistical model being used be, e.g., a parametrized family of distributions $ F ( \theta ) $ conceived of as lying in a larger space of distributions $ S $. One main aspect of robust statistics is then the study of the effects of deformations of $ F ( \theta ) $ in $ S $ on the various statistical procedures being used. Similar concerns have motivated the study of deformations in other parts of mathematics, for instance deformations of dynamical systems. More generally, robust statistics is concerned with statistical concepts which describe the behaviour of statistical procedures not only under parametric models but also in the neighbourhoods of such models.
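The contamination mechanism mentioned above can be made concrete with a small editorial example (not from the article; the data are arbitrary): a single observation from a contaminating distribution drags the sample mean far from the centre, while the median barely moves:

```python
import statistics

# "Good" observations from the basic distribution, plus one contaminant.
good = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1]
contaminated = good + [100.0]

# The mean is heavily influenced by the single outlying value;
# the median is essentially unchanged.
print(statistics.mean(good), statistics.median(good))
print(statistics.mean(contaminated), statistics.median(contaminated))
```

This sensitivity of the mean, and the stability of the median, is the kind of behaviour that robustness concepts such as the influence function and the breakdown point quantify.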

Robust statistics is by now a large, active field. A selection of books on the topic is [a5]–[a7].

References

[a1] Th.S. Ferguson, "Rules for rejection of outliers" Rev. Inst. Int. Stat. , 29 (1961) pp. 29–43
[a2] D.M. Hawkins, "Identification of outliers" , Chapman & Hall (1980)
[a3] W.J. Dixon, "Simplified estimation from censored normal samples" Ann. Math. Stat. , 31 (1960) pp. 385–391
[a4a] A.E. Sarhan, B.G. Greenberg, "Estimation of location and scale parameters by order statistics from singly and doubly censored samples I" Ann. Math. Stat. , 27 (1956) pp. 427–451
[a4b] A.E. Sarhan, B.G. Greenberg, "Estimation of location and scale parameters by order statistics from singly and doubly censored samples II" Ann. Math. Stat. , 29 (1958) pp. 79–105
[a5] P.J. Huber, "Robust statistics" , Wiley (1981)
[a6] F.R. Hampel, E.M. Ronchetti, P.J. Rousseeuw, W.A. Stahel, "Robust statistics. The approach based on influence functions" , Wiley (1986)
[a7] W.J.J. Rey, "Introduction to robust and quasi-robust statistical methods" , Springer (1983)
[a8] W.T. Federer, "Statistics and society. Data collection and interpretation" , M. Dekker (1973)
This article was adapted from an original article by L.N. Bol'shev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article