Processing of observations


The application of mathematical methods to results of observations, in order to form conclusions about the true values of unknown quantities. Any observational result arising in some way from measurements involves errors of various origins. Errors are divided into three groups: gross errors, systematic errors and random errors (concerning gross errors see Errors, theory of; in the rest of this article it will be assumed that the observations involve no gross errors). The result $ Y $ of a measurement of some quantity $ \mu $ is usually assumed to be a random variable; the measurement error $ \delta = Y - \mu $ is then also a random variable. Let $ b = {\mathsf E} \delta $ be its mathematical expectation. Then

$$ Y = \mu + b + ( \delta - b). $$

The quantity $ b $ is called the systematic error, and $ \delta - b $ the random error; the expectation of $ \delta - b $ equals zero. The systematic error $ b $ is frequently known in advance and is then easily eliminated. For example, in astronomy, when the angle between the direction of a star and the plane of the horizon is being measured, the systematic error is the sum of two errors: a systematic error introduced by the instrument in reading the given angle (the instrumental error) and a systematic error due to the refraction of light rays in the atmosphere. The instrumental error may be determined by consulting a correction table or graph for the instrument; the error due to refraction (for zenith distances less than $ 80^\circ $) may be calculated theoretically with adequate accuracy.
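The decomposition above can be checked with a short simulation. The sketch below is only an illustration: the values of $ \mu $, $ b $ and $ \sigma $ are hypothetical and NumPy is assumed as the computing tool; it verifies that the total error averages to the systematic error $ b $, while the random error $ \delta - b $ averages to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 10.0     # true value (hypothetical)
b = 0.3       # systematic error (hypothetical)
sigma = 0.5   # standard deviation of the random error (hypothetical)

# Each result: Y = mu + b + (delta - b), where delta = Y - mu is the total error.
Y = mu + b + rng.normal(0.0, sigma, size=100_000)
delta = Y - mu

print(delta.mean())        # close to b: the systematic part of the error
print((delta - b).mean())  # close to 0: the random part has zero expectation
```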

The effect of random errors is estimated by methods of the theory of errors. If $ Y_1, \dots, Y_n $ are the results of $ n $ independent measurements of $ \mu $, carried out under identical conditions and by identical means, one usually puts

$$ \tag{1} \mu \approx \overline{Y} - b = \frac{Y_1 + \dots + Y_n}{n} - b, $$

where $ b $ is the systematic error.
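Formula (1) amounts to a bias-corrected sample mean. A minimal sketch in code (the function name and the sample readings are illustrative, not from the article):

```python
import numpy as np

def estimate_mu(Y, b=0.0):
    """Bias-corrected estimate of mu as in formula (1): mean(Y) - b."""
    return np.asarray(Y, dtype=float).mean() - b

# Hypothetical readings with a known systematic error b = 0.3
print(estimate_mu([10.7, 10.2, 10.5, 10.4], b=0.3))  # about 10.15
```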

If it is required to compute the value of a function $ f(y) $ at a point $ y = \mu $, where $ \mu $ is estimated on the basis of $ n $ independent observations $ Y_1, \dots, Y_n $, one approximates the required value by

$$ \tag{2} f( \mu ) \approx f( \overline{Y} - b). $$

Let $ B $ be the expectation of

$$ \Delta = f( \overline{Y} - b) - f( \mu ); $$

then

$$ f( \overline{Y} - b) = f( \mu ) + B + ( \Delta - B). $$

Therefore $ B $ is the systematic error and $ \Delta - B $ the random error of the approximation (2). If the random errors in the independent observations $ Y_1, \dots, Y_n $ obey the same distribution and if the function $ f(y) $ is "nearly" linear in a neighbourhood of $ y = \mu $, then a first-order Taylor expansion of $ f $ about $ \mu $ gives $ B \approx 0 $ and

$$ \Delta \approx f^{\prime}( \mu ) \, \overline{( \delta - b)} , $$

where $ \overline{( \delta - b)} $ is the arithmetic mean of the random errors of the initial observations. This means that if

$$ {\mathsf E} ( \delta_i - b)^2 = \sigma^2 , \quad i = 1, \dots, n, $$

then

$$ {\mathsf E} ( \Delta - B)^2 \approx {\mathsf E} \Delta^2 \approx \frac{[ f^{\prime}( \mu )]^2 \sigma^2 }{n} \rightarrow 0 $$

as $ n \rightarrow \infty $.
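This variance bound can be illustrated by simulation. In the sketch below the choice $ f = \exp $ and all numerical values are assumptions made purely for the illustration (NumPy assumed); the empirical mean square of $ \Delta $ is compared with $ [f^{\prime}(\mu)]^2 \sigma^2 / n $.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, b, sigma, n = 2.0, 0.1, 0.05, 1000   # hypothetical values
f, f_prime = np.exp, np.exp              # a smooth f and its derivative

# Repeat the whole experiment many times to estimate E(Delta^2) empirically,
# where Delta = f(Ybar - b) - f(mu) is the error of the approximation (2).
deltas = np.array([
    f((mu + b + rng.normal(0.0, sigma, size=n)).mean() - b) - f(mu)
    for _ in range(5000)
])

print(np.mean(deltas ** 2))               # empirical E(Delta^2)
print(f_prime(mu) ** 2 * sigma ** 2 / n)  # theoretical [f'(mu)]^2 * sigma^2 / n
```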

In the case of several unknown parameters, the observations are often processed using the method of least squares (cf. Least squares, method of).
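As a hedged illustration of the least-squares idea (the two-parameter straight line and the data values below are hypothetical, not from the article), fitting an unknown intercept and slope might look as follows:

```python
import numpy as np

# Fit y = a + c*x to noisy observations with numpy's least-squares solver.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])         # hypothetical observations

A = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
(a, c), *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes ||A @ [a, c] - y||^2
print(a, c)                                      # estimated intercept and slope
```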

If one is studying the dependence between two random variables $ X $ and $ Y $ on the basis of a sequence of $ n $ independent observations, each of which is a vector $ ( X_i , Y_i ) $, $ i = 1, \dots, n $, subject to the (unknown) joint distribution of $ X $ and $ Y $, one uses the theory of correlation to process the observations.
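A basic quantity computed in such a correlation analysis is the sample correlation coefficient of the paired observations; a minimal sketch (hypothetical data, NumPy assumed):

```python
import numpy as np

def sample_correlation(X, Y):
    """Sample correlation coefficient of the paired observations (X_i, Y_i)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    cx, cy = X - X.mean(), Y - Y.mean()
    return (cx * cy).sum() / np.sqrt((cx ** 2).sum() * (cy ** 2).sum())

# Hypothetical paired observations (X_i, Y_i)
print(sample_correlation([1, 2, 3, 4, 5], [2.0, 2.9, 4.2, 4.8, 6.1]))
```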

Whenever processing observations, one must make certain assumptions about the nature of the functional dependence, the distribution of the random errors, etc. It is therefore necessary to check the agreement between such assumptions and the results of the observations (both those actually used and others). See Statistical hypotheses, verification of.
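For instance, one common check of the assumption that the random errors are normally distributed is a goodness-of-fit test applied to the residuals. The sketch below uses SciPy's Shapiro–Wilk test as one possible choice (the test and the simulated residuals are assumptions made for illustration, not prescribed by the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical residuals (observations minus fitted values); in practice these
# would come from the actual processing step being checked.
residuals = rng.normal(0.0, 0.5, size=200)

# Shapiro-Wilk test of the normality assumption on the random errors.
statistic, p_value = stats.shapiro(residuals)
print(statistic, p_value)   # a small p-value would cast doubt on the assumption
```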

References

[1] E. Whittaker, G. Robinson, "The calculus of observations" , Blackie (1944)
[2] Yu.V. Linnik, "Methode der kleinsten Quadrate in moderner Darstellung" , Deutsch. Verlag Wissenschaft. (1961) (Translated from Russian)
This article was adapted from an original article by L.N. Bol'shev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.