Covariance matrix

The matrix formed from the pairwise covariances of several random variables; more precisely, for the $k$-dimensional vector $X = (X_1, \dots, X_k)$ the covariance matrix is the square matrix $\Sigma = {\mathsf E}[(X - {\mathsf E}X)(X - {\mathsf E}X)^T]$, where ${\mathsf E}X = ({\mathsf E}X_1, \dots, {\mathsf E}X_k)$ is the vector of mean values. The components of the covariance matrix are:

$$ \sigma_{ij} = {\mathsf E}[(X_i - {\mathsf E}X_i)(X_j - {\mathsf E}X_j)] = \operatorname{cov}(X_i, X_j), $$

$$ i, j = 1, \dots, k, $$

and for $i = j$ they are the same as ${\mathsf D}X_i$ ($= \operatorname{var}(X_i)$), that is, the variances of the $X_i$ lie on the principal diagonal. The covariance matrix is a symmetric positive semi-definite matrix. If the covariance matrix is positive definite, then the distribution of $X$ is non-degenerate; otherwise it is degenerate. For the random vector $X$ the covariance matrix plays the same role as the variance of a random variable. If the variances of the random variables $X_1, \dots, X_k$ are all equal to 1, then the covariance matrix of $X = (X_1, \dots, X_k)$ is the same as the correlation matrix.
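
The stated properties can be checked numerically. The following is a minimal sketch using NumPy; the dimension, the random seed, and the construction of a positive-definite $\Sigma$ via $AA^T$ are illustrative assumptions, not part of the article:

```python
import numpy as np

# Assumed setup: an arbitrary 3x3 positive-definite covariance matrix,
# built as A A^T plus a small multiple of the identity.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 0.1 * np.eye(3)

# Symmetry: Sigma equals its transpose.
assert np.allclose(Sigma, Sigma.T)

# Positive semi-definiteness: all eigenvalues are non-negative.
assert np.all(np.linalg.eigvalsh(Sigma) >= 0)

# With unit variances the covariance matrix coincides with the
# correlation matrix: rescaling each X_i to variance 1 amounts to
# D^{-1/2} Sigma D^{-1/2}, whose diagonal is all ones.
d = np.sqrt(np.diag(Sigma))          # standard deviations of the X_i
corr = Sigma / np.outer(d, d)
assert np.allclose(np.diag(corr), 1.0)
```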

The sample covariance matrix for the sample $X^{(1)}, \dots, X^{(n)}$, where the $X^{(m)}$, $m = 1, \dots, n$, are independent and identically-distributed random $k$-dimensional vectors, consists of the variance and covariance estimators:

$$ S = \frac{1}{n-1} \sum_{m=1}^{n} (X^{(m)} - \overline{X})(X^{(m)} - \overline{X})^T, $$

where the vector $\overline{X}$ is the arithmetic mean of the $X^{(1)}, \dots, X^{(n)}$. If the $X^{(1)}, \dots, X^{(n)}$ are multivariate normally distributed with covariance matrix $\Sigma$, then $S(n-1)/n$ is the maximum-likelihood estimator of $\Sigma$; in this case the joint distribution of the elements of the matrix $(n-1)S$ is called the Wishart distribution; it is one of the fundamental distributions in multivariate statistical analysis, by means of which hypotheses concerning the covariance matrix $\Sigma$ can be tested.
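
A short computational sketch of the estimators above, assuming NumPy and simulated normal data (the sample size, dimension, and true covariance are arbitrary choices for illustration). It computes the unbiased estimator $S$ directly from the definition, the maximum-likelihood rescaling $S(n-1)/n$, and cross-checks $S$ against NumPy's built-in estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 3                          # sample size and dimension (arbitrary)
# Rows are the sample vectors X^(1), ..., X^(n).
X = rng.multivariate_normal(mean=np.zeros(k), cov=np.eye(k), size=n)

Xbar = X.mean(axis=0)                  # arithmetic mean of the sample vectors
centered = X - Xbar

# Unbiased sample covariance matrix: S = (1/(n-1)) * sum of outer products
# (X^(m) - Xbar)(X^(m) - Xbar)^T, written as a single matrix product.
S = centered.T @ centered / (n - 1)

# Maximum-likelihood estimator under multivariate normality: S*(n-1)/n,
# i.e. the same sum divided by n instead of n-1.
S_mle = S * (n - 1) / n

# Cross-check against NumPy (rowvar=False: columns are the variables;
# np.cov also normalizes by n-1 by default).
assert np.allclose(S, np.cov(X, rowvar=False))
```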

How to Cite This Entry:
Covariance matrix. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Covariance_matrix&oldid=13365
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.