Covariance matrix
The matrix formed from the pairwise covariances of several random variables; more precisely, for the $k$-dimensional vector $X = (X_1, \dots, X_k)$ the covariance matrix is the square matrix $\Sigma = {\mathsf E} [ (X - {\mathsf E} X)(X - {\mathsf E} X)^{T} ]$, where ${\mathsf E} X = ({\mathsf E} X_1, \dots, {\mathsf E} X_k)$ is the vector of mean values. The components of the covariance matrix are:
$$ \sigma_{ij} = {\mathsf E} [ (X_i - {\mathsf E} X_i)(X_j - {\mathsf E} X_j) ] = \operatorname{cov}(X_i, X_j), \qquad i, j = 1, \dots, k, $$
and for $i = j$ they coincide with ${\mathsf D} X_i$ ($= \operatorname{var}(X_i)$); that is, the variances of the $X_i$ lie on the principal diagonal. The covariance matrix is a symmetric positive semi-definite matrix. If the covariance matrix is positive definite, then the distribution of $X$ is non-degenerate; otherwise it is degenerate. For the random vector $X$ the covariance matrix plays the same role as the variance of a random variable. If the variances of the random variables $X_1, \dots, X_k$ are all equal to 1, then the covariance matrix of $X = (X_1, \dots, X_k)$ coincides with the correlation matrix.
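As an illustration of these definitions, the following sketch (an assumption of this edit, not part of the original article; it uses numpy and hypothetical variable names) estimates a covariance matrix from simulated data via the component formula and checks the stated properties: symmetry, positive semi-definiteness, and agreement with the correlation matrix once the variances are scaled to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n draws of a 3-dimensional vector X = A Z with E X = 0,
# so the true covariance matrix is Sigma = A A^T.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.5, 1.0]])
Z = rng.standard_normal((100_000, 3))
X = Z @ A.T

# Component formula: sigma_ij = E[(X_i - E X_i)(X_j - E X_j)],
# with the expectation approximated by an average over the draws.
mean = X.mean(axis=0)
centered = X - mean
Sigma_hat = centered.T @ centered / len(X)

# Symmetric and positive semi-definite (all eigenvalues >= 0, up to rounding).
assert np.allclose(Sigma_hat, Sigma_hat.T)
assert np.linalg.eigvalsh(Sigma_hat).min() >= -1e-10

# Rescaling each X_i to unit variance turns Sigma into the correlation matrix.
d = np.sqrt(np.diag(Sigma_hat))
R = Sigma_hat / np.outer(d, d)
assert np.allclose(np.diag(R), 1.0)

print(np.round(Sigma_hat, 2))   # close to A @ A.T
print(np.round(R, 2))           # unit diagonal
```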
The sample covariance matrix for the sample $X^{(1)}, \dots, X^{(n)}$, where the $X^{(m)}$, $m = 1, \dots, n$, are independent and identically distributed random $k$-dimensional vectors, consists of the variance and covariance estimators:
$$ S = \frac{1}{n-1} \sum_{m=1}^{n} ( X^{(m)} - \overline{X} ) ( X^{(m)} - \overline{X} )^{T}, $$
where the vector $\overline{X}$ is the arithmetic mean of the $X^{(1)}, \dots, X^{(n)}$. If the $X^{(1)}, \dots, X^{(n)}$ are multivariate normally distributed with covariance matrix $\Sigma$, then $S(n-1)/n$ is the maximum-likelihood estimator of $\Sigma$; in this case the joint distribution of the elements of the matrix $(n-1)S$ follows the Wishart distribution, one of the fundamental distributions in multivariate statistical analysis, by means of which hypotheses concerning the covariance matrix $\Sigma$ can be tested.
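A minimal sketch of the sample covariance matrix $S$ and of the maximum-likelihood variant $S(n-1)/n$ under multivariate normality; the data and the parameter values are illustrative assumptions, not from the article, and numpy's `np.cov` (whose `ddof` argument selects the divisor) serves only as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 3
X = rng.multivariate_normal(mean=np.zeros(k),
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=n)           # rows are X^(1), ..., X^(n)

X_bar = X.mean(axis=0)                        # arithmetic mean vector
D = X - X_bar
S = D.T @ D / (n - 1)                         # estimator S with divisor n-1
S_mle = S * (n - 1) / n                       # maximum-likelihood estimator

# np.cov expects variables in rows; ddof=1 matches the 1/(n-1) divisor,
# ddof=0 the 1/n divisor of the maximum-likelihood estimator.
assert np.allclose(S, np.cov(X.T, ddof=1))
assert np.allclose(S_mle, np.cov(X.T, ddof=0))

# (n-1) S is the sum of outer products whose elements, under multivariate
# normality, follow the Wishart distribution.
W = (n - 1) * S
print(np.round(S, 3))
```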
Covariance matrix. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Covariance_matrix&oldid=13365