Factor analysis

A branch of multi-dimensional statistical analysis that brings together mathematical and statistical methods for reducing the dimension of a multi-dimensional indicator $ \mathbf x = ( x _ {1} \dots x _ {p} ) ^ \prime $ under investigation. That is, for constructing, by investigating the structure of the correlations between the components $ x _ {i} , x _ {j} $, $ i , j = 1 \dots p $, models that enable one to establish (within some random error of prognosis $ \epsilon $) the values of the $ p $ analyzable components of $ \mathbf x $ from a substantially smaller number $ m $, $ m \ll p $, of so-called general (not immediately observable) factors $ \mathbf f = ( f _ {1} \dots f _ {m} ) ^ \prime $.

The simplest version of the formalization of such a problem is provided by the linear normal model of factor analysis with orthogonal general factors and uncorrelated residuals:

$$ \tag{1} x _ {k} = \sum _ {j = 1 } ^ { m } q _ {kj} f _ {j} + \epsilon _ {k} , \quad k = 1 \dots p , $$

or, in matrix notation,

$$ \tag{1'} \mathbf x = \mathbf q \mathbf f + \pmb\epsilon , $$

where the $ ( p \times m) $-matrix $ \mathbf q $ of coefficients of the linear transformation is called the loading matrix of the general factors for the variables in question.

Assume that the vector of specific residuals (errors of prognosis) $ \pmb\epsilon = ( \epsilon _ {1} \dots \epsilon _ {p} ) $ is subject to a $ p $-dimensional normal distribution with zero vector of means and an unknown diagonal covariance matrix $ V _ {\pmb\epsilon } $. The general factor vector $ \mathbf f $, depending on the specific nature of the problem to be solved, can be interpreted either as an $ m $-dimensional random variable with a covariance matrix $ V _ {\mathbf f } $ of special form, namely the unit matrix (that is, $ V _ {\mathbf f } = I _ {m} $), or as a vector of unknown non-random parameters (mutually orthogonal and normalized), the values of which change from one observation to another.

If it is assumed that the variables have been centred beforehand (that is, $ {\mathsf E} \mathbf x = 0 $), then from (1'), in view of the assumptions made, one immediately obtains the following relation connecting the covariance matrices of the vectors $ \mathbf x $ and $ \pmb\epsilon $ and the loading matrix:

$$ \tag{2} V _ {\mathbf x } = \mathbf q \mathbf q ^ \prime + V _ {\pmb\epsilon } . $$
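
Relation (2) is easy to check numerically. The following sketch (Python with NumPy; the particular dimensions and loading values are illustrative assumptions, not part of the article) simulates centred observations from model (1') with $ V _ {\mathbf f } = I _ {m} $ and a diagonal $ V _ {\pmb\epsilon } $, and confirms that the sample covariance of $ \mathbf x $ approaches $ \mathbf q \mathbf q ^ \prime + V _ {\pmb\epsilon } $ as $ n $ grows.

    import numpy as np

    rng = np.random.default_rng(0)
    p, m, n = 5, 2, 100000    # observed dimension, number of general factors, sample size

    q = rng.normal(size=(p, m))                       # illustrative (p x m) loading matrix
    v_eps = np.diag(rng.uniform(0.5, 1.5, size=p))    # diagonal covariance of the residuals

    f = rng.normal(size=(n, m))                                 # general factors, V_f = I_m
    eps = rng.multivariate_normal(np.zeros(p), v_eps, size=n)   # specific residuals
    x = f @ q.T + eps                                           # model (1'): x = q f + eps

    v_x_model = q @ q.T + v_eps                      # relation (2)
    v_x_sample = np.cov(x, rowvar=False)
    print(np.max(np.abs(v_x_sample - v_x_model)))    # small for large n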

In carrying out an actual statistical analysis the researcher has available only estimates of the elements of the covariance matrix $ V _ {\mathbf x } $ (obtained from the observations $ \mathbf x _ {1} \dots \mathbf x _ {n} $); the elements $ q _ {kj} $ of the loading matrix $ \mathbf q $ and the variances $ v _ {kk} = {\mathsf D} \epsilon _ {k} $ of the specific residuals $ \epsilon _ {k} $ are unknown and remain to be determined.

Thus, in carrying out factor analysis, the researcher has to solve the following main problems.

a) Whether a model of type (1) exists and whether it is legitimate to use it. Not every covariance matrix $ V _ {\mathbf x } $ can be represented in the form (2). The problem reduces to testing the hypothesis that there is a special structure of correlation between the components of the vector $ \mathbf x $ in question.

b) Whether a model of type (1) is unique (identifying it). The principal difficulty in computing and interpreting a model consists in the fact that for $ m > 1 $ neither the structural parameters nor the factors themselves are uniquely determined. If the pair $ ( \mathbf q , V _ {\pmb\epsilon } ) $ satisfies (2), then the pair $ ( \mathbf q \mathbf c , V _ {\pmb\epsilon } ) $, where $ \mathbf c $ is an orthogonal $ ( m \times m) $-matrix, will also satisfy (2). One usually ascertains under what additional a priori restrictions on $ \mathbf q $, $ V _ {\pmb\epsilon } $ the parameters $ \mathbf q $, $ \mathbf f $ and $ V _ {\pmb\epsilon } $ of the model to be analyzed are unique. The possibility of orthogonally transforming the solution of the factor model also enables one to obtain the solution with the most natural interpretation.
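
This indeterminacy can be exhibited directly: multiplying $ \mathbf q $ on the right by any orthogonal matrix $ \mathbf c $ leaves $ \mathbf q \mathbf q ^ \prime $, and hence the structure (2), unchanged. A minimal sketch (the rotation angle is arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    q = rng.normal(size=(5, 2))      # any (p x m) loading matrix, here p = 5, m = 2

    theta = 0.7                      # any angle yields an orthogonal (2 x 2) matrix c
    c = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # (q c)(q c)' = q (c c') q' = q q', so both pairs reproduce the same V_x in (2)
    print(np.allclose((q @ c) @ (q @ c).T, q @ q.T))   # True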

c) The statistical estimation (from the observations $ \mathbf x _ {1} \dots \mathbf x _ {n} $) of the unknown structural parameters $ \mathbf q $ and $ V _ {\pmb\epsilon } $.
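
Problem c) is usually attacked by maximum likelihood. The article does not fix a particular algorithm; as one possible illustration, the sketch below fits the model with scikit-learn's FactorAnalysis routine (an EM-based maximum-likelihood implementation; its use here is an assumption of this sketch, not the article's method) and compares the rotation-invariant part of the fit with the truth.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    p, m, n = 6, 2, 5000
    q = rng.normal(size=(p, m))
    x = rng.normal(size=(n, m)) @ q.T + rng.normal(size=(n, p))   # unit residual variances

    fa = FactorAnalysis(n_components=m).fit(x)
    q_hat = fa.components_.T             # estimated (p x m) loading matrix, up to rotation
    v_hat = np.diag(fa.noise_variance_)  # estimated diagonal V_eps

    # Because of the rotation indeterminacy of b), compare q q' + V_eps rather than q itself.
    print(np.max(np.abs((q_hat @ q_hat.T + v_hat) - (q @ q.T + np.eye(p)))))   # small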

d) The statistical testing of a series of hypotheses concerning the nature of the model (linearity, non-linearity, etc.) and the values of its structural parameters, such as a hypothesis on the true number of general factors, a hypothesis on the adequacy of the chosen model in relation to the available observed results, a hypothesis on the statistical significance of the difference from zero of the coefficients $ q _ {kj} $, etc.
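
For the hypothesis on the true number of general factors, a classical large-sample likelihood-ratio test compares the fitted structure (2) with the unrestricted sample covariance $ S $; under the null hypothesis the statistic is approximately chi-squared with $ \frac{1}{2} [ ( p - m ) ^ {2} - ( p + m ) ] $ degrees of freedom (see, e.g., [4], [5]). A sketch under these assumptions (the usual Bartlett correction to the sample size is omitted, and the trace term is dropped because it reduces to $ p $ at the maximum-likelihood solution, a standard property assumed here):

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(3)
    p, m, n = 6, 2, 5000
    q = rng.normal(size=(p, m))
    x = rng.normal(size=(n, m)) @ q.T + rng.normal(size=(n, p))

    fa = FactorAnalysis(n_components=m).fit(x)
    sigma_hat = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
    s = np.cov(x, rowvar=False)

    lr = n * (np.log(np.linalg.det(sigma_hat)) - np.log(np.linalg.det(s)))   # LR statistic
    df = ((p - m) ** 2 - (p + m)) // 2                                       # here df = 4
    print(df, stats.chi2.sf(lr, df))   # large p-values are consistent with m general factors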

e) The construction of statistical estimators for the unobservable values of the general factors $ \mathbf f $.
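
One standard choice for e) (the article does not single out an estimator) is the regression, or Thomson, estimator $ \widehat{\mathbf f} = \mathbf q ^ \prime V _ {\mathbf x } ^ {-1} \mathbf x $, the conditional mean of $ \mathbf f $ given $ \mathbf x $ under the normal model with $ V _ {\mathbf f } = I _ {m} $ and centred $ \mathbf x $. A sketch with known parameters:

    import numpy as np

    rng = np.random.default_rng(4)
    p, m, n = 5, 2, 10
    q = rng.normal(size=(p, m))
    v_eps = 0.8 * np.eye(p)              # diagonal residual covariance, values illustrative

    f = rng.normal(size=(n, m))
    x = f @ q.T + rng.multivariate_normal(np.zeros(p), v_eps, size=n)

    # Regression (Thomson) scores: E[f | x] = q' (q q' + V_eps)^{-1} x for centred x
    v_x = q @ q.T + v_eps
    f_hat = x @ np.linalg.solve(v_x, q)  # row i holds the estimated factor values for x_i
    print(np.round(np.c_[f[:3], f_hat[:3]], 2))   # first rows: true factors vs. estimates

The Bartlett estimator, which weights by $ V _ {\pmb\epsilon } ^ {-1} $ instead, is another common choice.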

f) An algorithmic-computational realization of the statistical-estimation and hypothesis-testing procedures.

Most work concerning theoretically based solutions of this list of problems has been carried out within the bounds of the linear normal model of factor analysis described above.

However, in practical applications one makes wide use of more general versions of models of factor analysis: non-linear models, models constructed from non-quantitative variables, and models operating with three-dimensional matrices of initial data (to the two traditional dimensions of the original data, the dimension $ p $ and the number of observations $ n $, one more space or time coordinate is added). Such models are not, as a rule, accompanied by any sort of convincing mathematical-statistical analysis of their properties, but are based on computational procedures of a heuristic or semi-heuristic character.

References

[1] H.H. Harman, "Modern factor analysis", Univ. Chicago Press (1976)
[2] S.A. Aivazyan, Z.I. Bezhaeva, O.V. Staroverov, "Classifying multivariate observations", Moscow (1974) (In Russian)
[3] C. Spearman, "General intelligence, objectively determined and measured", Amer. J. Psychology, 15 (1904) pp. 201–293
[4] T.W. Anderson, H. Rubin, "Statistical inference in factor analysis", Proc. 3-rd Berkeley Symp. Math. Statist., 5, Univ. California Press (1956) pp. 111–150
[5] C.R. Rao, "Estimation and tests of significance in factor analysis", Psychometrika, 20 (1955) pp. 93–111

Comments

There is a tremendous amount of literature on factor analysis nowadays. See, e.g., the journal Psychometrika and [a1]. The classical factor analysis model described in the main article above is nowadays considered as a special member of the class of linear structured models, cf. [a2], [a3].

References

[a1] D.N. Lawley, A.E. Maxwell, "Factor analysis as a statistical method", Butterworths (1971)
[a2] K.G. Jöreskog, D. Sörbom, "LISREL IV. Analysis of linear structural relationships by maximum likelihood, instrumental variables, and least squares methods", Sci. Software (1984)
[a3] B.S. Everitt, "An introduction to latent variable methods", Chapman & Hall (1984)
This article was adapted from an original article by S.A. Aivazyan (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.