Factor analysis



A branch of multi-dimensional statistical analysis that brings together mathematical and statistical methods for reducing the dimension of a multi-dimensional indicator $ \mathbf x = ( x _ {1} \dots x _ {p} ) ^ \prime $ under investigation. That is, by investigating the structure of the correlations between the components $ x _ {i} , x _ {j} $, $ i , j = 1 \dots p $, one constructs models that enable one to recover (to within some random error of prognosis $ \epsilon $) the values of the $ p $ analyzable components of $ \mathbf x $ from a substantially smaller number $ m $, $ m \ll p $, of so-called general (not immediately observable) factors $ \mathbf f = ( f _ {1} \dots f _ {m} ) ^ \prime $.

The simplest version of the formalization of a problem posed like this is provided by the linear normal model of factor analysis with orthogonal general factors and uncorrelated residuals:

$$ \tag{1 } x _ {k} = \ \sum _ {j = 1 } ^ { m } q _ {kj} f _ {j} + \epsilon _ {k} ,\ \ k = 1 \dots p, $$

or, in matrix notation,

$$ \tag{1'} \mathbf x = \ \mathbf q \mathbf f + \pmb\epsilon , $$

where the $ ( p \times m ) $-matrix $ \mathbf q $ of coefficients of the linear transformation is called the loading matrix of the general factors for the variables in question.

Assume that the vector of specific residuals (errors of prognosis) $ \pmb\epsilon = ( \epsilon _ {1} \dots \epsilon _ {p} ) ^ \prime $ is subject to a $ p $-dimensional normal distribution with zero vector of means and an unknown diagonal covariance matrix $ V _ {\pmb\epsilon } $. The general factor vector $ \mathbf f $, depending on the specific nature of the problem to be solved, can be interpreted either as an $ m $-dimensional random variable with a covariance matrix $ V _ {\mathbf f } $ of special form, namely the unit matrix (that is, $ V _ {\mathbf f } = I _ {m} $), or as a vector of unknown non-random parameters (mutually orthogonal and normalized) whose values change from one observation to another.

If it is assumed that the variables have been centred beforehand (that is, $ {\mathsf E} \mathbf x = 0 $), then from (1'), in view of the assumptions made (in particular, that $ \mathbf f $ and $ \pmb\epsilon $ are uncorrelated), one immediately obtains the following relation connecting the covariance matrices of the vectors $ \mathbf x $ and $ \pmb\epsilon $ and the loading matrix:

$$ \tag{2 } V _ {\mathbf x } = \ \mathbf q \mathbf q ^ \prime + V _ {\pmb\epsilon } . $$
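
For illustration, here is a minimal numerical sketch of the model (1') and of relation (2), written in Python with NumPy; the dimensions, the loading matrix and the residual variances below are arbitrary hypothetical values, not prescribed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 6, 2, 100_000                         # illustrative dimensions and sample size

q = rng.normal(size=(p, m))                     # hypothetical loading matrix q
v_eps = np.diag(rng.uniform(0.5, 1.5, size=p))  # hypothetical diagonal V_eps

# Model (1'): x = q f + eps, with V_f = I_m and normal specific residuals.
f = rng.normal(size=(n, m))                     # general factors, V_f = I_m
eps = rng.multivariate_normal(np.zeros(p), v_eps, size=n)
x = f @ q.T + eps                               # each row is one observation x'

# Relation (2): V_x = q q' + V_eps, reproduced here up to sampling error.
v_x_model = q @ q.T + v_eps
v_x_sample = np.cov(x, rowvar=False)
print(np.max(np.abs(v_x_sample - v_x_model)))   # small for large n
```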

In carrying out an actual statistical analysis the researcher has available only estimates of the elements of the covariance matrix $ V _ {\mathbf x } $ (obtained from the observations $ \mathbf x _ {1} \dots \mathbf x _ {n} $); the elements $ q _ {kj} $ of the loading matrix $ \mathbf q $ and the variances $ v _ {kk} = {\mathsf D} \epsilon _ {k} $ of the specific residuals $ \epsilon _ {k} $ are unknown and remain to be determined.

Thus, in carrying out factor analysis, the researcher has to solve the following main problems.

a) Whether a model of type (1) exists, that is, whether it is legitimate to use it. Not every covariance matrix $ V _ {\mathbf x } $ can be represented in the form (2). The problem reduces to testing the hypothesis that the correlations between the components of the vector $ \mathbf x $ in question have this special structure.
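
A simple necessary condition can be checked by parameter counting: relation (2) equates the $ p ( p + 1 ) / 2 $ distinct elements of $ V _ {\mathbf x } $ to $ pm + p $ parameters in $ ( \mathbf q , V _ {\pmb\epsilon } ) $, of which $ m ( m - 1 ) / 2 $ are absorbed by the rotational freedom discussed in b) below, leaving $ \frac{1}{2} [ ( p - m ) ^ {2} - p - m ] $ degrees of freedom, which must be non-negative. A small sketch of this standard count (the concrete numbers are only examples):

```python
def fa_degrees_of_freedom(p: int, m: int) -> int:
    """Distinct elements of V_x minus free parameters of (q, V_eps),
    after removing the m(m-1)/2-dimensional rotational freedom."""
    return ((p - m) ** 2 - p - m) // 2

# For p = 6 observed variables, only m <= 3 general factors are admissible
# in this counting sense (the count is negative from m = 4 onwards):
for m in range(1, 5):
    print(m, fa_degrees_of_freedom(6, m))
```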

b) Whether a model of type (1) is unique (its identifiability). The principal difficulty in computing and interpreting a model is that for $ m > 1 $ neither the structural parameters nor the factors themselves are uniquely determined: if the pair $ ( \mathbf q , V _ {\pmb\epsilon } ) $ satisfies (2), then so does the pair $ ( \mathbf q \mathbf c , V _ {\pmb\epsilon } ) $ for any orthogonal $ ( m \times m ) $-matrix $ \mathbf c $. One usually ascertains under what additional a priori restrictions on $ \mathbf q $, $ V _ {\pmb\epsilon } $ the parameters $ \mathbf q $, $ \mathbf f $ and $ V _ {\pmb\epsilon } $ of the model to be analyzed are unique. The possibility of orthogonally transforming a solution of the factor model also enables one to obtain the solution with the most natural interpretation.
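
The rotational indeterminacy is easy to verify numerically: for any orthogonal $ \mathbf c $ one has $ ( \mathbf q \mathbf c ) ( \mathbf q \mathbf c ) ^ \prime = \mathbf q \mathbf q ^ \prime $, so relation (2) cannot distinguish the two loading matrices. A minimal sketch with arbitrary illustrative loadings:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 6, 2
q = rng.normal(size=(p, m))          # hypothetical loading matrix

theta = 0.7                          # any rotation angle gives an orthogonal c
c = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# (q c)(q c)' = q (c c') q' = q q': the structure (2) is unchanged.
print(np.allclose((q @ c) @ (q @ c).T, q @ q.T))   # True
```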

c) The statistical estimation (from the observations $ \mathbf x _ {1} \dots \mathbf x _ {n} $) of the unknown structural parameters $ \mathbf q $ and $ V _ {\pmb\epsilon } $.
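
A hedged sketch of problem c) using scikit-learn's `FactorAnalysis`, which fits the same linear normal model by maximum likelihood; the simulated data and dimensions are illustrative assumptions, and since the loadings are determined only up to rotation (problem b)), the comparison is made on $ \mathbf q \mathbf q ^ \prime + V _ {\pmb\epsilon } $ rather than on $ \mathbf q $ itself:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
p, m, n = 6, 2, 10_000
q_true = rng.normal(size=(p, m))
psi_true = rng.uniform(0.5, 1.5, size=p)              # diagonal of V_eps
x = (rng.normal(size=(n, m)) @ q_true.T
     + rng.normal(size=(n, p)) * np.sqrt(psi_true))   # simulate model (1')

fa = FactorAnalysis(n_components=m).fit(x)
q_hat = fa.components_.T                 # estimated loading matrix (p x m)
v_eps_hat = np.diag(fa.noise_variance_)  # estimated V_eps

# Compare the rotation-invariant structure (2) of truth and estimate:
err = np.abs((q_hat @ q_hat.T + v_eps_hat)
             - (q_true @ q_true.T + np.diag(psi_true)))
print(np.max(err))                       # small for large n
```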

d) The statistical testing of a series of hypotheses concerning the nature of the model (linearity, non-linearity, etc.) and the values of its structural parameters: for example, a hypothesis on the true number of general factors, a hypothesis that the chosen model adequately fits the available observations, or a hypothesis that individual coefficients $ q _ {kj} $ differ significantly from zero.
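
For instance, the hypothesis that $ m $ general factors suffice is classically tested by a likelihood-ratio statistic comparing the fitted structure $ \widehat {\mathbf q } \widehat {\mathbf q } {} ^ \prime + \widehat V _ {\pmb\epsilon } $ with the sample covariance matrix $ S $; it is asymptotically $ \chi ^ {2} $-distributed with $ \frac{1}{2} [ ( p - m ) ^ {2} - p - m ] $ degrees of freedom (cf. [4], [5]). A sketch under these assumptions, with Bartlett's correction factor and illustrative simulated data; the maximum-likelihood fit is again delegated to scikit-learn:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
p, m_true, n = 8, 2, 5_000
q_true = rng.normal(size=(p, m_true))
x = (rng.normal(size=(n, m_true)) @ q_true.T
     + rng.normal(size=(n, p)))                  # model (1') with V_eps = I_p

s = np.cov(x, rowvar=False)                      # sample covariance matrix

for m in (1, 2, 3):
    fa = FactorAnalysis(n_components=m).fit(x)
    sigma = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
    # Likelihood-ratio statistic with Bartlett's correction factor:
    n_corr = n - 1 - (2 * p + 5) / 6 - 2 * m / 3
    stat = n_corr * (np.log(np.linalg.det(sigma)) - np.log(np.linalg.det(s))
                     + np.trace(s @ np.linalg.inv(sigma)) - p)
    df = ((p - m) ** 2 - p - m) // 2
    print(m, round(stat, 1), chi2.sf(stat, df))  # small p-value rejects m factors
```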

e) The construction of statistical estimators for the unobservable values of the general factors $ \mathbf f $.
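
Under the normal model with known (or estimated) $ \mathbf q $ and $ V _ {\pmb\epsilon } $, a common such estimator is the regression (Thomson) estimator $ \widehat {\mathbf f } = \mathbf q ^ \prime V _ {\mathbf x } ^ {-1} \mathbf x $, the posterior mean of $ \mathbf f $ given $ \mathbf x $. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
p, m, n = 6, 2, 5
q = rng.normal(size=(p, m))                     # hypothetical loadings
v_eps = np.diag(rng.uniform(0.5, 1.5, size=p))  # hypothetical V_eps
v_x = q @ q.T + v_eps                           # relation (2)

f = rng.normal(size=(n, m))                     # true (unobservable) factors
x = f @ q.T + rng.multivariate_normal(np.zeros(p), v_eps, size=n)

# Regression (Thomson) scores: f_hat = q' V_x^{-1} x for each observation.
f_hat = x @ np.linalg.solve(v_x, q)             # rows are estimated f'
print(np.round(np.c_[f, f_hat], 2))             # true vs. estimated factors
```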

f) An algorithmic-computational realization of the statistical-estimation and hypothesis-testing procedures.
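
One standard computational realization of the maximum-likelihood estimation in the model (1')–(2) is the EM algorithm; the following sketch is a later, textbook-style scheme given here only as an illustration of problem f), not as the article's prescribed procedure:

```python
import numpy as np

def fa_em(x, m, n_iter=500):
    """EM iterations for the normal factor model x = q f + eps;
    returns estimates of q and of the diagonal of V_eps."""
    n, p = x.shape
    s = np.cov(x, rowvar=False)                 # sample covariance matrix
    rng = np.random.default_rng(0)
    q = rng.normal(size=(p, m))                 # arbitrary starting loadings
    psi = np.diag(s).copy()                     # starting specific variances
    for _ in range(n_iter):
        # E-step: moments of f given x under the current (q, psi),
        # via beta = q' (q q' + V_eps)^{-1}.
        beta = np.linalg.solve(q @ q.T + np.diag(psi), q).T
        e_xf = s @ beta.T                       # average of x E[f | x]'
        e_ff = np.eye(m) - beta @ q + beta @ s @ beta.T  # average E[f f' | x]
        # M-step: maximize the expected complete-data log-likelihood.
        q = e_xf @ np.linalg.inv(e_ff)
        psi = np.diag(s - q @ e_xf.T)
    return q, psi

# Usage on centred data x (an (n x p)-array): q_hat, psi_hat = fa_em(x, 2)
```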

Most work concerning theoretically based solutions of this list of problems has been carried out within the bounds of the linear normal model of factor analysis described above.

However, in practical applications one makes wide use of more general versions of factor analysis models: non-linear models, models constructed from non-quantitative variables, and models operating with three-dimensional arrays of initial data (to the two traditional dimensions of the original data, the dimension $ p $ and the number of observations $ n $, is added one more spatial or temporal coordinate). Such models are not, as a rule, accompanied by any convincing mathematical-statistical analysis of their properties, but rest on computational procedures of a heuristic or semi-heuristic character.

References

[1] H.H. Harman, "Modern factor analysis" , Univ. Chicago Press (1976)
[2] S.A. Aivazyan, Z.I. Bezhaeva, O.V. Staroverov, "Classifying multivariate observations" , Moscow (1974) (In Russian)
[3] C. Spearman, " "General intelligence", objectively determined and measured" , Amer. J. Psychology , 15 (1904) pp. 201–293
[4] T.W. Anderson, H. Rubin, "Statistical inference in factor analysis" , Proc. 3rd Berkeley Symp. Math. Statist. Probab. , 5 , Univ. California Press (1956) pp. 111–150
[5] C.R. Rao, "Estimation and tests of significance in factor analysis" Psychometrika , 20 (1955) pp. 93–111

Comments

There is a tremendous amount of literature on factor analysis nowadays. See, e.g., the journal Psychometrika and [a1]. The classical factor analysis model described in the main article above is nowadays considered a special member of the class of linear structural models, cf. [a2], [a3].

References

[a1] D.N. Lawley, A.E. Maxwell, "Factor analysis as a statistical method" , Butterworths (1971)
[a2] K.G. Jöreskog, D. Sörbom, "LISREL IV. Analysis of linear structural relationships by maximum likelihood, instrumental variables, and least squares methods" , Scientific Software (1984)
[a3] B.S. Everitt, "An introduction to latent variable methods" , Chapman & Hall (1984)
This article was adapted from an original article by S.A. Aivazyan (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.