### 6.3 Factor Analysis

Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces the attribute space from a larger number of observed variables to a smaller number of factors.

In many scientific fields, particularly the behavioral and social sciences, variables such as ‘intelligence’ or ‘leadership quality’ cannot be measured directly. Such variables, called latent variables, are instead measured through other, quantifiable variables that reflect the underlying variables of interest. Factor analysis attempts to explain the correlations among the observed variables in terms of underlying factors, which are not directly observable.

Factor analysis closely resembles principal components analysis: both techniques use linear combinations of variables to explain sets of observations on many variables. In principal components analysis, the intrinsic interest is in the observed variables themselves, and the combinations are primarily a tool for simplifying their interpretation. In factor analysis, the intrinsic interest is in the underlying factors, and the observed variables are of relatively little interest; linear combinations are formed in order to derive the factors.

#### The factor analysis model

The factor analysis model can be expressed in matrix notation as:

x = μ + Λf + u                                                                                                                               (1)

where

Λ = {λ_ij} is a p × k matrix of constants, called the matrix of factor loadings.

f = a random vector of length k representing the k common factors.

u = a random vector of length p representing the unique factors associated with the original variables.

The common factors F1, F2, …, Fk are common to all the X variables and are assumed to have mean 0 and variance 1. Each unique factor Ui is unique to the corresponding Xi. The unique factors are also assumed to have mean 0 and to be uncorrelated with the common factors and with each other.
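As a concrete illustration, the model in equation (1) can be simulated directly. The means, loadings, and unique variances below are hypothetical values chosen purely for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: p = 4 observed variables, k = 2 common factors
mu  = np.array([10.0, 5.0, 3.0, 7.0])            # mean vector of x
Lam = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.2, 0.7],
                [0.1, 0.8]])                     # factor loadings (p x k)
psi = np.array([0.18, 0.32, 0.47, 0.35])         # unique-factor variances

n = 100_000
f = rng.standard_normal((n, 2))                  # common factors: mean 0, variance 1
u = rng.standard_normal((n, 4)) * np.sqrt(psi)   # unique factors: mean 0, variance psi

x = mu + f @ Lam.T + u                           # equation (1), one row per observation

# The sample covariance of x approaches Lam Lam^T + diag(psi) (equation (2))
print(np.round(np.cov(x, rowvar=False), 2))
```

With a large sample, the empirical covariance of the simulated x closely matches the model-implied covariance of equation (2).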

Equivalently, the covariance matrix Σ of x can be decomposed into a factor covariance matrix and an error covariance matrix:

Σ = ΛΛᵀ + Ψ                                                                                                                                    (2)

where

Ψ = Var(u) is the (diagonal) covariance matrix of the unique factors.

Λᵀ is the transpose of Λ.

The diagonal of the factor covariance matrix ΛΛᵀ is called the vector of communalities; the i-th communality hᵢ² is the sum of the squared loadings in the i-th row of Λ.
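A small numeric sketch of decomposition (2) and of the communalities, using a hypothetical 4-variable, 2-factor loading matrix:

```python
import numpy as np

# Hypothetical loadings for p = 4 variables on k = 2 factors
Lam = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.2, 0.7],
                [0.1, 0.8]])
Psi = np.diag([0.18, 0.32, 0.47, 0.35])   # unique (error) variances

# Equation (2): model-implied covariance matrix
Sigma = Lam @ Lam.T + Psi

# Communalities: row sums of squared loadings = diagonal of Lam Lam^T
h2 = (Lam ** 2).sum(axis=1)

print(h2)              # communalities: 0.82, 0.68, 0.53, 0.65
print(np.diag(Sigma))  # total variance = communality + unique variance
```

Each variable's total variance splits exactly into its communality plus its unique variance, which is the decomposition the next paragraphs describe in words.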

The factor loadings are the correlation coefficients between the variables and the factors, and they are the basis for assigning a label to each factor. Analogous to Pearson's r, the squared factor loading is the percentage of the variance in the variable that is explained by the factor.

The sum of the squared factor loadings for all factors for a given variable is the variance in that variable accounted for by all the factors, and this is called the communality. In complete principal components analysis, with no factors dropped, communality is equal to 1.0, or 100% of the variance of the given variable.
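This can be checked numerically: when all components are retained, the loadings of a principal components analysis of a correlation matrix reproduce it exactly, so every communality equals 1.0. The data below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # illustrative random data
R = np.corrcoef(X, rowvar=False)         # 5 x 5 correlation matrix

w, V = np.linalg.eigh(R)                 # eigendecomposition of R
L = V * np.sqrt(w)                       # component loadings, all 5 components kept

h2 = (L ** 2).sum(axis=1)                # communalities: row sums of squared loadings
print(h2)                                # every entry is 1.0
```

Because no components are dropped, L Lᵀ rebuilds R exactly, and the diagonal of a correlation matrix is all ones.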

The factor analysis model does not extract all the variance; it extracts only the proportion of variance that is due to the common factors and shared by several items. The proportion of an item's variance that is due to the common factors (shared with other items) is its communality; the proportion of variance that is unique to the item is then the item's total variance minus its communality.

The solution of equation (2) is not unique (unless the number of factors is 1), which means that the factor loadings are inherently indeterminate: for any orthogonal matrix R, the rotated loadings ΛR reproduce Σ equally well, since (ΛR)(ΛR)ᵀ = ΛΛᵀ. Any solution can therefore be rotated arbitrarily to obtain a new factor structure.

Various rotation strategies have been proposed in the literature (Varimax, Oblimin, Quartimin), the most common being Varimax rotation. The goal of these rotation strategies is to obtain a clear pattern of loadings, i.e., factors that are clearly marked by high loadings for some variables and low loadings for the others. This general pattern is called ‘simple structure’.

Varimax rotation seeks to maximize the variance of the squared normalized factor loadings across variables for each factor, which is equivalent to maximizing the variances of the columns of the matrix of squared normalized factor loadings.
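A minimal sketch of the classic iterative, SVD-based varimax algorithm (one standard formulation among several; the loading matrix in the demonstration is hypothetical):

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=100):
    """Rotate a p x k loading matrix L by an orthogonal matrix chosen to
    maximize the variance of the squared loadings within each column."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # converged: criterion stopped improving
            break
        d = d_new
    return L @ R

# Demonstration: a hypothetical simple-structure matrix, deliberately mixed
# by a rotation; varimax rotates it back toward simple structure.
L0 = np.array([[0.9, 0.1],
               [0.8, 0.2],
               [0.2, 0.7],
               [0.1, 0.8]])
theta = 0.6
mixed = L0 @ np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
rotated = varimax(mixed)
print(np.round(rotated, 2))
```

Because the rotation is orthogonal, the communalities (row sums of squared loadings) are exactly preserved; only the distribution of variance across the factors changes.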

Eigenvalues: The eigenvalue of a given factor measures the variance in all the variables that is accounted for by that factor, and may be computed as the sum of the factor's squared loadings over all the variables. The ratio of eigenvalues is the ratio of the explanatory importance of the factors with respect to the variables. If a factor has a low eigenvalue, it contributes little to the explanation of the variances in the variables and may be ignored.

Note that the eigenvalues associated with the unrotated and rotated solution will differ, though their total will be the same.
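Both points can be verified numerically with a hypothetical loading matrix: an orthogonal rotation redistributes the per-factor eigenvalues, but their total is invariant:

```python
import numpy as np

# Hypothetical 4 x 2 loading matrix
Lam = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.2, 0.7],
                [0.1, 0.8]])

# Eigenvalue of each factor: column sum of squared loadings
eig = (Lam ** 2).sum(axis=0)
print(eig)                              # per-factor eigenvalues: 1.50 and 1.18

# An orthogonal rotation redistributes the eigenvalues across factors...
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eig_rot = ((Lam @ R) ** 2).sum(axis=0)

# ...but the total variance explained is unchanged
print(eig.sum(), eig_rot.sum())
```

The invariance follows from the trace identity: the total sum of squared loadings is trace(RᵀΛᵀΛR) = trace(ΛᵀΛ) for any orthogonal R.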