#### 6.5.8 Mathematics of Correspondence Analysis

Notation

Contingency table $N$ ($I \times J$) with grand total $n$

Row masses $r_i$ = row sums/grand total $= n_{i+}/n$

Column masses $c_j$ = column sums/grand total $= n_{+j}/n$

The correspondence matrix $P$ is defined as the original table (or matrix) $N$ divided by the grand total $n$, i.e. $P = N/n$.

The matrix of row profiles consists of the rows of the correspondence matrix $P$ divided by their respective row sums (i.e. row masses), which can be written as:

Matrix of row profiles $= D_r^{-1}P$

where $D_r$ is the diagonal matrix of row masses.

The matrix of column profiles consists of the columns of the correspondence matrix $P$ divided by their respective column sums.

Matrix of column profiles $= P D_c^{-1}$

where $D_c$ is the diagonal matrix of the column masses.
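As a quick numerical check of these definitions, the sketch below (using NumPy, on a small made-up contingency table, so the values are purely illustrative) computes the correspondence matrix, the masses, and both profile matrices; each row profile sums to 1 across its row, and each column profile sums to 1 down its column.

```python
import numpy as np

# Hypothetical 3x2 contingency table (illustrative values only)
N = np.array([[10.0, 20.0],
              [30.0, 40.0],
              [50.0, 60.0]])

n = N.sum()                      # grand total
P = N / n                        # correspondence matrix P = N/n
r = P.sum(axis=1)                # row masses  r_i = n_{i+}/n
c = P.sum(axis=0)                # column masses c_j = n_{+j}/n

row_profiles = P / r[:, None]    # D_r^{-1} P : each row sums to 1
col_profiles = P / c[None, :]    # P D_c^{-1} : each column sums to 1
```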

The correspondence analysis problem is to find a low-dimensional approximation to the original data matrix that represents both the row and column profiles,

$$R = D_r^{-1}P$$

$$C = D_c^{-1}P^T$$

in a low $k$-dimensional subspace, where $k$ is less than $I$ or $J$. These two $k$-dimensional subspaces (one for the row profiles and one for the column profiles) have a geometric correspondence that enables us to represent both the rows and columns in the same display.

Since we wish to graphically represent the distances between row (or column) profiles, we orient the configuration of points at the center of gravity of both sets. The centroid of the set of row points in its space is $c$, the vector of column masses; this is the average row profile. The centroid of the set of column points in its space is $r$, the vector of row masses; this is the average column profile.

To perform the analysis with respect to the center of gravity, $P$ is centered "symmetrically" by rows and columns, i.e. $P - rc^T$, so that it corresponds to the average profiles of both sets of points. The solution to finding a representation of both sets of points is the singular value decomposition of the matrix of standardized residuals, i.e. the $I \times J$ matrix with elements:

$$a_{ij} = \frac{p_{ij} - r_i c_j}{\sqrt{r_i c_j}}$$

The singular value decomposition (SVD) is defined as the decomposition of an $I \times J$ matrix $A$ into the product of three matrices

$$A = U \Gamma V^T \tag{1}$$

where the matrix $\Gamma$ is a diagonal matrix of positive numbers in decreasing order:

$$\gamma_1 \ge \gamma_2 \ge \dots \ge \gamma_k > 0 \tag{2}$$

where $k$ is the rank of $A$, and the columns of the matrices $U$ and $V$ are orthonormal, i.e.

$$U^T U = I \qquad V^T V = I \tag{3}$$

where $U^T$ is the transpose of $U$, and $V^T$ is the transpose of $V$.

$\gamma_1, \gamma_2, \dots, \gamma_k$ are called singular values.

The columns of $U$ ($u_1, u_2, \dots, u_k$) are called left singular vectors.

The columns of $V$ ($v_1, v_2, \dots, v_k$) are called right singular vectors.
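The properties in Equations (1)–(3) can be verified numerically; a minimal sketch using NumPy's `numpy.linalg.svd` on an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))        # arbitrary I x J matrix

# numpy returns U, the singular values, and V^T
U, gamma, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values are nonnegative and in decreasing order (Eq. 2)
assert np.all(gamma[:-1] >= gamma[1:]) and np.all(gamma >= 0)

# Columns of U and V are orthonormal: U^T U = I, V^T V = I (Eq. 3)
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))

# The decomposition reconstructs A (Eq. 1)
assert np.allclose(U @ np.diag(gamma) @ Vt, A)
```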

Consider a set of $I$ points in $J$-dimensional space, whose coordinates are in the rows of the matrix $Y$, with masses $m_1, m_2, \dots, m_I$ assigned to the respective points, and where the space is structured by the weighted Euclidean metric with dimension weights $q_1, q_2, \dots, q_J$ associated with the respective dimensions. In other words, the distance between any two points, say $x$ and $y$, is equal to

$$d(x, y) = \left[(x - y)^T D_q (x - y)\right]^{1/2} \tag{4}$$

Let $D_m$ and $D_q$ be the diagonal matrices of point masses and dimension weights respectively.

Let $m$ be the vector of point masses (we have already assumed that the masses sum to one):

$$\mathbf{1}^T m = 1$$

where $\mathbf{1}$ is the vector of ones.

Any low-dimensional configuration of the points can be derived directly from the singular value decomposition of the matrix:

$$A = D_m^{1/2}\left(Y - \mathbf{1}\bar{y}^T\right)D_q^{1/2} \tag{5}$$

where $\bar{y} = Y^T m$ is the centroid of the rows of $Y$.

Applying the singular value decomposition to the above equation, we find that the principal coordinates of the row points (i.e. projections of the row profiles onto the principal axes) are contained in the following matrix:

$$F = D_m^{-1/2}\, U \Gamma \tag{6}$$

The coordinates of the points in an optimal $\alpha$-dimensional subspace are contained in the first $\alpha$ columns of $F$. The principal axes of this space are contained in the matrix

$$D_q^{-1/2}\, V$$

Here we have two special cases of the above general result, viz. the row problem and the column problem. These problems involve reducing the dimensionality of the row profiles and the column profiles, where each set of points has its associated masses and chi-square distances. Both problems reduce to the singular value decomposition of the same matrix of standardized residuals.

Row problem

The row problem consists of a set of $I$ profiles in the rows of $R = D_r^{-1}P$, with masses $r$ in the diagonal matrix $D_r$, in a space with distance defined by the diagonal matrix $D_c^{-1}$. The centroid of the row profiles can be derived as follows:

$$r^T D_r^{-1} P = \mathbf{1}^T P = c^T$$

where $c^T$ is the row vector of the column masses.

The matrix $A$ in Equation (5) can be written as

$$A = D_r^{1/2}\left(D_r^{-1}P - \mathbf{1}c^T\right)D_c^{-1/2} \tag{7}$$

which can be rewritten as

$$A = D_r^{-1/2}\left(P - rc^T\right)D_c^{-1/2} \tag{8}$$
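Equation (8) can be checked numerically: because $P - rc^T$ has zero row and column sums, the standardized residual matrix annihilates $\sqrt{c}$ on the right and $\sqrt{r}$ on the left. A sketch, using a made-up contingency table (illustrative values only):

```python
import numpy as np

# Hypothetical 2x3 contingency table (illustrative values only)
N = np.array([[10.0, 20.0, 15.0],
              [30.0, 40.0, 25.0]])
n = N.sum()
P = N / n
r = P.sum(axis=1)
c = P.sum(axis=0)

# Standardized residuals: A = D_r^{-1/2} (P - r c^T) D_c^{-1/2}
A = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# Centering check: A sqrt(c) = 0 and sqrt(r)^T A = 0,
# since (P - r c^T) has zero row and column sums
assert np.allclose(A @ np.sqrt(c), 0.0)
assert np.allclose(np.sqrt(r) @ A, 0.0)
```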

Column problem

The column problem consists of a set of $J$ profiles in the columns of $P D_c^{-1}$, with masses $c$ in the diagonal matrix $D_c$, in a space with distance defined by the diagonal matrix $D_r^{-1}$.

By transposing the matrix $P D_c^{-1}$ of column profiles, we obtain $D_c^{-1}P^T$. The centroid of these profiles is $r^T$ (i.e. the row vector of row masses):

$$c^T D_c^{-1} P^T = \mathbf{1}^T P^T = r^T$$

The matrix in Equation (5),

$$D_c^{1/2}\left(D_c^{-1}P^T - \mathbf{1}r^T\right)D_r^{-1/2} \tag{9}$$

can be written as

$$D_c^{-1/2}\left(P^T - cr^T\right)D_r^{-1/2} = A^T$$

This is the transpose of the matrix $A$ derived for the row problem. It follows that both the row and column problems can be solved by the singular value decomposition of the same matrix of standardized residuals:

$$A = D_r^{-1/2}\left(P - rc^T\right)D_c^{-1/2} \tag{10}$$

The elements of this $I \times J$ matrix are:

$$a_{ij} = \frac{p_{ij} - r_i c_j}{\sqrt{r_i c_j}} \tag{11}$$


It follows from Equation (10) that the chi-square statistic can be decomposed into $I \times J$ components of the form:

$$\frac{n\left(p_{ij} - r_i c_j\right)^2}{r_i c_j}$$

The sum of squares of the elements of A is the total inertia of the contingency table.

Total inertia $\displaystyle = \sum_{i}\sum_{j} a_{ij}^2 = \sum_{i}\sum_{j}\frac{\left(p_{ij} - r_i c_j\right)^2}{r_i c_j}$

which is the chi-square statistic divided by n.
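These identities can be confirmed numerically; a sketch (made-up table, illustrative values only) checking that the sum of squared elements of $A$ equals $\chi^2/n$, and also equals the sum of the squared singular values of $A$:

```python
import numpy as np

# Hypothetical 3x3 contingency table (illustrative values only)
N = np.array([[16.0, 14.0, 10.0],
              [12.0, 18.0, 20.0],
              [ 8.0, 10.0, 12.0]])
n = N.sum()
P = N / n
r = P.sum(axis=1)
c = P.sum(axis=0)
E = np.outer(r, c)                      # expected proportions r_i c_j

A = (P - E) / np.sqrt(E)                # standardized residuals

chi2 = n * np.sum((P - E) ** 2 / E)     # Pearson chi-square statistic
total_inertia = np.sum(A ** 2)          # sum of squared elements of A

gamma = np.linalg.svd(A, compute_uv=False)  # singular values of A
```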

Thus, there are $k = \min(I-1,\ J-1)$ dimensions in the solution. The squares of the singular values of $A$, i.e. the eigenvalues of $A^TA$ or $AA^T$, also decompose the total inertia. These are denoted by $\lambda_k = \gamma_k^2$ and are called the principal inertias.

The principal coordinates of the row problem are:

$$F = D_r^{-1/2}\, U \Gamma \tag{15}$$

or in scalar notation:

$$f_{ik} = \frac{\gamma_k u_{ik}}{\sqrt{r_i}} \tag{16}$$

The principal coordinates of the columns are obtained from:

$$G = D_c^{-1/2}\, V \Gamma$$

or in scalar notation:

$$g_{jk} = \frac{\gamma_k v_{jk}}{\sqrt{c_j}}$$

The standard coordinates of the rows are the principal coordinates divided by their respective singular values, i.e.

$$X = F\Gamma^{-1} = D_r^{-1/2}\, U \tag{17}$$

or in scalar notation:

$$x_{ik} = \frac{u_{ik}}{\sqrt{r_i}}$$

The standard coordinates of the columns are the principal coordinates divided by their respective singular values:

$$Y = G\Gamma^{-1} = D_c^{-1/2}\, V \tag{18}$$

i.e.

$$y_{jk} = \frac{v_{jk}}{\sqrt{c_j}}$$

Each principal inertia $\lambda_k$ is decomposed into components for each row $i$:

$$\lambda_k = \sum_i r_i f_{ik}^2$$

or in matrix notation

$$F^T D_r F = \Gamma^2 \tag{19}$$

The contribution of the $i$th row to the principal inertia $\lambda_k$ is equal to $r_i f_{ik}^2$.

For the $i$th row, the inertia components over all $k$ axes sum up to the inertia of the $i$th row, which is identical to the sum of squared elements in the $i$th row of $A$:

$$\sum_k r_i f_{ik}^2 = \sum_j a_{ij}^2 \tag{20}$$
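The coordinate formulas and the inertia decompositions above can be verified numerically; a sketch (made-up table, illustrative values only) computing the principal coordinates and checking $\lambda_k = \sum_i r_i f_{ik}^2$ (and the analogue for columns) as well as Equation (20):

```python
import numpy as np

# Hypothetical 3x3 contingency table (illustrative values only)
N = np.array([[16.0, 14.0, 10.0],
              [12.0, 18.0, 20.0],
              [ 8.0, 10.0, 12.0]])
n = N.sum()
P = N / n
r = P.sum(axis=1)
c = P.sum(axis=0)
E = np.outer(r, c)
A = (P - E) / np.sqrt(E)                  # standardized residuals

U, gamma, Vt = np.linalg.svd(A, full_matrices=False)

F = (U * gamma) / np.sqrt(r)[:, None]     # row principal coords D_r^{-1/2} U Gamma
G = (Vt.T * gamma) / np.sqrt(c)[:, None]  # column principal coords D_c^{-1/2} V Gamma

lam = gamma ** 2                          # principal inertias lambda_k

# lambda_k = sum_i r_i f_ik^2, and symmetrically for the columns
assert np.allclose(r @ F ** 2, lam)
assert np.allclose(c @ G ** 2, lam)

# Row inertia identity (Eq. 20): sum_k r_i f_ik^2 = sum_j a_ij^2
assert np.allclose(r * (F ** 2).sum(axis=1), (A ** 2).sum(axis=1))
```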
