## 8.1 Non-metric Multidimensional Scaling

Input data: The input data is an $n \times n$ matrix of dissimilarities:

$$
D = \begin{pmatrix}
0 & d_{12} & \cdots & d_{1n} \\
d_{21} & 0 & \cdots & d_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d_{n1} & d_{n2} & \cdots & 0
\end{pmatrix}
$$

where $d_{ij}$ is the dissimilarity between objects $i$ and $j$. Because the data are symmetric (the dissimilarity between $i$ and $j$ is the same as that between $j$ and $i$), $d_{ij} = d_{ji}$. Since each object is identical to itself, $d_{ii} = 0$. Hence, only one half of the matrix, usually the lower triangle below the diagonal, is used.
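As a small sketch of this input format, the snippet below builds a full symmetric dissimilarity matrix from only the lower triangle; the values are hypothetical, chosen just to illustrate the layout:

```python
import numpy as np

# Hypothetical dissimilarities for n = 4 objects, given as the lower
# triangle below the diagonal, row by row: d21, d31, d32, d41, d42, d43.
lower = [2.0, 5.0, 3.0, 4.0, 6.0, 1.0]

n = 4
D = np.zeros((n, n))
D[np.tril_indices(n, k=-1)] = lower   # fill the lower triangle
D = D + D.T                           # symmetry: d_ij = d_ji
# The diagonal stays zero: d_ii = 0.
print(D)
```

Because the matrix is symmetric with a zero diagonal, the six lower-triangle entries fully determine all sixteen entries of $D$.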

Since the data $d_{ij}$ are non-metric, they are interpreted as distance-like quantities, not actual distances. The aim of MDS is to transform such data into a set of genuine Euclidean distances. The solution, i.e., the final configuration, consists of an arrangement of points in a small number of dimensions, located such that the distances between the points match the dissimilarities between the objects as closely as possible.

In non-metric MDS, only the rank order of entries in the data matrix (not the actual dissimilarities) is assumed to contain the significant information. Hence, the distances of the final configuration should as far as possible be in the same rank order as the original data. Thus, the purpose of the non-metric MDS algorithm is to find a configuration of points whose distances reflect as closely as possible the rank order of the data.

It may be pointed out that a perfect ordinal re-scaling of the data into distances is usually not possible. What is required is as good a scaling as possible. This essentially involves finding a series of configurations whose inter-point distances conform more and more closely to the data.

Now, the question is how to construct the best possible re-scaling. This can be done through monotone regression, which involves the calculation of a new set of distances, usually called pseudo-distances, since they are not actual distances corresponding to any real configuration. They are also called fitted distances or disparities. The pseudo-distances are denoted as $\hat{d}_{ij}$.

The given dissimilarities $d_{ij}$ are used to generate the pseudo-distances $\hat{d}_{ij}$, which are approximately related to the given dissimilarities by a monotonic increasing function $f$:

$$
\hat{d}_{ij} \approx f(d_{ij})
$$

where the function $f$ has the property:

$$
d_{ij} < d_{kl} \implies f(d_{ij}) \le f(d_{kl})
$$

The rank correlation between the $d_{ij}$ and $f(d_{ij})$ is therefore equal to 1; the rank correlation between $d_{ij}$ and $\hat{d}_{ij}$ is close to 1.
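The claim that a monotonic increasing transform preserves rank order exactly can be checked numerically. Below is a minimal sketch: `rank_corr` is a hand-rolled Spearman rank correlation (assuming no tied values), and `f` is an arbitrary hypothetical monotone function:

```python
import numpy as np

def rank_corr(a, b):
    """Spearman rank correlation (assumes no ties, for illustration)."""
    ra = np.argsort(np.argsort(a)).astype(float)   # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)   # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

d = np.array([3.0, 1.0, 4.0, 2.0])   # hypothetical dissimilarities
f = lambda x: x ** 2 + 1             # any monotonic increasing f
print(rank_corr(d, f(d)))            # → 1.0: f preserves the rank order
```

Any other strictly increasing choice of `f` gives the same result, which is exactly why only the rank order of the data matters.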

Note that only the rank order is important, and the scaling is ordinal.
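The standard computational tool for monotone regression is the pool-adjacent-violators algorithm (PAVA). The sketch below is a simple, assumed implementation: given configuration distances listed in the rank order of the dissimilarities, it returns the closest weakly monotone (non-decreasing) sequence in the least-squares sense, which serves as the pseudo-distances:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    blocks = []                                   # [mean, count] per block
    for v in map(float, y):
        blocks.append([v, 1])
        # merge backwards while adjacent blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([[v] * w for v, w in blocks])

# Distances in the rank order of the dissimilarities; the fitted
# pseudo-distances are the nearest weakly monotone sequence.
dist = [0.5, 1.2, 0.9, 1.5]
print(pava(dist))   # → [0.5, 1.05, 1.05, 1.5]
```

Note how the out-of-order pair (1.2, 0.9) is pooled to its mean 1.05: unequal distances are fitted by equal pseudo-distances, which is precisely the weak monotonicity discussed below.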

The most common approach to determine the disparities $\hat{d}_{ij}$ and the underlying configuration of points $x_1, x_2, \ldots, x_n$ in $p$ dimensions is an iterative process, commonly referred to as the Shepard-Kruskal algorithm. A simplified view of the MDS algorithm is as follows:

• Assign points to arbitrary coordinates in a p-dimensional space.
• Compute Euclidean distances among all pairs of points, to form the DHAT matrix.
• Compare the DHAT matrix with the input D matrix by evaluating the stress function.
• Adjust coordinates of each point in the direction that maximally reduces the stress.

After determining the dissimilarity matrix D and the corresponding scaling matrix A, an iterative process begins, which successively revises the dissimilarities and object coordinates until an adequate fit is obtained. The objective of the iterative process is to obtain a spatial representation in p dimensions such that the Euclidean distances among the objects are monotonically related to the original dissimilarities.

The iterative process comprises four steps:

• Step 1 (Initial phase) selects the number of dimensions $p$ and determines the initial configuration and the resulting distances $d^*_{ij}$.
• Step 2 (Non-metric phase) uses monotone regression to relate the distances $d^*_{ij}$ to the dissimilarities $d_{ij}$. The estimated regression produces a new set of values $\hat{d}_{ij}$, called disparities, that are monotonically related to the $d_{ij}$.
• Step 3 (Metric phase) revises the spatial configuration to obtain new distances $d^*_{ij}$, which are more closely related to the disparities generated in Step 2.
• Step 4 (Evaluation phase) determines the goodness of fit between the distances $d^*_{ij}$ and the disparities $\hat{d}_{ij}$.

If the fit is not adequate, Steps 2 and 3 are repeated.
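The four-step loop above can be sketched end to end. The code below is a deliberately simplified illustration, not the exact Shepard-Kruskal procedure: it uses pool-adjacent-violators for the non-metric phase, plain gradient descent on the raw (unnormalized) stress for the metric phase, handles tied dissimilarities naively, and all parameter values are illustrative assumptions:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    blocks = []                                   # [mean, count] per block
    for v in map(float, y):
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([[v] * w for v, w in blocks])

def nmds(D, p=2, iters=300, lr=0.05, seed=0):
    """Simplified Shepard-Kruskal loop (ties handled naively)."""
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(D[iu], kind="stable")      # rank order of the data
    X = np.random.default_rng(seed).normal(size=(n, p))   # Step 1

    def distances(X):
        diff = X[:, None, :] - X[None, :, :]
        return diff, np.sqrt((diff ** 2).sum(-1))

    for _ in range(iters):
        diff, dist = distances(X)
        d = dist[iu]
        dhat = np.empty_like(d)                   # Step 2: disparities by
        dhat[order] = pava(d[order])              # monotone regression
        R = np.zeros((n, n))                      # Step 3: move each point
        R[iu] = d - dhat                          # against the stress
        R += R.T                                  # gradient
        coef = np.divide(R, dist, out=np.zeros_like(R), where=dist > 0)
        X = X - lr * 2 * (coef[:, :, None] * diff).sum(axis=1)

    _, dist = distances(X)                        # Step 4: goodness of fit
    d = dist[iu]
    dhat = np.empty_like(d)
    dhat[order] = pava(d[order])
    return X, float(((d - dhat) ** 2).sum())      # raw stress

# Demo: dissimilarities taken from the corners of a unit square
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
X, stress = nmds(D)
```

Since this toy $D$ is already a set of Euclidean distances, the loop should drive the raw stress close to zero; production implementations instead minimize a normalized stress to avoid degenerate shrinking configurations.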

The fitting of pseudo-distances in the monotonic regression can be done under either of two conditions: weak monotonicity or strong monotonicity.

Weak monotonicity allows the fitting of unequal data by equal or unequal pseudo-distances.

Whenever $d_{ij} < d_{kl}$, then $\hat{d}_{ij} \le \hat{d}_{kl}$.
This procedure allows maximum flexibility in re-scaling the data ordinally.

Strong monotonicity allows the fitting of unequal data by only unequal pseudo-distances:

Whenever $d_{ij} < d_{kl}$, then $\hat{d}_{ij} < \hat{d}_{kl}$.
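The two conditions can be expressed as simple checks. The helper names below are hypothetical, and the brute-force pairwise comparison is only for illustration:

```python
def is_weakly_monotone(data, pseudo):
    """Weak: d_ij < d_kl must imply dhat_ij <= dhat_kl (ties allowed)."""
    pairs = list(zip(data, pseudo))
    return all(p1 <= p2 for a, p1 in pairs for b, p2 in pairs if a < b)

def is_strongly_monotone(data, pseudo):
    """Strong: d_ij < d_kl must imply dhat_ij < dhat_kl (strict)."""
    pairs = list(zip(data, pseudo))
    return all(p1 < p2 for a, p1 in pairs for b, p2 in pairs if a < b)

data   = [1.0, 2.0, 3.0]
pseudo = [0.5, 0.7, 0.7]                   # unequal data, equal fits
print(is_weakly_monotone(data, pseudo))    # → True
print(is_strongly_monotone(data, pseudo))  # → False
```

The example shows the practical difference: fitting the unequal data 2.0 and 3.0 by the equal pseudo-distance 0.7 is admissible under weak monotonicity but violates strong monotonicity.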

MDSCAL, which is based on Kruskal's algorithm, allows weak monotonic regression. The fitting is obtained by minimizing the sum of squares of the differences between the distances and the corresponding pseudo-distances over all the points of the configuration:

$$
\text{Minimize} \sum_{i<j} \left( d^*_{ij} - \hat{d}_{ij} \right)^2
$$

where $d^*_{ij}$ are the distances of the current configuration.
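This minimization objective is straightforward to compute; a minimal sketch (the function name `raw_stress` is an assumption, and the input values are hypothetical):

```python
import numpy as np

def raw_stress(dist, dhat):
    """Sum of squared differences between configuration distances
    and the corresponding pseudo-distances."""
    dist = np.asarray(dist, dtype=float)
    dhat = np.asarray(dhat, dtype=float)
    return float(((dist - dhat) ** 2).sum())

# Distances of a trial configuration vs. fitted pseudo-distances
print(raw_stress([1.0, 2.0, 2.5], [1.0, 1.8, 2.7]))   # ≈ 0.08
```

Each iteration of the algorithm adjusts the configuration so that this quantity decreases; a value of zero would mean the distances are perfectly monotone with the data.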

### Treatment of ties

A crucial problem in non-metric multidimensional scaling is how to treat ties in the ordinal data: should equal dissimilarities be fitted only by equal pseudo-distances? There are two main approaches, primary and secondary.

The primary approach treats ties as indeterminate and allows the pseudo-distances to preserve the equality or replace it by an inequality. In fact, the ties are broken if, in doing so, the goodness of fit is increased: if $d_{ij} = d_{kl}$, then $\hat{d}_{ij}$ may or may not equal $\hat{d}_{kl}$.

The secondary approach requires ties to be retained in the fitted values: if $d_{ij} = d_{kl}$, then $\hat{d}_{ij} = \hat{d}_{kl}$. Consequently, if the actual distances do not preserve every equality in the data, each infraction is counted as a deviation from monotonicity.

Generally, the primary approach should be preferred to the secondary approach, particularly when there is a large number of distinct dissimilarity values. If, however, there are only a few distinct values, then allowing tied data to be fitted by disparities in any order may destroy virtually all of the information in the data.
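One common way to realize the primary approach is to order the distances ascending within each block of tied dissimilarities before the monotone regression, so that a tie is broken whenever that improves the fit. A minimal sketch, assuming the hypothetical helper name `order_primary`:

```python
import numpy as np

def order_primary(dissim, dist):
    """Primary tie approach: within a block of tied dissimilarities,
    take the distances in ascending order, so the tie may be broken
    in whichever direction improves the monotone fit."""
    dissim = np.asarray(dissim)
    dist = np.asarray(dist)
    # lexsort: primary key dissimilarity, secondary key distance
    return np.lexsort((dist, dissim))

dissim = [1.0, 2.0, 2.0, 3.0]            # the middle pair is tied
dist   = [0.5, 1.4, 1.1, 2.0]
order = order_primary(dissim, dist)
print([dist[i] for i in order])          # → [0.5, 1.1, 1.4, 2.0]
```

After this reordering, the distance sequence in the example is already monotone, so no deviation is charged for the tie; the secondary approach would instead force the tied pair to share a single fitted value.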