Environment and development in coastal regions and in small islands
Coastal management sourcebooks 3

Part 2  The Acquisition, Correction and Calibration of Remotely Sensed Data

8  Water Column Correction Techniques
Summary  When trying to map or derive quantitative information about underwater habitats, the depth of water significantly affects the remotely sensed measurement. It does so to such an extent that it can cause us to confuse the spectra of (say) bare sand and seagrass. This chapter describes a method for compensating for the effects of variable depth. Implementation of the method was found to be cost-effective for improving the thematic accuracy of coastal habitat maps and essential for deriving empirical relationships between remotely sensed data and features of interest in the marine environment.
Introduction
The previous two chapters dealt with the geometric and radiometric correction of digital imagery. These processes are required for almost all remote sensing applications, whether land-orientated or marine. This chapter describes an additional pre-processing step which is only required when assessing underwater habitats.
When light penetrates water its intensity decreases exponentially with increasing depth. This process is known as attenuation and it exerts a profound effect on remotely sensed data of aquatic environments. The severity of attenuation differs with the wavelength of electromagnetic radiation (EMR). In the region of visible light, the red part of the spectrum attenuates more rapidly than the shorter-wavelength blue part (see Figure 8.1). As depth increases, the separability of habitat spectra declines (Figure 8.1). In practice, the spectrum of sand at a depth of 2 m will be very different from that at 20 m – yet the substratum is the same. In fact, the spectral signature of sand at 20 m may be similar to that of seagrass at (say) 3 m. The spectral radiances recorded by a sensor are therefore dependent both on the reflectance of the substrata and on depth. These two influences on the signal will create considerable confusion when attempting to use visual inspection or multispectral classification to map habitats. Since most marine habitat mapping exercises are only concerned with mapping benthic features, it is useful to remove the confounding influence of variable water depth. This chapter describes a fairly straightforward means of compensating for variable depth, but it is only truly applicable to clear waters such as those in coral reef environments. For more turbid waters, see the comments at the end of the chapter.

Light attenuation in water
The exponential decay of light intensity with increasing depth results from two processes, absorption and scattering. These processes have been discussed in an atmospheric context (Chapter 7) but it is worth pointing out the nature of attenuation in water.
Absorption involves the conversion of electromagnetic energy into other forms such as heat or chemical energy (e.g. photosynthesis in phytoplankton). The main absorbers in seawater are:
algae (phytoplankton),
inorganic and organic particulate matter in suspension (excluding algae),
dissolved organic compounds (yellow substances) which result from the breakdown of plant tissue,
water itself, which strongly absorbs red light and has a smaller effect on shorter wavelength blue light (hence the blue colour of clear water).
Absorption is wavelength-dependent (Figure 8.1). The chlorophyll in algae appears ‘green’ because it reflects in the central portion of the visible spectrum (i.e. the green) and absorbs strongly at either end. Dissolved organic compounds absorb strongly at the short-wavelength (blue) part of the spectrum and reflect strongly in the yellow-red end (hence they impart a yellow colour to the water).
EMR may interact with suspended particles in the water column and change direction. This process of scattering is largely caused by inorganic and organic particulate matter and increases with the suspended sediment load (turbidity) of the water. For further details see Open University (1989).
Classification of water bodies
The clarity of water bodies varies on many scales. For example, many coastal areas exhibit a seaward gradient of turbid to clear waters created by increases in depth (i.e. less resuspension of sediments) and reduced input from terrestrial sources such as sediment-laden rivers. On a larger scale, oceans also vary in their overall turbidity.
Jerlov (1951) formally classified oceanic water types according to their optical attenuation properties. Type I waters were represented by extremely clear oceanic waters. Most clear coastal waters were classified as Type II because attenuation tends to be greater than that for oceanic waters of low productivity. However, many water bodies were found to lie between Types I and II and the former was subsequently split into Types IA and IB (Jerlov 1964). Most coral reef waters fall into categories I or II. Type III waters are fairly turbid and some regions of coastal upwelling are so turbid that they are unclassified. The subject will not be explored further here; it was outlined because these terms are often mentioned in the literature. For further information, readers are referred to Jerlov (1976).
Compensating for the influence of variable depth on spectral data
Removal of the influence of depth on bottom reflectance would require: (i) a measurement of depth for every pixel in the image, and (ii) a knowledge of the attenuation characteristics of the water column (e.g. concentrations of dissolved organic matter). Good digital elevation models of depth are rare, particularly for coral reef systems where charts are often inaccurate (but see Zainal 1994). As a compromise, Lyzenga (1978, 1981) put forward a simple image-based approach to compensate for the effect of variable depth when mapping bottom features (hereafter referred to as water column correction). Rather than predicting the reflectance of the seabed, which is prohibitively difficult, the method produces a ‘depth-invariant bottom index’ from each pair of spectral bands. The technique was tested for water in the Bahamas and is only appropriate where water clarity is good (i.e. most reef and seagrass areas; Jerlov water Types I or II). The procedure is divided into several steps.
Do not be put off by the equations described below. They are easily implemented as shown in the worked example which follows.
Step 1 – removal of scattering in the atmosphere and external reflection from the water surface
Most published accounts of the water-correction method suggest prior application of a crude atmospheric correction (Lyzenga 1978, 1981, Spitzer and Dirks 1987, Armstrong 1993, Maritorena 1996). This is based on the ‘dark pixel subtraction’ method described in Chapter 7. A large number of pixels are sampled from ‘deep water’ and their average radiance (or DN) is then subtracted from all other pixels in each band respectively:
Atmospherically corrected radiance = L_{i} - L_{si}
where L_{i} is the pixel radiance in band i and L_{si} is the average radiance for deep water in band i. However, if a full atmospheric correction has already taken place so that pixel values have been converted to surface reflectance (Chapter 7), this process is unnecessary and values of surface reflectance can be used directly. In short, L_{si} can be ignored if a full atmospheric correction has been undertaken and this is preferred to using the cruder, dark pixel subtraction method.
Step 2 – linearise the relationship between depth and radiance
In relatively clear water, the intensity of light will decay exponentially with increasing depth (Figure 8.2). If values of light intensity (radiance) are transformed using natural logarithms (ln), this relationship with depth becomes linear (Figure 8.2, step 1). Transformed radiance values will therefore decrease linearly with increasing depth. If X_{i} is the transformed radiance of a pixel in band i, this step is written as:
X_{i} = ln (L_{i} - L_{si})
for data which have not been atmospherically corrected;
X_{i} = ln (L_{i})
for data which have been atmospherically corrected.
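Steps 1 and 2 amount to a subtraction and a logarithm per band. A minimal sketch in Python, in which the radiance values and the deep-water figure are invented purely for illustration:

```python
import numpy as np

# Invented radiances (DN) for one band of a small subscene.
L_i = np.array([[48.0, 45.0, 52.0],
                [60.0, 41.0, 39.0],
                [55.0, 38.0, 37.0]])

L_si = 36.5  # illustrative average deep-water radiance for this band

# Step 1: crude atmospheric correction by deep-water subtraction.
# Step 2: natural-log transform, which linearises the decay with depth.
X_i = np.log(L_i - L_si)
```

With data that have already received a full atmospheric correction, the subtraction is simply omitted (X_i = ln L_i).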

Step 3 – calculate the ratio of attenuation coefficients for band pairs
The irradiance diffuse attenuation coefficient (hereafter referred to as attenuation coefficient, k) describes the severity of light attenuation in water for that spectral band. It is related to radiance and depth by the following equation where a is a constant, r is the reflectance of the bottom and z is depth:
L_{i} = L_{si} + a.r.e^{(-2k_{i}z)}
Theoretically, it would be possible to rearrange the equation and generate an image of bottom type, r (reflectance) which is the measurement we seek. However, this approach is not feasible because there are too many unknown quantities – i.e. the value of the constant a, the attenuation coefficient for each band and the depth of water at each pixel. The method developed by Lyzenga does not require the actual calculation of these parameters but gets around the problem by using information from more than one band. All that is required is the ratio of attenuation coefficients between pairs of spectral bands. Use of ratios cancels out many of these unknowns and the ratios can be determined from the imagery itself. Two bands are selected and a biplot made of (log transformed) radiances (or reflectances) for the same substratum at differing depths (Figure 8.2, step 2). Since the effect of depth on measured radiance has been linearised and the substratum is constant, pixel values for each band will vary linearly according to their depth (i.e. points will fall on this straight line). The slope of the biplot represents the relative amounts of attenuation in each band. In fact, the slope represents the ratio of attenuation coefficients between bands. Conceptually, the line represents an axis of radiance (reflectance) values for a unique bottom type. As one moves along the line, the only change is depth.
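The claim that the biplot gradient equals k_i/k_j can be checked numerically. In this sketch all parameter values (a, r, the attenuation coefficients and deep-water radiances) are invented: one substratum is simulated over a range of depths using the equation above, the radiances are log-transformed, and a straight-line fit recovers the ratio.

```python
import numpy as np

# Invented parameters for two bands over a single substratum.
z = np.linspace(1.0, 15.0, 50)   # depths (m)
k_i, k_j = 0.08, 0.30            # attenuation coefficients (m^-1)
a, r = 1.0, 0.4                  # constant and bottom reflectance
L_si, L_sj = 5.0, 2.0            # deep-water radiances

L_i = L_si + a * r * np.exp(-2 * k_i * z)
L_j = L_sj + a * r * np.exp(-2 * k_j * z)

# Log-transformed values fall on a straight line whose gradient
# is the ratio of attenuation coefficients k_i/k_j.
slope = np.polyfit(np.log(L_j - L_sj), np.log(L_i - L_si), 1)[0]
print(round(slope, 4))   # k_i/k_j = 0.08/0.30, i.e. about 0.2667
```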
Step 4 – generation of a depth-invariant index of bottom type
If radiance (reflectance) values for another bottom type were added to the biplot (Figure 8.2, step 3), a similar line would be obtained – once again, the only change between data points would be depth. However, since the second bottom type will not have the same reflectance as the first, the new line will be displaced either above or below the existing line (e.g. if line 1 was derived from sand, which generally has a high reflectance, and line 2 was generated from seagrass, with its lower reflectance, the latter line would lie below that for sand). The gradient of each line should be identical because the ratio of attenuation coefficients k_{i}/k_{j} is only dependent on the wavelength of the bands and the clarity of the water.
An index of bottom type can be obtained by noting the y-intercept for each bottom type (Figure 8.2, step 3). For example, while pixel values lying on the line for sand show considerable variation in radiance, they all represent the same bottom type and have the same y-intercept. The y-intercept for pixels of seagrass is considerably different. The y-axis therefore becomes an axis (or index) of bottom type.
Of course, not all pixel values for a given bottom type lie along a perfectly straight line (Figure 8.3). This is because of natural variation in bottom reflectance, patches of turbid water and sensor noise. Nevertheless, each pixel can be assigned an index of bottom type once the ratio of attenuation coefficients has been estimated (k_{i}/k_{j}). This is accomplished by ‘connecting’ each pixel on the biplot to the y-axis using an imaginary line of gradient k_{i}/k_{j}. Pixel values on the biplot are then converted to their corresponding positions on the y-axis (index of bottom type). Using this method, each pixel value is converted to an index of bottom type, which is independent of depth. These depth-invariant indices of bottom type lie along a continuum, but pixels from similar habitats will have similar indices.

The mathematics of the depthinvariant index are simple and are based on the equation of a straight line:
y = p + q.x

where p is the y-intercept and q is the gradient of the regression of y on x. The equation can be rearranged to give the y-intercept:

p = y - q.x

which in the case of Figure 8.2 (step 3) becomes

depth-invariant index = ln(L_{i} - L_{si}) - [(k_{i}/k_{j}) . ln(L_{j} - L_{sj})]

or, if a full atmospheric correction has been undertaken already,

depth-invariant index = ln(L_{i}) - [(k_{i}/k_{j}) . ln(L_{j})]
Each pair of spectral bands will produce a single depth-invariant band of bottom type. If the imagery has several bands with good water penetration properties (e.g. Landsat TM, CASI), multiple depth-invariant bands can be created. The depth-invariant bands may then be used for image processing or visual inspection instead of the original bands.
The theory described here scales the depth-invariant index to the y-intercept. Lyzenga (1978, 1981) provided a further modification which rescaled the values to an axis orthogonal to the biplot slope. However, such a refinement does not alter the functionality of the process and it has not been included here for simplicity’s sake.
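Step 4 reduces to one line of arithmetic per band pair. A sketch follows, assuming data without a full atmospheric correction; the radiance model and all numerical values are invented for the demonstration, which simulates one substratum at two depths and confirms that both give the same index:

```python
import numpy as np

def depth_invariant_index(L_i, L_j, L_si, L_sj, k_ratio):
    """Depth-invariant bottom index for a band pair (y-intercept form)."""
    return np.log(L_i - L_si) - k_ratio * np.log(L_j - L_sj)

# Same substratum simulated at two depths yields the same index:
for z in (2.0, 12.0):
    L_i = 5.0 + 0.4 * np.exp(-2 * 0.08 * z)   # invented band-i radiance
    L_j = 2.0 + 0.4 * np.exp(-2 * 0.30 * z)   # invented band-j radiance
    print(depth_invariant_index(L_i, L_j, 5.0, 2.0, 0.08 / 0.30))
```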
When to implement water column correction
The methods described in this chapter have three main applications for mapping coastal habitats:
Multispectral classification of marine habitats. The accuracies of some habitat maps of the Turks and Caicos Islands showed significant improvement once water column correction had taken place. Improvements to accuracy were greatest for imagery which had at least three water-penetrating spectral bands, from which three (or more) depth-invariant bottom indices could be created and input to the spectral classification (for a more detailed discussion, refer to Chapter 11).
Establishing quantitative empirical relationships between image data and marine features. Water column correction was found to be essential for relating digital image data to seagrass standing crop (see Chapter 16).
Visual interpretation of digital data. By removing much of the depth-induced variation in spectral data, water column correction makes for a better visual assessment of habitat types.
Worked example of depth-invariant processing

Band selection

Pairs of spectral bands are selected which have different bottom reflectances but good penetration of water (i.e. visible wavebands): e.g. Landsat TM 1/2, 1/3, 2/3; Landsat MSS 1/2; SPOT XS 1/2. Two CASI bands are presented in this example (although the process was carried out for several band pairs).

Calculation of deep water radiance (reflectance) in the absence of a full atmospheric correction

Calculation of the parameter L_{si}, the deep water radiance (reflectance), does not take long. A group of pixels is selected which represents deep water (i.e. greater than 40 m). The pixel data are analysed either in the image processing software or transferred to a spreadsheet (e.g. Microsoft Excel). The mean and standard deviation of radiance (reflectance) are calculated for each band. Armstrong (1993) recommended subtracting two standard deviations from the mean to account for sensor noise. This (lower) value is then used as L_{si} in subsequent calculations.

Selection of pixels of uniform substratum and variable depth

A series of groups of pixels are selected from the imagery that have the same bottom type but different depths (totalling approximately 100 pixels). Areas of submerged sand are good because they are fairly recognisable to an interpreter without much field experience (Figure 8.4). Both bands should exhibit attenuation, so avoid choosing areas of the imagery in which one of the bands is saturated. For example, shallow water (< 1 m) reflectances for Landsat TM band 1 will show little variation whereas reflectance for band 3 may vary considerably. The resulting biplot will have a gradient close to zero which cannot be used to determine the ratio of attenuation coefficients. Similarly, if the area chosen is too deep, one band may not penetrate and will only contain one value (L_{si}, the deep water radiance for that band).
Pixel data for the selected areas in both bands are transferred to a spreadsheet and converted to natural logarithms.
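The deep-water estimate described above (mean minus two standard deviations, after Armstrong 1993) is a one-liner once the sample is in an array; the sample values below are invented:

```python
import numpy as np

# Invented radiances sampled from water deeper than 40 m in one band.
deep = np.array([36.0, 37.2, 35.8, 36.5, 36.9, 35.5, 36.2, 37.0])

# Subtracting two standard deviations from the mean guards against
# sensor noise inflating the deep-water radiance estimate.
L_si = deep.mean() - 2 * deep.std(ddof=1)
print(L_si)
```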
Calculation of ratio of attenuation coefficients

Biplots of the transformed data are made and examined (Figure 8.3). Regions of saturation (vertical or horizontal lines of pixels on the biplot) are removed. The gradient of the line is not calculated using conventional least squares regression analysis (which is the standard equation given by most statistical packages), because the result depends on which band is chosen to be the dependent variable. Therefore, rather than minimising the mean square deviation from the regression line in the direction of the dependent variable, the regression line is placed where the mean square deviation (measured perpendicular to the line) is minimised. The following equations are used. Most spreadsheets have functions allowing the calculation of variance and covariance (σ_{ii} is the variance of band i, σ_{ij} is the covariance between bands i and j):

k_{i}/k_{j} = a + √(a² + 1)

where

a = (σ_{ii} - σ_{jj}) / (2σ_{ij})

and

σ_{ij} = (mean of X_{i}.X_{j}) - (mean of X_{i} × mean of X_{j})

(i.e. you will need a new column in the spreadsheet which multiplies X_{i} by X_{j}; from left to right, the inputs to the above equation are the average of this column – the mean of the products of X_{i} and X_{j} – the average of X_{i} and the average of X_{j}). In practice, this method of calculating the biplot gradient (k_{i}/k_{j}) does not differ greatly from the standard least squares method. For example, the standard calculation of gradient from CASI data in Figure 8.3 gave a value of 0.73; calculation of the gradient by minimising the mean square deviation perpendicular to the regression line gave 0.74. The latter value was used for the processing algorithm.

Implementation of depth-invariant algorithm to whole image (band pairs)

Prior to execution of depth-invariant processing, all areas of land and cloud should be masked out. It is best to set pixels for these areas to zero. The depth-invariant algorithm is implemented in image processing software once the ratios of attenuation coefficients have been calculated for band pairs. In keeping with the theory provided above, the following equation is used (e.g. for bands i and j in the absence of a full atmospheric correction):

depth-invariant index_{ij} = ln(L_{i} - L_{si}) - [(k_{i}/k_{j}) . ln(L_{j} - L_{sj})]
This can be implemented in one process. For example, a flow model of the process would have the following structure (Figure 8.5). The results are presented in Figure 8.6 (Plate 7).
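The flow model can be sketched end to end: estimate the gradient from a sample of uniform substratum (using the perpendicular-deviation variance/covariance formulae rather than ordinary least squares), then combine the two log-transformed bands. Everything numerical here is synthetic; on real imagery X_i and X_j would be whole masked bands, with the ratio estimated from the sand sample only.

```python
import numpy as np

def attenuation_ratio(X_i, X_j):
    """k_i/k_j from log-transformed band pairs, minimising the
    deviation perpendicular to the line (not vertical, as in OLS)."""
    cov = np.mean(X_i * X_j) - np.mean(X_i) * np.mean(X_j)
    a = (np.var(X_i) - np.var(X_j)) / (2 * cov)
    return a + np.sqrt(a ** 2 + 1)

# Synthetic log-transformed sand pixels over a range of depths.
z = np.linspace(1.0, 15.0, 40)
X_i = 1.0 - 2 * 0.08 * z    # band i, k_i = 0.08 m^-1 (invented)
X_j = 0.5 - 2 * 0.30 * z    # band j, k_j = 0.30 m^-1 (invented)

ratio = attenuation_ratio(X_i, X_j)   # recovers 0.08/0.30
index = X_i - ratio * X_j             # depth-invariant band
```

On this noiseless sample every pixel of the depth-invariant band takes the same value, as expected for a single substratum.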
If some values of the depth-invariant bottom index are negative, an offset can be incorporated to make all data positive. If the minimum value was -8.3 and the maximum value was 10.5, all values could be offset by adding 8.4. The new scale would be 0.1 to 18.9. To demonstrate the effectiveness of the depth-invariant transformation, the statistical precision of radiance values can be calculated before and after the transformation. For example, 115 pixels were sampled from sand areas of variable depth to determine the ratio of attenuation coefficients of Landsat MSS bands 1 and 2. The precision (standard error/mean) of these values was 0.008 for band 1 and 0.019 for band 2. This is because band 2 undergoes greater attenuation than band 1 and therefore has more variable values. The precision of depth-invariant bottom-index values for the same pixels was one to two orders of magnitude better at 0.0006. The variability in radiance due to depth has been greatly reduced for a given substratum type.
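The offset is a plain addition. A sketch using the figures quoted above (the intermediate index values are invented):

```python
import numpy as np

# Invented depth-invariant indices spanning the quoted range.
index = np.array([-8.3, -2.0, 0.0, 4.7, 10.5])

offset = 8.4                  # chosen so the minimum value becomes positive
shifted = index + offset
print(round(shifted.min(), 1), round(shifted.max(), 1))   # 0.1 18.9
```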
Time needed to undertake water column correction
The subject took approximately five days to research and the first water correction process took two days to implement (for a pair of SPOT bands). However, once the process had been carried out and the technique had been implemented in software, further bands took less than half a day to analyse. If the process outlined above is followed, we estimate that a satellite scene can be corrected in one and a half days and that subsequent band pairs will only take several hours of analysis each.
Published values of attenuation coefficients and their ratios
The attenuation coefficients of different spectral bands will vary according to the specific water types being sampled in the imagery. However, several authors have published attenuation coefficients and their ratios. While these cannot be used directly because they are image specific, they may provide a general guideline for those experimenting with the method for the first time (Table 8.1).
Table 8.1 Some published values of attenuation coefficients and their ratios for guidance purposes. For further details refer to the original papers.

Location             Sensor and band    Ratio of attenuation   Attenuation            Reference
                                        coefficients           coefficient (m^-1)
Bahamas              Landsat MSS 1/2    0.24                                          Lyzenga 1981
TCI                  Landsat MSS 1/2                                                  pers. obs.
Bahamas              Landsat TM 1/2                                                   Armstrong 1993
Bahamas              Landsat TM 2/3                                                   Armstrong 1993
Bahamas              Landsat TM 1/3                                                   Armstrong 1993
TCI                  Landsat TM 1/2                                                   pers. obs.
TCI                  Landsat TM 2/3                                                   pers. obs.
TCI                  Landsat TM 1/3                                                   pers. obs.
TCI                  SPOT XS 1/2                                                      pers. obs.
Pacific Ocean        Landsat TM 1                                                     Maritorena 1996
Pacific Ocean        SPOT XS 1                                                        Maritorena 1996
Pacific Ocean        SPOT XS 2                                                        Maritorena 1996
Pacific Ocean        SPOT XS 1/2                                                      Maritorena 1996
Moorea, S. Pacific   Landsat TM 1                                                     Maritorena 1996
Moorea, S. Pacific   SPOT XS 1                                                        Maritorena 1996
Moorea, S. Pacific   SPOT XS 2                                                        Maritorena 1996
Moorea, S. Pacific   SPOT XS 1/2                                                      Maritorena 1996

TCI: Turks and Caicos Islands. † Values in parentheses denote in situ measurements in the field.
Extensions of the Lyzenga method
The major limitation of depth-invariant processing is that turbid patches of water will create spectral confusion. However, where water properties are moderately constant across an image, the method strongly improves the visual interpretation of imagery and should improve classification accuracies (see relevant chapters later). Depth-invariant processing was found to be beneficial in the Turks and Caicos Islands (Mumby et al. 1998) where the horizontal Secchi distance at a depth of 0.5 m is of the order of 30–50 m. At present, it is not possible to give a threshold turbidity above which depth-invariant processing will be ineffective. In the absence of such information, we point out that Tassan (1996) has described a theoretical depth-invariant model for water of greater turbidity than specified by Lyzenga (1981). This method is mathematically complex and still requires field validation. For these reasons, it is not described further here. Those readers attempting to correct for depth effects in turbid waters are urged to consult Tassan’s paper (and possible follow-ups) directly.
An alternative ‘quick-fix’ approach may be possible where several water bodies can be clearly distinguished. In this case, the attenuation coefficients for each band will differ in each water type (e.g. seawater versus plumes of riverine water with a high content of yellow substance). If these water bodies are separated by subsetting the imagery (digitising outlines), depth-invariant processing may be applied to each segment individually (i.e. for each set of attenuation coefficients). We have not attempted this process but it might prove feasible under some circumstances.
Conclusion
Depth-invariant processing of digital spectral data is essential for establishing quantitative empirical relationships with marine features. It can also improve the accuracy of marine habitat maps and makes for improved visual interpretation of imagery.
If the study objective involves habitat mapping using multispectral classification (Chapter 10), water column correction is most appropriate for imagery with several water-penetrating spectral bands (e.g. Landsat TM, CASI).
References
Armstrong, R.A., 1993, Remote sensing of submerged vegetation canopies for biomass estimation. International Journal of Remote Sensing, 14, 621–627.
Jerlov, N.G., 1951, Optical Studies of Ocean Water. Report of Swedish Deep-Sea Expeditions, 3, 73–97.
Jerlov, N.G., 1964, Optical Classification of Ocean Water. In Physical Aspects of Light in the Sea (Honolulu: University of Hawaii Press), pp. 45–49.
Jerlov, N.G., 1976, Applied Optics. (Amsterdam: Elsevier Scientific Publishing Company).
Lyzenga, D.R., 1978, Passive remote sensing techniques for mapping water depth and bottom features. Applied Optics, 17, 379–383.
Lyzenga, D.R., 1981, Remote sensing of bottom reflectance and water attenuation parameters in shallow water using aircraft and Landsat data. International Journal of Remote Sensing, 2, 71–82.
Maritorena, S., 1996, Remote sensing of the water attenuation in coral reefs: a case study in French Polynesia. International Journal of Remote Sensing, 17, 155–166.
Mumby, P.J., Clark, C.D., Green, E.P., and Edwards, A.J., 1998, Benefits of water column correction and contextual editing for mapping coral reefs. International Journal of Remote Sensing, 19, 203–210.
Open University, 1989, Seawater: Its Composition, Properties and Behaviour (Exeter: A. Wheaton and Co. Ltd).
Spitzer, D., and Dirks, R.W.J., 1987, Bottom influence on the reflectance of the sea. International Journal of Remote Sensing, 8, 279–290.
Tassan, S., 1996, Modified Lyzenga’s method for macroalgae detection in water with nonuniform composition. International Journal of Remote Sensing, 17, 1601–1607.
Zainal, A.J.M., 1994, New technique for enhancing the detection and classification of shallow marine habitats. Marine Technology Journal, 28, 68–74.