Environment and development
in coastal regions and in small islands

Coastal management sourcebooks 3
Part 1
Remote Sensing for Coastal Managers An Introduction 

1 Introduction to Remote Sensing of Coastal Environments

Summary This introduction is aimed primarily at decision makers and managers who are unfamiliar with remote sensing. It seeks to demonstrate to non-specialists the essential differences between digital images, such as those generated by most satellite sensors, and photographic pictures. Firstly, what is meant by remote sensing of tropical coastal environments is briefly defined. Key reference books which give a grounding in the fundamental theoretical concepts of remote sensing are then listed. Finally, electromagnetic radiation (EMR), the interactions between it and objects of coastal management interest, and multispectral digital imagery are introduced using a hypothetical image of a small atoll as an example. 

Introduction

Remote sensing of tropical coastal environments involves the measurement of electromagnetic radiation reflected from or emitted by the Earth’s surface and the relating of these measurements to the types of habitat or the water quality in the coastal area being observed by the sensor. Some unconventional definitions of remote sensing culled from a recent conference are given below. Although tongue-in-cheek, there are perhaps grains of truth in many of them.

Remote sensing is . . .

One of the aims of this Handbook is to make sure that the more derogatory definitions cannot be applied in earnest!

The theoretical fundamentals of remote sensing are very well-covered in several textbooks and in this introductory chapter we will not attempt to duplicate the many excellent expositions that already exist on this topic. Some key reference books, each of which provides a good grounding in the theoretical concepts underlying remote sensing, are listed in Table 1.1.

Table 1.1 Key reference books on remote sensing, listed in reverse order of date of publication. Costs and ISBNs are for paperback editions where these exist and for cloth editions otherwise. Prices are for mid-1998 in pounds sterling or US dollars (except for Mather).
Reference  ISBN  Cost
Mather, P.M. (1999). Computer Processing of Remotely Sensed Images: An Introduction. (Second Edition). New York: Wiley. (Includes CD-ROM)  0-471-98550-3  £29.99
Sabins, F.F. (1996). Remote Sensing. Principles and Interpretation. (Third Edition). San Francisco: Freeman  0-7167-2442-1  £32.95
Wilkie, D.S. and Finn, J.T. (1996). Remote Sensing Imagery for Natural Resources Monitoring: A Guide for First-time Users. New York: Columbia University Press  0-231-07929-X  £22.00
Jensen, J.R. (1995). Introductory Digital Image Processing. A Remote Sensing Perspective. (Second Edition). Englewood Cliffs: Prentice-Hall  0-13-489733-1  $66.67
Richards, J.A. (1995). Remote Sensing Digital Image Analysis: An Introduction. (Second Edition). New York: Springer-Verlag  0-387-58219-3  $59.95
Lillesand, T.M. and Kiefer, R.W. (1994). Remote Sensing and Image Interpretation. (Third Edition). New York: Wiley  0-471-57783-9  £24.95
Barrett, E.C. and Curtis, L.F. (1992). Introduction to Environmental Remote Sensing. (Third Edition). London: Chapman and Hall  0-412-37170-7  £35.00
Cracknell, A.P. and Hayes, L.W. (1990). Introduction to Remote Sensing. London: Taylor and Francis  0-85066-335-0  £18.95
Harrison, B.A. and Judd, D.L. (1989). Introduction to Remotely Sensed Data. Canberra: Commonwealth Scientific and Industrial Research Organisation  0-643-04991-6  £43.95
Open Universiteit (1989). Remote Sensing. Course Book and Colour Images Book. Heerlen: Open Universiteit. (Available from The Open University, Milton Keynes, UK as PS670 study pack)  90-358-0654-9 / 90-358-0655-7  £55.00
Curran, P.J. (1986). Principles of Remote Sensing. London: Longman  0-582-30097-5  £20.99

In the section on How to Use this Handbook we singled out Mather (1999) and the ERDAS Field Guide as companion volumes for use with the Handbook but any one of the other texts listed in Table 1.1 would provide an adequate replacement for Mather (1999).

The aims of this introduction are to outline in simple terms, to coastal managers or decision makers unfamiliar with remote sensing, such fundamental concepts as what we mean by electromagnetic radiation (EMR), how EMR interacts with objects of coastal management interest on the Earth’s surface (e.g. mangrove forests, houses, shrimp ponds, roads, coral reefs, beaches), and how digital images are generated from these interactions by sensors on aircraft or satellites. Analogue images such as colour aerial photographs are not discussed here because most people are already sufficiently familiar with cameras to understand these techniques intuitively.

Electromagnetic radiation (EMR) 

EMR ranges from very high energy (and potentially dangerous) radiation such as gamma rays and X-rays, through ultra-violet light which is powerful enough to give us sunburn, visible light which we use to see, infra-red radiation part of which we can feel as heat, and microwaves which we employ in radar systems and microwave cookers, to radio waves which we use for communication (Figure 1.1). All types of EMR are waveforms travelling at the speed of light and can be defined in terms of either their wavelength (distance between successive wave peaks) or their frequency (measured in Hertz (abbreviated Hz), equivalent to wave cycles per second). Shorter wavelength radiation (infra-red or shorter) tends to be described in terms of its wavelength; thus visible light sensors will have their wavebands listed in nanometres or micrometres (see Table 1.2). For example, blue light has wavelengths of 455–492 nm, red has wavelengths of 622–780 nm, and thermal infra-red has wavelengths of 3–15 µm (Figure 1.1). Longer wavelength radiation (e.g. microwave radiation such as radar, and radio frequencies) is often described in terms of its frequency; for example, the synthetic aperture radar on the European Remote-Sensing Satellite operates at 5.3 GHz.
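Because all EMR travels at the speed of light, wavelength and frequency are interchangeable descriptions of the same radiation: wavelength multiplied by frequency equals the speed of light. The conversion can be sketched in a few lines of Python (the function names are ours, purely for illustration):

```python
# Speed of light in a vacuum, in metres per second
C = 299_792_458

def to_frequency(wavelength):
    """Frequency in Hz of EMR with the given wavelength in metres."""
    return C / wavelength

def to_wavelength(frequency):
    """Wavelength in metres of EMR with the given frequency in Hz."""
    return C / frequency

# The ERS synthetic aperture radar at 5.3 GHz has a wavelength of about 5.7 cm
print(round(to_wavelength(5.3e9) * 100, 1))  # 5.7

# Red light at 650 nm has a frequency of roughly 4.6 x 10^14 Hz
print(to_frequency(650e-9))
```

This is why the shorter visible and infra-red wavebands are quoted in nanometres or micrometres while radar systems are quoted in gigahertz: each is simply the more convenient end of the same conversion.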

Figure 1.1 The electromagnetic spectrum in the ultra-violet to microwave (0.3 nm to 100 cm) wavelength range to show atmospheric transmission windows, radiation absorbing gases which block transmission at specific wavelengths, and the wavebands of a range of remote sensing systems used in coastal management related applications. The numbered boxes after four of the satellite sensors indicate the standard waveband designations. MSS = Multispectral Scanner, TM = Thematic Mapper, SPOT = Satellite Pour l'Observation de la Terre, XS = the multispectral sensor on SPOT, Pan = the panchromatic sensor on SPOT, NOAA = National Oceanic and Atmospheric Administration, AVHRR = Advanced Very High Resolution Radiometer, SeaWiFS = Sea-viewing Wide Field-of-view Sensor (on SeaStar satellite), ERS SAR = European Remote Sensing satellite Synthetic Aperture Radar (in imaging mode). Full details of sensors and satellites can be found in Chapter 3.
 
Table 1.2 Some common prefixes for standard units of measurement.
Prefix Symbol Multiplier Common name
giga G 10⁹ billion (US)
mega M 10⁶ million
kilo k 10³ thousand
centi c 10⁻² hundredth
milli m 10⁻³ thousandth
micro µ 10⁻⁶ millionth
nano n 10⁻⁹ thousand millionth

Various types of sensor can measure incoming EMR in different parts of the electromagnetic spectrum but for most coastal management applications we are primarily interested in sensors measuring visible or near infra-red light reflected from the Earth’s surface or emitted thermal infra-red radiation, although radar wavelengths are providing increasingly useful data.

Visible and near infra-red sensors measure the amount of sunlight reflected by objects on the Earth’s surface, whereas thermal infra-red sensors detect heat emitted from the Earth’s surface (including surface waters). These sensors are known as passive sensors, as the radiation sources are the Sun and Earth respectively and the sensor does no more than passively measure the reflected or emitted radiation. Microwave (radar) sensors, by contrast, send out radar pulses (with wavelengths usually in the centimetre to tens of centimetre range, equivalent to frequencies in the GHz to tens of GHz range) and measure the echoes reflected back from the Earth’s surface and objects on it. Such sensors are termed active sensors as they also provide the source of the radiation being measured.

The atmosphere lets visible radiation through, except when cloudy, but water vapour, carbon dioxide and ozone molecules absorb EMR, particularly in the infra-red part of the spectrum (hence these molecules are ‘greenhouse’ gases), making the atmosphere opaque to certain wavelengths (Figure 1.1). Consequently satellite sensors are designed to sense in so-called transmission ‘windows’ where the atmosphere allows radiation through. Note in Figure 1.1 how the Landsat satellite’s Thematic Mapper sensors for bands 5, 6 and 7 make use of three such windows in the infra-red part of the spectrum. Microwave radiation has the advantage over visible light that it can be sensed through cloud but the disadvantage for coastal applications that it does not penetrate water.  

Interactions of electromagnetic radiation and the Earth’s surface 

To illustrate how a digital image is created we will consider a hypothetical sensor observing a small imaginary atoll in the Indian Ocean and consider the interactions of visible light with five types of habitat/object on the Earth’s surface (Figure 1.2, Plate 1). The image covers an area of 800 x 800 m. The hypothetical sensor is similar to our own eyes in that it has detectors which sense light in three different wavebands: blue, 450–480 nm; green, 530–560 nm; and red, 625–675 nm. For colour vision our eyes have three types of light-sensitive nerve cells called cones which have peak sensitivities at about 450 nm, 550 nm and 610 nm. The relative stimulation of each type of cone by reflected or emitted light allows us to perceive colour images of the world around us.

On our imaginary atoll each different type of habitat/object reflects or absorbs incident sunlight in a different way and will appear different to the blue, green and red detectors (Figure 1.2). For example, the deeper water appears relatively bright in the blue waveband, much darker in the green and very dark in the red. This is because blue light penetrates water best and is scattered back to the sensor whereas red light is rapidly absorbed in the top few metres. By contrast, the reddish roofed buildings appear bright in the red and very dark in both the green and blue wavebands. The red paint absorbs green and blue light but is a powerful reflector of red light. The whitish coral sand beach appears bright in all three wavebands. The green leaves of the trees in the forested area reflect a lot of green radiation and some in the blue but almost no red wavelengths. The shallow sandy lagoon area reflects considerable amounts of blue and green light but is still deep enough that most red light is absorbed in the water column, giving it a characteristic turquoise colour in the colour composite image.

Digital images  

The hypothetical sensor is a multispectral scanner which has a field-of-view such that at any moment in time it is viewing a 2 x 2 m square on the Earth’s surface and recording how much blue, green and red light is being reflected from that square. Each square is known as a picture element or pixel. Most current multispectral sensors have, for each waveband, a line of detectors, each viewing one pixel (in this case one 2 x 2 m square). Our hypothetical sensor views an 800 m (400 pixel) wide swath of the Earth’s surface in each of three wavebands and would thus need three rows of 400 detectors, one row measuring reflected blue light, one for green and one for red. As the platform carrying the sensor (which may be an aircraft or satellite) moves over the atoll it records the amount of light being reflected by line after line of 2 x 2 m squares which together make up the image (or picture). Each of the images in Figure 1.2 (Plate 1) is thus made up of 400 x 400 pixels, or 160,000 picture elements.
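In computing terms such an image is nothing more than a stack of two-dimensional grids of numbers, one grid per waveband. The structure can be sketched in plain Python (the dimensions match the hypothetical sensor; the variable names are ours):

```python
# The hypothetical image: 3 wavebands (blue, green, red), each a
# 400 x 400 grid of pixels, every pixel covering 2 x 2 m of ground.
N_ROWS, N_COLS, PIXEL_SIZE_M = 400, 400, 2

# One 2-D grid of digital numbers per waveband, initialised to 0 (black)
bands = {name: [[0] * N_COLS for _ in range(N_ROWS)]
         for name in ("blue", "green", "red")}

swath_width_m = N_COLS * PIXEL_SIZE_M  # 400 pixels x 2 m = 800 m swath
pixels_per_band = N_ROWS * N_COLS      # 160,000 picture elements

print(swath_width_m, pixels_per_band)  # 800 160000
```

In practice image processing software holds these grids as arrays, but the principle is the same: three numbers per pixel, one per waveband.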

For each pixel the amount of light reflected in the blue, green and red wavebands is recorded by detectors as a digital number (DN) which, for this sensor, is a number between 0 and 255, with zero (displayed as black) indicating no detectable reflectance and 255 (displayed as white) indicating the maximum level of reflectance recordable by the detector. Many satellite sensors use the same range of brightness levels (radiometric resolution) although others may use fewer (e.g. 64 brightness levels) or more (e.g. 1024 brightness levels). In Figure 1.3 (Plate 2), groups of 5 x 5 pixels have been merged together to allow the values of pixels in the inset area to be displayed more easily. The merged pixels are thus 10 x 10 m in size and the inset area covers a 130 x 130 m block.
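Radiometric resolution can be thought of as a simple quantization: a reflectance between 0 (none) and 1 (maximum) is scaled to the nearest of the sensor's available brightness levels. A hypothetical sketch of that scaling, not any particular sensor's actual encoding:

```python
def to_dn(reflectance, levels=256):
    """Quantize a reflectance in [0.0, 1.0] to a digital number (DN)
    in the range 0..levels-1, e.g. 0..255 for 8-bit data."""
    dn = round(reflectance * (levels - 1))
    return max(0, min(levels - 1, dn))  # clip to the recordable range

print(to_dn(0.0))             # 0   (displayed as black)
print(to_dn(1.0))             # 255 (displayed as white)
print(to_dn(0.5, levels=64))  # a 64-level sensor uses DNs 0..63
```

A sensor with 1024 levels would simply call the same scaling with `levels=1024`: more levels mean finer distinctions in brightness, at the cost of more data to store.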

The advantage of a digital image (i.e. an image made up of numbers) is that we can define objectively the spectral characteristics of different habitats and process the images using computers. For our hypothetical image the unique spectra which distinguish the five types of surface can be obtained from Figure 1.3 (Plate 2) and are given in the associated table. By empirically relating such spectra to known types of habitat at known positions during field survey, a computer can be used to classify all pixels in an image, creating a habitat map. In real life, an entire image may cover tens or hundreds of kilometres, which means that remote sensing effectively extends the results of limited field survey to much larger areas. In reality the spectra will be much less clear-cut than those shown in Figure 1.3 and there will be considerable variation in pixel DN values within individual habitat types; however, the essential concept holds true and forms the basis of image classification (Chapter 10). You will notice that four of our merged pixels in Figure 1.3 are composed of mixtures of two different habitats and do not fall into the five main classes. Such pixels are known as ‘mixels’ and are a particular problem with low spatial resolution digital images where individual pixels may cover two or more habitat types.
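The classification step can be sketched as a minimum-distance classifier: each pixel is assigned to the habitat whose reference spectrum it lies closest to. The reference DN values below are invented for illustration, not taken from Figure 1.3:

```python
import math

# Invented reference spectra as (blue, green, red) digital numbers,
# loosely following the qualitative descriptions in the text
REFERENCE_SPECTRA = {
    "deep water":   (60, 20, 5),     # bright blue, dark green, very dark red
    "sandy lagoon": (180, 160, 40),  # blue + green high, red absorbed
    "beach":        (220, 210, 200), # bright in all three wavebands
    "forest":       (40, 120, 15),   # mostly green reflectance
    "building":     (15, 20, 200),   # red roof: reflects red only
}

def classify(pixel):
    """Assign a (blue, green, red) pixel to the habitat whose
    reference spectrum is nearest by Euclidean distance."""
    return min(REFERENCE_SPECTRA,
               key=lambda habitat: math.dist(pixel, REFERENCE_SPECTRA[habitat]))

print(classify((58, 22, 6)))      # deep water
print(classify((210, 205, 190)))  # beach
```

A ‘mixel’ straddling two habitats would produce a spectrum lying between two reference spectra and could be assigned to either, which is precisely why mixed pixels are troublesome for classification.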

Conclusion  

This brief chapter has sought to introduce you to the basic concepts underlying remote sensing and digital images. These concepts, where needed for practical remote sensing of tropical coastal environments, will be expanded on in subsequent chapters but if you would like to delve deeper into the theoretical background you are referred to the texts listed in Table 1.1. This Handbook will concentrate on the practicalities of remote sensing of tropical coastal areas, as these have been relatively neglected; the theory is well-served elsewhere. The take-home message is that digital images are made up of numbers and to unlock their potential one needs to process the raw data on computers. Purchasing photographic prints derived from these digital images negates 90% of the advantages gained from being able to utilise such imagery.
