Spaceborne remote sensing data suffer from a variety of radiometric and geometric errors. If the data are not precisely corrected, these distortions diminish the accuracy of the extracted information and thereby reduce the utility of the data (Nag and Kudrat, 1998). Image processing techniques therefore play a major role in correcting images for such errors and improving the quality of recorded images. This chapter discusses the sources of errors in satellite images and the methods of correction, with the goal of producing a corrected satellite image that can be used in a variety of applications.
3.1 Digital image processing
Satellite remote sensing data in general, and digital data in particular, have been used as basic inputs for the inventory and mapping of natural resources of the Earth's surface in fields such as agriculture, soils, forestry and geology. The final product in most applications (classified outputs) is likely to complement or supplement the map (Reddy, 2008). No imaging system gives images of perfect quality, because of degradations caused by various factors (Chanda and Majumder, 2011). If the data are not precisely corrected, these distortions diminish the accuracy of the information extracted and thereby reduce the utility of the data (Nag and Kudrat, 1998).
In order to update and compile maps with high accuracy, satellite digital data have to be manipulated using image processing techniques (Reddy, 2008). Image processing is a technique used to enhance raw images received from cameras and sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications (Baboo and Devi, 2011).
The central idea behind digital image processing is that the digital image is fed into a computer one pixel at a time. The computer is programmed to insert these data into an equation, or a series of equations, and then store the results of the computation for each pixel (Reddy, 2008). Image processing is not a one-step process; there are several steps or procedures that must be performed one after the other until the data of interest are extracted from the observed scene (Jähne, 2005). Virtually all of these procedures may be categorized into one (or more) of the following four broad types of computer-assisted operations: image rectification and restoration, image enhancement, image classification, and data merging (Chan, 2011).
The field of Digital Image Processing (DIP) continues to grow at a very rapid rate. This growth is spurred by improvements in the speed of computers and the consequent reduction in the cost of computing power. Combined with technological advances in detectors and electronics (particularly in analog-to-digital converters), these improvements now put digital images at every user's fingertips (Schott, 2007).
3.2 Image restoration
Image rectification and restoration are required to correct distorted or degraded image data. This is the initial processing of raw image data to correct geometric distortions, to calibrate the data radiometrically and to eliminate noise present in the data (Chandra, 2007). Geometric distortion can be introduced into the image by sensor operation, orbital geometry and earth geometry. Examples include the altitude, latitude and velocity of the platform, earth curvature, atmospheric refraction and nonlinearities in the sensor field of view; all of these contribute to distortion. These distortions can be systematic or random, and several techniques exist to correct for them.
Radiometric correction can be used to address problems caused by scene illumination, atmospheric conditions, viewing geometry and instrument response. Sun elevation correction can account for the seasonal position of the sun relative to the earth, and noise removal can be used to correct striping, banding and non-systematic variations that give images a snowy appearance. Thus the nature of any particular image restoration process is highly dependent upon the characteristics of the sensor used to acquire the image data (Chan, 2011). This correction of deficiencies and removal of flaws present in the data is termed pre-processing because, quite logically, such operations are carried out before the data are used for a particular purpose.
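As a concrete illustration of the sun elevation correction mentioned above, one common convention divides each pixel value by the sine of the solar elevation angle, normalizing scenes as if the sun were at the zenith. Below is a minimal Python sketch; the exact normalization convention varies between processing systems, and the function name is illustrative rather than taken from any cited source:

```python
import numpy as np

def sun_elevation_correction(dn, sun_elev_deg):
    """Normalize scene brightness for the seasonal sun position by
    scaling each DN by 1/sin(solar elevation). Scenes acquired under
    a low sun are brightened so multi-date images become comparable."""
    return dn / np.sin(np.radians(sun_elev_deg))
```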
It is difficult to decide what should be included under the heading of 'pre-processing', since the definition of what is, or is not, a deficiency in the data depends to a considerable extent on the use to which those data are to be put. If, for instance, a detailed map of the distribution of particular vegetation types or a bathymetric chart is required, then the geometrical distortion present in an uncorrected remotely-sensed image will be considered a significant deficiency. On the other hand, if the purpose of the study is to establish the presence or absence of a particular class of land-use (such as irrigated areas in an arid region), then a visual analysis of a suitably-processed false-colour image will suffice and, because the study is concerned with determining the presence or absence of a particular land-use type rather than its precise location, the geometrical distortions in the image will be seen as being of secondary importance (Mather, 2005).
It should also be emphasized that, although certain preprocessing procedures are frequently used, there can be no definitive list of 'standard' preprocessing steps, because each project requires individual attention, and some preprocessing decisions may be a matter of personal preference. Furthermore, the quality of image data varies greatly, so some data may not require the preprocessing that would be necessary in other instances. Preprocessing also alters image data; although such changes may be assumed to be beneficial, the analyst should remember that preprocessing may create artifacts that are not immediately obvious. As a result, the analyst should tailor preprocessing to the data at hand and the needs of the specific project, using only those preprocessing operations essential to obtain a specific result (Campbell and Wynne, 2011).
3.3 Geometric correction
Remotely-sensed images are not maps. Frequently, however, information extracted from remotely-sensed images is integrated with map data in a GIS system or presented to consumers in a map-like form. If images from different sources are to be integrated or if pairs of images are to be used to develop digital elevation models (DEMs) then the images from these different sources must be expressed in terms of a common coordinate system. The transformation of a remotely-sensed image so that it has the scale and projection properties of a given map projection is called geometric correction (Mather and Koch, 2011).
Various terms are used to describe geometric correction of imagery, and it is worthwhile defining them:
3.3.1 Registration
Image registration is the process of spatially aligning two or more images of the same scene obtained at different times or from different sensors. This basic processing step is an important prerequisite for many image analysis applications, such as change detection, object identification and image classification, and it is a critical component of remote sensing, medical image analysis and industrial imaging (Panigrahi et al., 2011).
3.3.2 Rectification
Rectification is the process of geometrically correcting an image so that it can be represented on a planar surface and conform to other images or to a map (Figure 3-1). That is, it is the process by which the geometry of an image is made planimetric. It is necessary whenever accurate area, distance and direction measurements are to be made from the imagery, and it is achieved by transforming the data from one grid system into another using a geometric transformation (Chanagala et al., 2012).
Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be geo-referenced, which is a much simpler process than rectification (Kumar, 2003).
Figure 3-1: Image rectification. (A, B) Input and reference images with GCP locations; (C) the grids are fitted together using polynomial equations; (D) the output grid pixel values are assigned using a resampling method (Chanagala, 2012).
The rectification procedure can be divided into two steps: modeling and resampling. During the modeling phase, a priori data and ground control points are used to establish a mathematical model that relates each raw image pixel to the desired coordinate system. Intensity values for the pixel locations in the output system are then calculated by the resampling process (Westin, 1990).
The process of the geometric rectification of remote-sensing imagery proceeds as follows: (1) Determine an appropriate geometric processing model between the image-space coordinates and the object-space coordinates according to the imaging mode of the remote-sensing imagery; (2) Confirm the geometric rectification formulas according to the above geometric processing models; (3) Implement adjustments to solve the model parameters according to the coordinates of the GCPs and the corresponding image points, and evaluate the accuracy; (4) Implement the geometric transformation and resampling of the original image (Liang et al., 2012).
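As an illustration of steps (1) to (3), the sketch below fits a first-order (affine) polynomial model from GCP pairs by least squares and reports the RMS error over the control points. NumPy and the function names are assumptions for this sketch, not part of the cited procedure; real systems often use higher-order polynomials or rigorous sensor models.

```python
import numpy as np

def fit_affine_model(img_xy, map_xy):
    """Fit a first-order polynomial (affine) model by least squares.

    img_xy : (N, 2) array of GCP positions in the raw image (col, row)
    map_xy : (N, 2) array of the same GCPs in map coordinates (X, Y)
    Returns coefficient vectors (a, b) such that
        col = a0 + a1*X + a2*Y
        row = b0 + b1*X + b2*Y
    """
    X, Y = map_xy[:, 0], map_xy[:, 1]
    A = np.column_stack([np.ones_like(X), X, Y])          # design matrix
    a, *_ = np.linalg.lstsq(A, img_xy[:, 0], rcond=None)  # column model
    b, *_ = np.linalg.lstsq(A, img_xy[:, 1], rcond=None)  # row model
    return a, b

def rms_error(img_xy, map_xy, a, b):
    """Step (3): accuracy evaluation as the RMS residual over the GCPs."""
    X, Y = map_xy[:, 0], map_xy[:, 1]
    d_col = a[0] + a[1] * X + a[2] * Y - img_xy[:, 0]
    d_row = b[0] + b[1] * X + b[2] * Y - img_xy[:, 1]
    return np.sqrt(np.mean(d_col ** 2 + d_row ** 2))
```

A second-order model would simply extend the design matrix with the terms X*Y, X**2 and Y**2, at the cost of requiring more GCPs.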
3.3.3 Resampling
Resampling is the process of assigning digital numbers (DNs) to the pixels in an image that has been spatially transformed by geometric correction, using the digital numbers of the original (untransformed) image as input (Rees, 1999). A number of different resampling schemes can be used to assign the appropriate DN to an output cell or pixel. To illustrate this, consider the shaded output pixel shown in Figure 3-2. The DN for this pixel could be assigned simply on the basis of the DN of the closest pixel in the input matrix, disregarding the slight offset. In our example, the DN of the input pixel labeled a would be transferred to the shaded output pixel. This approach is called nearest neighbor resampling. It offers the advantage of computational simplicity and avoids altering the original input pixel values; however, features in the output matrix may be offset spatially by up to one-half pixel, which can cause a disjointed appearance in the output image product (Lillesand et al., 2008). Each such pixel is transferred to the corresponding display grid location. This is the preferred technique if the new image is to be classified, since the image then consists of the original pixel brightnesses, simply rearranged in position to give correct image geometry (Richards and Jia, 2006).
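A minimal sketch of nearest-neighbour resampling, under the assumption that an inverse mapping (such as the affine model sketched earlier) converts each output pixel position to a fractional position in the input image; the names are illustrative, and for simplicity the output grid is taken to be the same size as the input:

```python
import numpy as np

def resample_nearest(src, inv_map):
    """Nearest-neighbour resampling: each output pixel takes the DN of
    the closest input pixel, so the original DNs are preserved."""
    rows, cols = np.indices(src.shape)    # output grid positions
    in_r, in_c = inv_map(rows, cols)      # fractional input positions
    in_r = np.clip(np.rint(in_r).astype(int), 0, src.shape[0] - 1)
    in_c = np.clip(np.rint(in_c).astype(int), 0, src.shape[1] - 1)
    return src[in_r, in_c]
```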
Figure 3-2: Matrix of geometrically correct output pixels superimposed on a matrix of original distorted input pixels (Lillesand et al., 2008).
More sophisticated methods of resampling evaluate the values of several pixels surrounding a given pixel in the input image to establish a 'synthetic' DN to be assigned to its corresponding pixel in the output image. The bilinear interpolation technique takes a distance-weighted average of the DNs of the four nearest pixels (labeled a and b in the distorted-image matrix in Figure 3-2). This process is simply the two-dimensional equivalent of linear interpolation. The technique generates a smoother-appearing resampled image; however, because the process alters the gray levels of the original image, problems may be encountered in subsequent spectral pattern recognition analyses of the data. Because of this, resampling is often performed after, rather than prior to, image classification procedures. An improved restoration of the image is provided by the bicubic interpolation, or cubic convolution, method of resampling. In this approach, the transferred synthetic pixel values are determined by evaluating the block of 16 pixels in the input matrix that surrounds each output pixel (labeled a, b, and c in Figure 3-2). Cubic convolution resampling avoids the disjointed appearance of the nearest neighbor method and provides a slightly sharper image than the bilinear interpolation method. Again, this method alters the original image gray levels to some extent, and other types of resampling can be used to minimize this effect (Lillesand et al., 2008).
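For comparison, here is a sketch of the bilinear case, again assuming fractional input positions in_r, in_c have already been computed for every output pixel; the cubic convolution case follows the same pattern over a 4 x 4 block with cubic weights:

```python
import numpy as np

def resample_bilinear(src, in_r, in_c):
    """Bilinear resampling: a distance-weighted average of the four
    nearest input pixels. The output DNs are 'synthetic' -- the
    original grey levels are altered, as noted in the text."""
    src = src.astype(float)
    r0 = np.clip(np.floor(in_r).astype(int), 0, src.shape[0] - 2)
    c0 = np.clip(np.floor(in_c).astype(int), 0, src.shape[1] - 2)
    dr, dc = in_r - r0, in_c - c0                  # fractional offsets
    top = (1 - dc) * src[r0, c0] + dc * src[r0, c0 + 1]
    bot = (1 - dc) * src[r0 + 1, c0] + dc * src[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bot
```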
3.3.4 Orthorectification
Orthorectification is the process of correcting an image, pixel by pixel, for topographic distortion. The result, in effect, is that every pixel appears to view the earth from directly above, i.e., the image is in an orthographic projection (Schowengerdt, 2006).
The orthorectification of aerial and satellite images is an economical method of producing geoinformation by transforming a central perspective into an orthogonal projection. A frame aerial image constitutes one central perspective of the terrain, while for linear array images each line of each scene is a different central perspective. The result of the orthorectification, in both cases, is an image with homogeneous scale, where objects lie in the correct planimetric position, and with the same information density as the original aerial image. This image is called an orthoimage and is geometrically a map-like product.
Figure 3-3 illustrates the required sequence of analytical transformations for orthoimage generation. The output image pixels are associated with the object coordinate system. A pixel (X, Y), still without information (colour), is projected onto the DTM, which yields a Z value. The terrain point (X, Y, Z) is then transformed by means of the collinearity equations to the input aerial image, whose interior and exterior orientations are known. From the neighbouring pixels of the calculated image position (x, y), an interpolated grey value g_int is calculated and then attributed to the initial pixel (X, Y) of the output image (Xu, 2013).
Figure 3-3: Orthoimage generation (Xu, 2013).
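The back-projection loop described above can be summarized in a few lines. The DTM lookup, the collinearity transform and the interpolator are passed in as callables because they depend on the sensor model; everything here is a conceptual sketch, not any particular package's API:

```python
def orthorectify_pixel(X, Y, dtm, collinearity, src, interpolate):
    """Compute one output pixel of an orthoimage (conceptual sketch).

    dtm          : callable returning terrain height Z at (X, Y)
    collinearity : callable mapping (X, Y, Z) to input image (x, y),
                   using the known interior/exterior orientations
    interpolate  : resampling function returning the interpolated
                   grey value g_int at fractional position (x, y)
    """
    Z = dtm(X, Y)                  # project the output pixel onto the DTM
    x, y = collinearity(X, Y, Z)   # transform to the input aerial image
    return interpolate(src, x, y)  # grey value attributed to (X, Y)
```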
Why does the geometric correction process seem more important today than before? In 1972, the impact of geometric distortions was quite negligible, for several reasons:
- The images, such as Landsat-MSS, were nadir-viewing and the resolution was coarse (around 80-100 m);
- The products resulting from the image processing were analog, on paper;
- The interpretation of the final products was performed visually; and
- The fusion and integration of multi-source and multi-format data did not exist (Toutin, 2004).
Now, the impacts of distortions, although they are similar, are less negligible because:
- The images are off-nadir viewing and the resolution is fine (sub-meter level);
- The products resulting from image processing are fully digital;
- The interpretation of the final products is performed on computer;
- The fusion of multi-source images (different platforms and sensors) is in general use; and
- The integration of multi-format data (raster/vector) is a general tendency in geomatics (Wulder and Franklin, 2003).
3.3.4.1 True Orthoimage
The quality of an orthoimage depends on the resolution of the image, on the accuracy of the orientation parameters and on the quality of the digital elevation model used in the rectification algorithm. Normal orthoimages are produced using a DTM, correcting the images for the radial distortion caused by relief at ground level. In urban areas, however, an orthoimage produced in this way seldom yields a satisfying product, because elevated objects may appear to lean more or less, depending on their position relative to the image centre in the original image. Only the ground is imaged at the correct planimetric position, not the tops of elevated objects, and there is no ground information 'under' the leaning buildings. Wider objective angles and lower flying heights increase the leaning-buildings effect (Xu, 2013).
3.4 Geometric errors
Geometric errors can be grouped into two main categories: (1) systematic errors and (2) unsystematic errors (Jaber, 2006). Systematic errors are due to image motion caused by the forward movement of the aircraft or spacecraft, variations in mirror scanning rate, panoramic distortions, variations in platform velocity and distortions due to the curvature of the Earth (Khorram et al., 2012). They are usually corrected at the satellite ground stations, before the remotely sensed data are distributed to the public, using information from the platform ephemeris (information about the geometric characteristics of the sensor and the Earth at data acquisition) and knowledge of internal sensor distortion (Jaber, 2006). Non-systematic errors, on the other hand, are mainly caused by variation through time in the position and attitude angles (roll, pitch and yaw) of the satellite platform. Without accurate sensor platform orientation parameters, these errors can only be corrected with the use of Ground Control Points (GCPs) and a suitable precision photogrammetric or empirical model (Finkl, 2013).
To appreciate why geometric distortion occurs, it is necessary to see how an image is formed from sequential lines of image data. If a particular sensor records L lines of N pixels each, then it is natural to form the image by laying the L lines down successively one under the other. If the IFOV of the sensor has an aspect ratio of unity (the pixels are the same size along and across the scan), then this is the same as arranging the pixels for display on a square grid, as shown in Figure 3-4. The grid intersections are the pixel positions, and the spacing between the grid points is equal to the sensor's IFOV (Richards, 2012).
Figure 3-4: Display grid commonly used to build up an image from the digital data stream of pixels generated by a sensor (Richards and Jia, 2006).
3.4.1 Earth Rotation Effects
The earth rotates beneath the sensor as it scans the terrain, so there is a gradual westward shift of the ground swath being scanned. This causes along-scan distortion (Sahu, 2007). Therefore, if the lines of image data recorded were arranged for display in the manner of Figure 3-4, the later lines would be erroneously displaced to the east in terms of the terrain they represent. To give the pixels their correct positions relative to the ground, it is necessary to offset the bottom of the image to the west by the amount the ground moves during image acquisition, with all intervening lines displaced proportionately, as depicted in Figure 3-5. The amount by which the image has to be skewed to the west at the end of the frame depends upon the relative velocities of the satellite and the earth and the length of the image frame recorded (Richards, 2012).
Figure 3-5: The effect of earth rotation on scanner imagery: (a) lines arranged on a square grid; (b) successive lines offset to the west to correct for the rotation of the earth's surface during the frame acquisition time (Richards and Jia, 2006).
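The size of that westward skew can be estimated from first principles: the ground under the satellite moves east at roughly omega * R * cos(latitude), and the total offset is that speed times the frame acquisition time. A back-of-the-envelope sketch, with nominal Landsat-like numbers that are assumptions rather than values from the cited sources:

```python
import math

OMEGA_E = 7.292115e-5    # Earth rotation rate (rad/s)
R_E = 6.378e6            # equatorial radius (m)

def frame_skew_m(lat_deg, frame_time_s):
    """First-order westward offset of the last image line; ignores the
    small contribution of the orbit inclination (a simplification)."""
    east_speed = OMEGA_E * R_E * math.cos(math.radians(lat_deg))
    return east_speed * frame_time_s

# A 185 km frame acquired in about 28 s at 40 degrees latitude:
print(f"{frame_skew_m(40.0, 28.0) / 1000:.1f} km")   # roughly 10 km
```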
3.4.2 Panoramic distortion
For scanners used on spacecraft and aircraft remote sensing platforms, the angular IFOV is constant. As a result, the effective pixel size on the ground is larger at the extremities of the scan than at nadir, as illustrated in Figure 3-6 (Sharma and Binda, 2007). When the data are arranged to form an image, as in Figure 3-4, the pixels are all written as spots of the same size on a photographic emulsion, or displayed at the same pixel size on a colour display device. The displayed pixels are therefore equal in size across the scan line, whereas the equivalent ground areas covered are not. This gives a compression of the image data towards its edges (Richards, 2012).
Figure 3-6: Effect of scan angle on pixel size at constant angular instantaneous field of view (Richards and Jia, 2006).
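The geometry behind Figure 3-6 is easy to quantify: with a constant angular IFOV beta and platform altitude h, the ground pixel measures approximately h*beta at nadir, but h*beta/cos^2(theta) along-scan and h*beta/cos(theta) across-scan at scan angle theta off nadir. A small sketch under those standard assumptions (the example numbers are illustrative, not from the cited sources):

```python
import math

def ground_pixel_size(h_m, beta_rad, theta_deg):
    """Along-scan and across-scan ground pixel dimensions for a scanner
    with constant angular IFOV at scan angle theta off nadir."""
    theta = math.radians(theta_deg)
    along = h_m * beta_rad / math.cos(theta) ** 2
    across = h_m * beta_rad / math.cos(theta)
    return along, across

beta = 80.0 / 705e3                        # IFOV giving 80 m at nadir
print(ground_pixel_size(705e3, beta, 0))   # (80.0, 80.0) at nadir
print(ground_pixel_size(705e3, beta, 45))  # about (160, 113) at the edge
```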
3.4.3 Earth's curvature
Aircraft scanning systems are not affected by earth curvature because of their low altitude; neither are space systems such as Landsat and SPOT, because of the narrowness of their swaths (Richards and Jia, 2006). Satellite sensors with a wide swath and a constant field of view effectively observe a planar patch of the earth's surface, but at the swath boundaries the pixels are stretched and contracted by earth curvature, and as a result the image is distorted (Chanda and Majumder, 2011).
Figure 3-7: Effect of earth curvature on the size of a pixel in the scan direction (Gomarasca, 2009).
3.4.4 Variations in platform altitude, velocity and attitude
A remote sensor is designed to operate at a certain altitude and velocity combination. Variations in these parameters produce geometric distortion in the images or over-/under-coverage. They are typically non-systematic. The geometric distortions depend on the type of sensor (Gupta, 2003).
3.4.4.1 Variations in platform altitude
Variation in the altitude of the satellite results in a change in geometric resolution. Departure of the sensor platform from its nominal altitude, or an increase in terrain elevation, produces scale distortions in the remotely sensed data. This in turn results in a change in swath, and hence in scale, in the across-track direction. This error is compensated for by shifting pixels in the opposite direction (Sahu, 2007).
3.4.4.2 Variations in platform velocity
Variation in altitude causes variation in the velocity of the spacecraft (Sahu, 2007). Platform velocity variations can change the line spacing or create line gaps or overlaps; they cause distortions only in the along-track direction (Weng, 2011). If the spacecraft velocity departs from its nominal value, the ground track covered by a fixed number of successive mirror sweeps changes. This results in a change of resolution along-track, and hence a change in scale in the along-track direction (Sahu, 2007).
3.4.4.3 Variations in platform attitude
One of the most important parameters governing the geometric quality of remote sensing images is the orientation of the optical axis. When the optic axis is vertical, the image data have high geometric fidelity, and many sensors are designed to operate in this mode (Gupta, 2003). When the spacecraft departs from this normal position, geometric distortion is introduced into the remote sensing data. This type of distortion is uncertain and unpredictable (Nag and Kudrat, 1998). Sensor platform instability may thus lead to angular distortions. Any angular distortion can be resolved into three components: pitch, roll and yaw (Figure 3-8) (Gupta, 2003).
Figure 3-8: Schematic of pitch, roll, and yaw distortions. (a) is the nominal ground coverage; (b), (c) and (d) show the pitch, roll and yaw distortions in photographs (interframe type); (e), (f) and (g) show the same in scanner image data (intraframe type) (Gupta, 2003).
The roll is the rotation around the flight vector, i.e. in a "wing down" direction; its variation causes lateral shifts and scale changes in the across-track direction (Weng, 2011). A positive roll shifts the pixels to the right. Pitch error is due to rotation of the spacecraft about the line perpendicular to the direction of motion, which shifts the pixels in the along-track direction; a positive pitch angle shifts the image in the direction of motion of the satellite (Figure 3-8) (Sahu, 2007). The yaw is the rotation around the vertical axis, and its variation changes the orientation of each scanned line, resulting in a skew between the scan lines (Weng, 2011).
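For completeness, the three attitude components combine into a single rotation applied to the sensor look vector. The cited texts do not give a formula, so the rotation order below (yaw after pitch after roll) is one common aerospace convention and an assumption of this sketch:

```python
import numpy as np

def attitude_matrix(roll, pitch, yaw):
    """Combined rotation for platform attitude errors (angles in rad).
    Roll is about the along-track axis, pitch about the across-track
    axis and yaw about the vertical axis."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])    # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])    # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])    # yaw
    return Rz @ Ry @ Rx
```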
While these variations can be described mathematically, at least in principle, the knowledge of the platform ephemeris is required to enable their magnitudes to be computed. In the case of satellite platforms ephemeris information is often telemetered to ground receiving stations. This can be used to apply corrections before the data is distributed (Richards and Jia, 2006).
3.5 Radiometric errors
The pixel value recorded at an image point is the average reflected brightness from a specific region on the earth. The reflected intensity always travels through an atmospheric medium, and the consequent distortion in the pixel value is termed radiometric distortion (Chanda and Majumder, 2011).
Distortions caused by the radiometric errors listed in Table 3-1 affect the radiometric values of the image pixels, inducing a non-representative distribution of the spectral band brightness. This distortion, which can be eliminated or reduced by radiometric preprocessing, depends on:
- Errors introduced by sensor malfunction during acquisition;
- Geometric characteristics of the acquisition system and the sun position, in particular its inclination as measured by the zenith angle;
- The atmospheric layer between the sensor and the detected scene, which affects the spectral responses and decreases the contrast in the scenes. The atmosphere influences remotely sensed data by acting as a barrier, limiting electromagnetic wave propagation through absorption, and by acting as an unreal source of energy through scattering (Gomarasca, 2009).
Table 3-1: Radiometric effects (Gomarasca, 2009).

Error        Cause                    Type of distortion
Radiometric  Sensor                   - Radiometric calibration of the sensor.
                                      - Anomalies in the scan (line striping effect).
             Geometry of the system   - Effect of the sun elevation angle.
                                      - Soil inclination (leaning).
             Atmosphere               - Radiation absorption (subtractive).
                                      - Atmospheric diffusion (additive).
3.6 Noise removal
One of the most important considerations in the processing of digital images is noise, which is irrelevant or meaningless data. Noise in a remotely sensed image refers to any unwanted signal or disturbance introduced by the sensor response and the data recording process, independent of the scene signal. Noise can also consist of random or repetitive events that obscure or interfere with the desired information. It creates inconsistencies in image brightness values, or DNs, that may limit the ability to interpret or quantitatively process and analyze digital remotely sensed images, and it can either degrade or totally mask the true radiometric information content of a digital image. In all image processing systems, one must consider how much of the detected signal can be regarded as true and how much is associated with random background events arising from either the detection or the transmission process.
When a digital image is recorded by the sensor on a satellite or aircraft, it may contain noise-induced errors in the measured brightness values of the pixels. Additive noise in image data is of basically two types: random and non-random noise (Sahu, 2007).
3.6.1 Random noise removal
Random noise is characterized by nonsystematic variations in gray levels from pixel to pixel, called bit errors or shot noise (because the noise makes the image appear as if shot by a shotgun). Such noise is often referred to as being "spiky" in character, and it gives images a salt-and-pepper or snowy appearance. Bit errors are handled by recognizing that noise values normally change much more abruptly than true image values. Thus, noise can be identified by comparing each pixel in an image with its neighbors. If the difference between a given pixel value and its surrounding values exceeds an analyst-specified threshold, the pixel is assumed to contain noise; the noisy pixel value can then be replaced by the average of its neighboring values. Moving neighborhoods or windows of 3 x 3 or 5 x 5 pixels are typically used in such procedures. Figure 3-9 illustrates the result of applying such a noise reduction algorithm (Lillesand et al., 2008).
Figure 3-9: Result of applying a noise reduction algorithm: (a) original image data with noise-induced "salt-and-pepper" appearance; (b) image resulting from application of the noise reduction algorithm (Lillesand et al., 2008).
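A minimal version of that neighbourhood test, assuming an analyst-chosen threshold; SciPy's uniform filter supplies the moving-window average (which, as a simplification in this sketch, includes the centre pixel itself):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_shot_noise(img, threshold, size=3):
    """Replace pixels that differ from their local window mean by more
    than the analyst-specified threshold with that mean (3x3 default)."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size=size)   # moving-window average
    noisy = np.abs(img - local_mean) > threshold  # flag suspect pixels
    out = img.copy()
    out[noisy] = local_mean[noisy]
    return out
```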
3.6.2 Non-random noise removal
(1) Scan line or periodic line dropouts: In a digital image, a single scan line may be defective, caused by the failure of one of the detectors or by a sensor transmission or recording problem. At a regular interval the defective scan line yields a string of zeros, appearing as a black line in the image in a systematic pattern; this is called a scan line or periodic line dropout. The restoration process involves calculating the average DN value for each scan line in the whole scene. The average DN value of every scan line is compared with the average DN value of the entire scene, and a scan line whose value falls below the average by more than a specified amount is defined as defective. After identification of the defective line, the correction is made by computing the average DN values of the scan lines immediately above and below the defective line and inserting these averages into the defective line. After correction, the image with this artificial data is much improved. This type of correction with artificial data is known as cosmetic correction (Choudhury et al., 2008).
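A sketch of that cosmetic correction; since the text only says the defective line falls below the average "by more than a specified amount", the defect criterion is expressed here as an illustrative fraction of the scene mean:

```python
import numpy as np

def repair_line_dropouts(img, tol=0.5):
    """Replace defective scan lines (mean DN more than `tol` of the
    scene mean below the scene mean) with the average of the lines
    immediately above and below them."""
    img = img.astype(float)
    scene_mean = img.mean()
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        if img[r].mean() < scene_mean * (1.0 - tol):
            out[r] = 0.5 * (img[r - 1] + img[r + 1])
    return out
```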
(2) Striping or banding is a systematic noise type related to sensors that sweep multiple scan lines simultaneously. It stems from variations in the response of the individual detectors used within each band. For example, the radiometric response of one of the six detectors of the early Landsat MSS sensor tended to drift over time (Figure 3-10, left), resulting in relatively higher or lower values along every sixth line in the image data. A common way to destripe an image is the histogram method (Clevers, 2006).
Figure 3-10: Examples of image noise showing (left) the striping effect for Landsat MSS and (right) dropped lines for Landsat TM (CCRS, 2007).
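A linear mean/standard-deviation adjustment is a common simplification of the histogram method; the sketch below rescales the lines of each detector so that their statistics match those of the whole image (six detectors, matching Landsat MSS; a full histogram-matching variant would equalize the cumulative histograms instead):

```python
import numpy as np

def destripe(img, n_detectors=6):
    """Rescale every n-th line (one detector's lines) so its mean and
    standard deviation match the whole image's statistics."""
    img = img.astype(float)
    out = img.copy()
    target_mean, target_std = img.mean(), img.std()
    for d in range(n_detectors):
        lines = img[d::n_detectors]
        m, s = lines.mean(), lines.std()
        if s > 0:   # guard against a constant (e.g. all-zero) detector
            out[d::n_detectors] = (lines - m) * (target_std / s) + target_mean
    return out
```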