
Image processing and data Interpretation

Mahesh Pal NIT Kurukshetra

Remote Sensing: Definition


To acquire data about an object without being in contact with it. The art, science, and technology of obtaining reliable information about physical objects and the environment, through the process of recording, measuring, and interpreting imagery and digital representations of energy patterns derived from non-contact sensor systems (Colwell, 1997, ASPRS)

Remote sensing system


Different stages in remote sensing:
1. Source of electromagnetic energy.
2. Transmission of energy through the atmosphere from the source to the earth.
3. Energy interaction with the earth (reflection, absorption, transmission).
4. Transmission of reflected/emitted energy towards the sensor.
5. Detection of energy by the sensor and its conversion into an electrical output or image.
6. Transmission/recording of the sensor output.
7. Pre-processing of the data and production of the data product.
8. Collection of ground truth and other information.
9. Data analysis and interpretation.
10. Integration of interpreted images with other data for the final application.

Fundamentals of Remote Sensing by George Joseph, 2005 (source: NPTEL, IIT Kanpur)

Electromagnetic spectrum

Name                              Wavelength (µm)
Optical wavelength                0.30 - 15.0
(a) Reflective portion            0.4 - 3.0
    (i) Visible                   0.4 - 0.7
    (ii) Near IR                  0.7 - 1.30
    (iii) Middle IR               1.30 - 3.0
(b) Far IR (Thermal, Emissive)    7.00 - 15.0

Creating Landsat Images from Raw Data: San Francisco - Oakland

Microwave region

Remote sensing data


Data collected by sensors onboard space, airborne, or terrestrial platforms are available in the form of digital images. The received data are processed to derive useful information about earth features. To interpret these images, suitable correction, enhancement, and classification techniques are used. A typical image interpretation may involve manual and digital (computer-assisted) procedures.

Concept of digital Image


A digital image is an array of numbers: a file containing values that constitute grey-level or digital number (DN) values. An image is represented as a matrix of rows and columns. An image is a pictorial representation of the pattern of a landscape, composed of elements: indicators of things and events that reflect the physical, biological, and cultural components of the landscape. Similar conditions in similar surroundings produce similar patterns, and unlike conditions produce unlike patterns.
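As a minimal illustration of the image-as-matrix idea, the sketch below builds one band as a NumPy array of made-up DN values (the numbers are invented purely for illustration):

```python
import numpy as np

# One band of a digital image: a matrix of DN values (rows x columns).
band = np.array([[ 12,  40,  41,  38],
                 [ 15, 120, 118,  36],
                 [ 14, 122, 125,  35],
                 [ 13,  39,  37,  34]], dtype=np.uint8)

print(band.shape)   # (4, 4): 4 rows and 4 columns
print(band[1, 1])   # DN value of the pixel at row 1, column 1 -> 120
```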

A sample image of Landsat ETM+ data

Each DN in this digital image corresponds to one small area of the visual image and gives the level of darkness or lightness of that area.
The higher the DN value, the brighter the area. Hence a value of zero represents perfect black, the maximum value represents perfect white, and intermediate values are shades of grey.

Band 1 pixel values

Landsat ETM+ image of Littleport in Cambridgeshire (England), acquired on 19 June 2000. Band combination 4,3,2

Radar intensity image of same study area

Digital image processing


Digital image processing involves the manipulation and interpretation of digital images with the use of computer algorithms. It is an extremely broad subject involving substantial mathematical computation. Digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data.

Pre-processing

The operations involved in pre-processing aim to correct the degraded (raw) image acquired from the sensor so as to create a more faithful representation of the original scene. These operations are called pre-processing because they normally precede further manipulation and analysis of the image data to extract specific information.

Radiometric correction
Atmospheric correction
Geometric correction

Radiometric correction
Radiometric corrections modify DN values in order to account for noise, that is, contributions to the DN that are a function not of the feature being sensed but of:
The intervening atmosphere
The sun-sensor geometry
The sensor itself (errors and gaps)

Radiometric correction

Missing lines
Striping
Illumination and view angle effects
Sensor calibration
Terrain effects

Missing scan line


Occurs due to errors in the scanning or sampling equipment of the sensor; pixel values are missing along these lines. The easiest correction method is to replace each missing value with the corresponding pixel on the immediately preceding scan line.

Source: earthobservatory.nasa.gov
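A minimal sketch of that simple fix, assuming missing scan lines show up as all-zero rows (an assumption made for illustration; real products may flag dropped lines differently):

```python
import numpy as np

def fill_missing_lines(band):
    """Replace each missing scan line with the immediately preceding line."""
    fixed = band.copy()
    for r in range(1, fixed.shape[0]):
        if not fixed[r].any():        # whole row is zero -> treated as missing
            fixed[r] = fixed[r - 1]   # copy the previous scan line
    return fixed
```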

Striping or banding
Occurs due to non-identical detector response: a detector of an electromechanical sensor goes out of adjustment and provides readings less than or greater than the other detectors for the same band over the same ground cover. Several methods have been proposed to remove this error. Linear method: assumes a linear relation between input and output. Histogram matching: a histogram of each line is created; stripes are characterized by distinct histograms.
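A hedged sketch of the linear method, assuming the sensor's detectors record alternate scan lines (six detectors per band is an assumption, typical of MSS/TM-class scanners); each detector's lines are rescaled so their mean and standard deviation match those of the whole image:

```python
import numpy as np

def destripe_linear(band, n_detectors=6):
    """Linear destriping: rescale each detector's scan lines to the global
    image statistics. Assumes detector d recorded rows d, d+n, d+2n, ..."""
    out = band.astype(float).copy()
    g_mean, g_std = out.mean(), out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors]          # every n-th scan line
        m, s = lines.mean(), lines.std()
        if s > 0:
            out[d::n_detectors] = (lines - m) * (g_std / s) + g_mean
    return out
```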

Transmission striping in Landsat 2 MSS (red, green and blue values)

Illumination and View angle correction


The position of the sun relative to the earth changes depending on the time of day and the day of the year. In the northern hemisphere the solar elevation angle is smaller in winter than in summer. An absolute correction involves dividing the DN value in the image data by the sine of the solar elevation angle.
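In code, that absolute correction is a one-liner; a sketch (the 25-degree angle in the usage note is just an example value):

```python
import numpy as np

def sun_elevation_correction(band, solar_elevation_deg):
    """Divide DNs by the sine of the solar elevation angle."""
    return band.astype(float) / np.sin(np.radians(solar_elevation_deg))

# e.g. for a winter scene acquired at a 25-degree solar elevation:
# corrected = sun_elevation_correction(band, 25.0)
```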

Sensor calibration
Necessary to generate absolute data on physical properties:
Reflectance
Temperature
Emissivity
Backscatter
Calibration values are provided by the data provider/agency.

Terrain effects
Terrain effects cause differential solar illumination: some slopes receive more sunlight than others, changing the magnitude of reflected radiance reaching the sensor. Topographic slope and aspect introduce radiometric distortion, described by the bidirectional reflectance distribution function (BRDF). Correction requires a DEM and the sun elevation.

BRDF
BRDF gives the reflectance of a target as a function of illumination geometry and viewing geometry. The BRDF depends on wavelength and is determined by the structural and optical properties of the surface, such as shadow-casting, multiple scattering, mutual shadowing, transmission, reflection, absorption and emission by surface elements, facet orientation distribution and facet density. The BRDF is needed in remote sensing for the correction of view and illumination angle effects

Radiometric correction
The atmosphere causes selective scattering, absorption, and emission. The total radiance received at the sensor depends on the ground radiance (directly reflected) and on radiation scattered by the atmosphere (path radiance). The relationship between radiance received at the sensor (above the atmosphere) and radiance leaving the ground is:

Ls = ρTH/π + Lp

where Ls is the at-sensor radiance, H the total downwelling radiance (incident energy), ρ the reflectance of the target (albedo), T the atmospheric transmittance, and Lp the atmospheric path radiance (wavelength dependent).

In the solar reflection region, scattering is dominant, causing path radiance. Path radiance causes (1) a reduction in contrast due to a masking effect, making dark objects appear less dark and bright objects less bright, and (2) the adjacency effect: atmospheric scattering may direct some radiation away from the sensor FOV, causing a decrease in the effective spatial resolution of the sensor. Two methods for correction: dark object subtraction, and atmospheric models such as MODTRAN (http://modtran.org/) and 6S (Second Simulation of a Satellite Signal in the Solar Spectrum; http://rtcodes.ltdri.org/).

Dark object subtraction


The most common atmospheric effect on remotely sensed imagery is an increase in DN values due to haze, etc. This increase represents error and should be removed. Dark object subtraction simply involves subtracting the minimum DN value in the image from all pixel values. This approach assumes that the minimum value (i.e. the darkest object in the image) should be zero; the darkest object is typically water or shadow.
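A minimal per-band sketch of dark object subtraction as described above:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the minimum DN (the 'dark object', typically deep water or
    shadow) from every pixel, clipping the result at zero."""
    return np.clip(band.astype(int) - int(band.min()), 0, None)
```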

Atmospheric corrections are needed in three cases:

1. When we want the ratio of two bands (scattering is inversely proportional to wavelength: shorter wavelengths scatter more).
2. When we want to compare upwelling radiance from a surface to some property of that surface in terms of a physically based model.
3. When comparing satellite data acquired on different dates, as the state of the atmosphere changes from time to time.

Radiometric corrections for illumination, atmospheric influences, and sensor characteristics are done prior to distribution of the data to the user.

Atmospheric correction: Beijing, China, May 3, 2001

Geometric correction
Geometric correction is the process of rectifying the geometric errors introduced into the imagery during its acquisition. It is the process of transforming a remotely sensed image so that it has the scale and projection properties of a map. A related technique, registration, is the fitting of the coordinate system of one image to that of a second image of the same area. Geocoding and georeferencing are terms often used in connection with geometric correction. The basic concept behind geocoding is the transformation of satellite images into a standard map projection so that image features can be accurately located on the earth's surface and the image can be compared with other maps and images.

Scan Skew: The main source of geometric error in satellite data is satellite path orientation (non-polar).

Reasons for image rectification


For scene-to-scene comparison of individual pixels in applications such as change detection or thermal inertia mapping
For GIS data and GIS modeling
For identifying training samples according to map coordinates
For creating accurately scaled photomaps
To overlay an image with vector data such as ARC/INFO
For extracting accurate area and distance measurements
For mosaicing
To compare images that are originally at different scales

The geometric correction process involves: 1. Identifying the image coordinates of several clearly identifiable points, called ground control points (GCPs), in the distorted image (A: A1 to A4). 2. Matching them to their true positions in ground coordinates, obtained from a map (B: B1 to B4) or from another image. This is called image-to-map or image-to-image registration.
To geometrically correct the original distorted image, resampling is used to determine the DN values of the new pixel locations in the corrected image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. Nearest neighbour, bilinear interpolation, and cubic convolution are three resampling methods. The nearest neighbour method uses the DN value from the original image that is nearest to the new pixel location in the corrected image.
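A sketch of the transformation-fitting step: a first-order (affine) transform estimated from GCP pairs by least squares. The affine choice and the function name are illustrative; higher-order polynomials are also commonly fitted.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform from GCP image coordinates to map
    coordinates. img_xy, map_xy: (n_gcps, 2) arrays, n_gcps >= 3."""
    A = np.hstack([img_xy, np.ones((len(img_xy), 1))])
    coef, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
    return coef   # (3, 2): apply as np.hstack([xy, 1]) @ coef -> map coords
```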

Resampling methods

New DN values are assigned in three ways:

a. Nearest neighbour: the pixel in the new grid gets the value of the closest pixel from the old grid; retains original DNs.
b. Bilinear interpolation: the new pixel gets the weighted average of the 4 (2 x 2) nearest pixels; smoother but synthetic.
c. Cubic convolution: the new pixel DN is computed by weighting the 16 (4 x 4) surrounding DNs; smoothest.
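A sketch of nearest-neighbour resampling for the simple case of a pure scale change (a real geometric correction would map each output pixel through the fitted polynomial transformation rather than a linear spacing):

```python
import numpy as np

def nearest_neighbour_resample(src, rows_out, cols_out):
    """For each output pixel, take the value of the nearest input pixel,
    so the original DNs are retained unchanged."""
    r_idx = np.round(np.linspace(0, src.shape[0] - 1, rows_out)).astype(int)
    c_idx = np.round(np.linspace(0, src.shape[1] - 1, cols_out)).astype(int)
    return src[np.ix_(r_idx, c_idx)]
```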

http://www.geo-informatie.nl/courses/grs20306/course/Schedule/Geometric-correction-RS-new.pdf

Image Enhancement
Visual analysis and interpretation are often sufficient to extract information from remote sensing data. Enhancement means altering the appearance of a digital image so as to make it more informative for visual interpretation. The characteristics of each image, in terms of the distribution of pixel values, will change from one area to another.

Image transformation and filtering are also used as image enhancement techniques.

For visual enhancement, changing image contrast is one of the most important exercises.
Contrast is defined as the difference in colour that makes an object (or its representation in an image) distinguishable.

The range of brightness values present in an image is also referred to as contrast.

In raw imagery, the useful data often cover only a small portion of the available range of digital values (for example, 8 bits or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The concept of an image histogram is important for contrast enhancement: a histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph, and the frequency of occurrence of each DN value is shown on the y-axis.

By manipulating the range of DN values represented by the histogram of an image, various contrast enhancement techniques can be applied. The simplest type is a linear contrast stretch. This involves identifying the minimum and maximum brightness values in the image (lower and upper bounds from the histogram) and applying a transformation to stretch this range to fill the full range.
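A minimal sketch of a linear (min-max) contrast stretch to an 8-bit display range:

```python
import numpy as np

def linear_stretch(band, out_max=255):
    """Stretch the occupied DN range [min, max] to fill 0..out_max."""
    lo, hi = float(band.min()), float(band.max())
    if hi == lo:                      # constant image: nothing to stretch
        return np.zeros_like(band, dtype=np.uint8)
    return ((band.astype(float) - lo) / (hi - lo) * out_max).astype(np.uint8)
```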

Histogram equalization: the histogram is transformed so that all brightness levels occur with approximately equal frequency across the whole range. This method expands some parts of the DN range at the expense of others by dividing the histogram into classes containing equal numbers of pixels.
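A sketch of histogram equalization via the normalized cumulative histogram, assuming an 8-bit integer band:

```python
import numpy as np

def histogram_equalize(band, levels=256):
    """Map DNs through the normalized cumulative histogram so the output
    uses the grey range more evenly. Assumes integer DNs in 0..levels-1."""
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to 0..1
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band]
```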

Piecewise linear stretch: used when the histogram is bimodal. In this approach a number of linear enhancement steps that expand the brightness ranges between breakpoints are used.

Density slicing: combining DNs within specified value ranges into single values. This transforms the image from a continuum of grey tones into a limited number of grey or colour tones reflecting the specified DN ranges. It is useful for displaying weather satellite information.
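A one-line density-slicing sketch with NumPy, using arbitrary example breakpoints:

```python
import numpy as np

def density_slice(band, bounds=(50, 100, 150, 200)):
    """Map continuous DNs into a few slices; each slice can then be shown
    as one grey or colour tone. The breakpoints are illustrative."""
    return np.digitize(band, bounds)
```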

Filtering
Filtering includes a set of image processing functions used to enhance the appearance of an image. Filter operations can be used to sharpen or blur images, to selectively suppress image noise, to detect and enhance edges, or to alter the contrast of the image.

Two broad categories


Spatial domain filtering
Frequency domain filtering

Spatial Domain Filtering


An image enhancement method that modifies pixel values based on the values of the surrounding pixels, with the objective of enhancing areas of high or low spatial frequency. The spatial frequency of a remotely sensed image is defined by the change in brightness value per unit distance in any part of the image. Rapid variations in brightness levels reflect a high spatial frequency; 'smooth' areas with little variation in brightness level or tone are characterized by a low spatial frequency.

Spatial filtering
Low pass filters: used to emphasize large homogeneous areas of similar tone and reduce the smaller detail. Low-frequency areas are retained, resulting in a smoother appearance of the image. Examples: average, median and majority filters.

Original Image with a profile line

Low-Pass Filtered Image and profile line

Spatial filtering
In spatial domain filtering, a moving window of pixels (e.g. 3 x 3 or 5 x 5) is passed over each pixel in the image. A mathematical calculation using the pixel values under the window is applied, and the central pixel of the window is replaced by the result. This window is called a convolution kernel.
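A sketch of the moving-window idea with a 3 x 3 averaging kernel (a deliberately slow, loop-based version for clarity; edge pixels are left unchanged):

```python
import numpy as np

def mean_filter_3x3(band):
    """Replace each interior pixel by the mean of its 3 x 3 neighbourhood."""
    out = band.astype(float).copy()
    for r in range(1, band.shape[0] - 1):
        for c in range(1, band.shape[1] - 1):
            out[r, c] = band[r-1:r+2, c-1:c+2].mean()
    return out
```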

Convolution Filters (ERDAS)


1. Edge detection/enhancement
2. Low pass/high pass
3. Horizontal
4. Vertical
5. Sharpen
6. Summary

Edge detection filters highlight linear features, such as roads or field boundaries. These filters are useful in applications such as geology, for the detection of linear geologic structures (lineaments), and are used to determine the boundaries of homogeneous regions in radar images. The Roberts and Sobel filters (high-pass filters) are most widely used.
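A sketch of Sobel edge detection: the image is convolved with horizontal and vertical gradient kernels and the two responses are combined into a gradient magnitude:

```python
import numpy as np

def sobel_magnitude(band):
    """Sobel edge detection (loop-based for clarity; borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = band.astype(float)
    gx, gy = np.zeros_like(img), np.zeros_like(img)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            win = img[r-1:r+2, c-1:c+2]
            gx[r, c] = (win * kx).sum()
            gy[r, c] = (win * ky).sum()
    return np.hypot(gx, gy)   # gradient magnitude
```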

Example images: original, median filtered, edge detection, high pass.

Frequency Domain Filter


The Fourier transform of an image is expressed by an amplitude spectrum, obtained by breaking the image down into its frequency components. Filtering of these components is done using frequency domain filters, which operate on the amplitude spectrum of an image by removing, attenuating or amplifying the amplitudes in specific wavebands. The frequency domain can be represented as a 2D plot known as the Fourier spectrum, in which lower frequencies fall at the centre and progressively higher frequencies are plotted outwards.

Fourier spectrum of ETM+ PAN image

Converted image

After applying the circular mask

Frequency domain filtering


1. Compute the Fourier transform of the original image to obtain its Fourier spectrum.
2. Select an appropriate filter function and multiply it by the elements of the Fourier spectrum.
3. Perform an inverse Fourier transform to obtain an image in the spatial domain.
Ideal, Bartlett, Butterworth, Gaussian and Hanning are some of the filter functions used for frequency domain filtering.
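The three steps above in a minimal NumPy sketch, using an ideal circular low-pass mask (the radius value is an arbitrary illustration):

```python
import numpy as np

def fft_lowpass(band, radius=30):
    """Frequency-domain low-pass: FFT, multiply the shifted spectrum by an
    ideal circular mask centred on the low frequencies, inverse FFT."""
    spec = np.fft.fftshift(np.fft.fft2(band.astype(float)))
    rows, cols = band.shape
    r, c = np.ogrid[:rows, :cols]
    mask = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
```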

Image Transformation
A process through which one can re-express the information content of an image. Image transformations generate new images from two or more source images, which highlight particular features or properties of interest better than the original input images.

Various methods
Arithmetic operations
Principal component analysis
Hue, saturation and intensity (HSI) transformation
Fourier and wavelet transformations

Arithmetic operations
Image addition
Image subtraction
Image division (image ratioing)
Image multiplication
These operations are performed on two or more coregistered images of the same area. Division is the most widely used operation for geological, ecological and agricultural applications.

Vegetation indices (VIs)


Image division serves to highlight subtle variations in the spectral responses of various surface covers. By ratioing the data from two different spectral bands, the resultant image enhances variations in the slopes of the spectral reflectance curves between the two different spectral ranges that may otherwise be masked by the pixel brightness variations in each of the bands. VIs are combinations of surface reflectance at two or more wavelengths designed to highlight a particular property of vegetation.

More than 150 VIs have been published in the scientific literature so far; only a small subset have a substantial biophysical basis or have been systematically tested. Important VIs are:
Normalized Difference Vegetation Index (NDVI)
Simple Ratio Index (SR)
Enhanced Vegetation Index (EVI)
Atmospherically Resistant Vegetation Index (ARVI)
Soil Adjusted Vegetation Index (SAVI)

NDVI = (NIR - R) / (NIR + R)
SR = NIR / R
EVI = 2.5 (NIR - R) / (NIR + 6R - 7.5B + 1)
ARVI = (NIR - (2R - B)) / (NIR + (2R - B))
SAVI = (NIR - R)(1 + L) / (NIR + R + L)
where NIR = near infrared, R = red, B = blue and L = a soil-brightness-dependent correction factor.
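A per-pixel NDVI sketch on coregistered NIR and red bands (guarding against division by zero):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R), computed per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    return np.where(denom > 0, (nir - red) / denom, 0.0)
```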

Principal component Analysis (PCA)


An image transformation technique based on processing the statistical characteristics of multi-band data sets to produce new bands, called components. It can be used to reduce data redundancy and the correlation between bands. PCA attempts to pack the maximum amount of information (or variance) from the original data into the smallest number of new components.
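A sketch of PCA on a band stack via the band-to-band covariance matrix; components come out ordered by decreasing variance:

```python
import numpy as np

def principal_components(bands):
    """bands: array of shape (n_bands, rows, cols). Returns component
    images of the same shape, ordered by decreasing variance."""
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)      # centre each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]       # largest variance first
    comps = eigvecs[:, order].T @ X         # project pixels onto eigenvectors
    return comps.reshape(n, rows, cols)
```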

Example: actual image and its first three principal components (PCA1, PCA2, PCA3).

HSI transform
Images are generally displayed in RGB colours (primary colours). An alternative is the hue, saturation and intensity (HSI) system. Hue refers to the average wavelength of the light contributing to the colour; saturation specifies the purity of the colour relative to grey; intensity relates to the total brightness of a colour. This system is used for image enhancement.
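One common RGB-to-HSI conversion, sketched for channels scaled to 0..1 (several variants of the HSI model exist; this is just one of them):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """RGB (0..1) -> hue in degrees, saturation and intensity in 0..1."""
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)   # angle measured the long way round
    return h, s, i
```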

RGB image from HSI

HSI image of the first three principal components

Image Classification
A process of assigning pixels, or groups of pixels, in an image to one of a number of classes. The output of image classification is a thematic map: a map that focuses on a specific theme or subject (such as land use/cover in remote sensing). Land cover is the natural landscape recorded as surface components (forest, water, wetlands, urban, etc.); it can be documented by analyzing the spectral signatures of satellite and aerial imagery. Land use is the way humans use the landscape (residential, commercial, agricultural, etc.); it can be inferred, but not explicitly derived, from satellite and aerial imagery.

Conventional multispectral classification techniques perform class assignments based only on the spectral signatures of a classification unit. Contextual classification refers to the use of spatial, temporal, and other related information in addition to the spectral information of a classification unit. Object-based classification is based on segmentation techniques. A classification unit may be a pixel, a group of neighbouring pixels, or the whole image.

Classification Approaches
There are three approaches to pixel labeling:
1. Supervised
2. Unsupervised
3. Semi-supervised

Steps in supervised classification


1. Definition of information classes.
2. Training/calibration site selection: locate areas of known classes on the image.
3. Generation of statistical parameters (if a statistical classifier is used): define the unique spectral characteristics of the selected pixels.
4. Classification: assignment of unknown pixels to the appropriate information class.
5. Accuracy assessment: use test/validation data to assess accuracy.
6. Output stage.

Diagram: principle of supervised classification. Training pixels and test pixels are drawn from the full image data; the training pixels drive the classifier used and the test pixels assess it.

Supervised classification
Requires training areas to be defined by the analyst in order to determine the characteristics of each class.

Ground reference image of the study area

Result of supervised classification

Land Cover Map

Legend: Water Conifer Deciduous

Supervised classifiers
Maximum likelihood
Neural network
Decision tree
Kernel-based sparse classifiers
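As an illustration of the first of these, a minimal Gaussian maximum likelihood classifier sketch (the function names, class-name dictionary and array shapes are assumptions of this example, not a standard API):

```python
import numpy as np

def train_mlc(samples):
    """samples: dict of class name -> (n_pixels, n_bands) training array.
    Returns each class's mean vector and covariance matrix."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in samples.items()}

def classify_mlc(pixels, stats):
    """Assign each pixel (n_pixels, n_bands) to the class with the highest
    Gaussian log-likelihood."""
    names, scores = list(stats), []
    for c in names:
        mean, cov = stats[c]
        diff = pixels - mean
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (np.log(np.linalg.det(cov)) + maha))
    return np.array(names)[np.argmax(np.stack(scores), axis=0)]
```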

Diagram: back-propagation neural network classifier, with weights w_ij connecting input layer i to hidden layer j and weights w_jk connecting hidden layer j to output layer k.

By maximum likelihood classifier

By neural network classifier

By support vector machine

Accuracy
Classification accuracy with the ETM+ dataset using 7 classes:

Classifier                 Accuracy (%)
Maximum likelihood         82.60
Decision tree              85.60
Neural network             85.10
Support vector machine     87.37

Unsupervised classifier
Unsupervised classification searches for natural groups of pixels, called clusters, within the data by assessing the relative locations of the pixels in feature space. An algorithm identifies unique clusters of points in feature space, which are then assumed to represent unique land cover classes. The number of clusters (i.e. classes) is defined by the user. These are automated procedures and therefore require minimal user interaction.

Unsupervised classifiers
The most popular clustering algorithms used in remote sensing image classification are: 1. ISODATA, a statistical clustering method, and 2. the SOM (self-organising feature map), an unsupervised neural classification method.

ISODATA
Iterative Self-Organizing Data Analysis Technique
Parameters you must enter include:
N: the maximum number of clusters wanted (depends on the analyst's knowledge of the area)
T: a convergence threshold, the percentage of pixels whose cluster assignment must remain unchanged between iterations for the algorithm to stop
M: the maximum number of iterations to be performed

ISODATA Procedure
1. N arbitrary cluster means are established.
2. The image is classified using a minimum distance classifier.
3. A new mean for each cluster is calculated.
4. The image is classified again using the new cluster means, new means are calculated, and so on.

After each iteration the algorithm calculates the percentage of pixels that remained in the same cluster between iterations. When this percentage exceeds T (the convergence threshold) the program stops; if the convergence threshold is never met, the program continues for M iterations and then stops.
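A simplified sketch of the loop above (plain iterative minimum-distance clustering with a convergence threshold; the splitting and merging steps of full ISODATA are omitted):

```python
import numpy as np

def isodata_like(pixels, n_clusters, t=0.95, max_iter=20, seed=0):
    """pixels: (n_pixels, n_bands). Stop when the fraction of pixels keeping
    their cluster exceeds the convergence threshold t, or after max_iter."""
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), n_clusters,
                              replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dist.argmin(axis=1)        # minimum distance classifier
        if labels is not None and (new_labels == labels).mean() >= t:
            break                               # convergence threshold met
        labels = new_labels
        for k in range(n_clusters):
            if (labels == k).any():
                means[k] = pixels[labels == k].mean(axis=0)
    return labels, means
```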

Semi-supervised classification
Uses a small amount of labeled training data to label a large amount of unlabeled data, because the collection of training data is expensive.

The basic assumptions of most semi-supervised learning algorithms:

Nearby points are likely to have the same label.
Similar data should have the same class label.
Two points connected by a path going through high-density regions should have the same label.

General image classification procedures


1. Select information classes such as urban, agriculture, forest areas, etc.; conduct field studies and collect ground information and other ancillary data for the study area.
2. Pre-process the image: radiometric, atmospheric, geometric and topographic corrections, and image enhancement.
3. Create a reference image from the ground survey and the actual image to generate training signatures.
4. Classify the image: supervised mode using the training signatures, or unsupervised mode via image clustering and cluster grouping.
5. Post-process: complete geometric correction and filtering, and classification decoration.
6. Assess accuracy: compare classification results with ground truth.

Parametric and non parametric


Parametric classifiers are based on statistical parameters (e.g. the mean and standard deviation used by MLC) and on the assumption that the data are normally distributed. Non-parametric methods make no assumptions about the probability distribution of the data and are often considered robust because they can work well for a wide variety of class distributions, as long as the class signatures are reasonably distinct (e.g. NN, SVM, DT).

Image/Photo Interpretation
The act of examining images for the purpose of identifying objects and judging their significance. Depending upon the instruments employed for data collection, one can interpret a variety of images: aerial photographs, scanner, thermal and radar imagery. Even digitally processed imagery requires image interpretation.

Basic principles of interpretation


An image is a pictorial representation of the pattern of a landscape, composed of elements: indicators of things and events that reflect the physical, biological, and cultural components of the landscape. Similar conditions in similar surroundings produce similar patterns, and unlike conditions produce unlike patterns. The type and nature of the extracted information depend on the knowledge, skill, and experience of the interpreter, the method used for interpretation, and an understanding of its limitations.

Elements of image interpretation


Shape
Size
Tone
Shadow
Pattern
Texture
Association
Site
Time

Shape
Shape refers to the general form or outline of an individual object. Man-made features have specific shapes. A railway is readily distinguishable from a road or a canal, as its shape consists of long straight tangents and gentle curves, as opposed to the curved shape of a highway.

Size
Length, width, height, area, volume of the object. It is a function of image scale.

Tone/colour
The tone of an object refers to its relative brightness or colour in an image, and is one of the fundamental elements used to differentiate between objects. It is a qualitative measure. No feature has a constant tone: tone varies with the reflectivity of the object, the weather, the angle of light on the object and the moisture content of the surface. The sensitivity of tone to all these variables makes it a very discriminating factor. Slight changes in the natural landscape are more easily comprehended because of tonal variations.

Shadow
A shadow provides information about the object's height, shape, and orientation (e.g. tree species);

Patterns
Similar to shape, the spatial arrangement of objects (e.g. row crops vs. pasture) is also useful for identifying an object and its usage. Spatial phenomena such as the structural pattern of an object in an image may be characteristic of artificial as well as natural objects: parceling (plots of land) patterns, land use, the geomorphology of tidal marshes or shallows, land reclamation, erosion gullies, tillage, planting direction, ridges of sea waves, lake districts, natural terrain, etc.

Texture
The frequency of tonal change in a particular area of an image. A qualitative characteristic, generally described as rough or smooth. Texture involves the total sum of tone, shape, pattern and size, which together give the interpreter an intuitive feeling for the landscape being analyzed.

Association
Associating the presence of one object with another, or relating it to its environment, can help identify the object (e.g. industrial buildings often have access to railway sidings; nuclear power plants are often located beside large bodies of water). For example, white irregular patches adjacent to the sea suggest sand.

Site
The location of an object amidst certain terrain characteristics shown by the image may exclude incorrect conclusions, e.g. the site of an apartment building is not plausible in a swamp or a jungle.

Time
Temporal characteristics of a series of photographs can be helpful in determining the historical change of an area (e.g. looking at a series of photos of a city taken in different years can help determine the growth of suburban neighborhoods).

Activities in image interpretation


Detection: selectively picking out the objects of importance for the particular kind of interpretation.
Recognition and identification: classification of an object by means of specific knowledge, within a known category, upon its detection in the image.
Analysis: the process of separating sets of similar objects; involves drawing boundary lines.
Deduction: separating different groups of objects and deducing their significance based on converging evidence.
Classification: establishment of the identity of objects delineated by analysis.
Idealization: standardization of the representation of what is actually seen in the imagery.

Applications

Assessment of Groundwater Quality for Potability


Journal of Geographic Information System, 2010, 2, 152-162. Water quality management is an important issue in modern times. Data collected for Tiruchirappalli city were used to develop the approach. Groundwater quality for potability indicated high to moderate pollution levels in the Srirangam, Ariyamangalam, Golden Rock and K. Abisekapurm zones of the study area, depending on factors such as depth to groundwater, constituents of the groundwater and vulnerability of the groundwater to pollution.

Groundwater vulnerability mapping


Published in Applied Geography by Barnali Dixon The overall goal of this research is to improve the methodology for the generation of contamination potential maps by using detailed landuse/pesticide and soil structure information in conjunction with selected parameters from the DRASTIC model. Other objectives are to incorporate GIS, GPS, remote sensing and the fuzzy rule-based model to generate groundwater sensitivity maps, and to compare the results of new methodologies with the modified DRASTIC Index (DI) and field water quality data.

DRASTIC, an overlay and index method developed for the Environmental Protection Agency (EPA) by the American Water Well Association (Aller et al., 1987), is a widely used model. It assesses the contamination potential of an area by combining the key factors influencing solute transport. The original DRASTIC Index (DIorg) was calculated using the most important hydrogeologic factors that affect the potential for groundwater pollution. The DRASTIC Index does not provide absolute answers; it only differentiates highly vulnerable areas from less vulnerable ones. The model does not include soil structure.

The landuse data used in this study were obtained from 1992 Landsat 5 Thematic Mapper (TM) imagery. TM images from two seasons, spring and summer, were used. The imagery was classified into four Level I classes: urban, forest, water and agriculture. The images were then further classified for agricultural crops.

Satellite remote sensing of surface air quality

In Atmospheric Environment 42 (2008) 7823-7843, by Randall V. Martin. Global observations are now available for a wide range of species including aerosols, tropospheric O3, tropospheric NO2, CO, HCHO, and SO2.

HCHO (formaldehyde) columns are closely related to surface VOC (volatile organic compound) emissions, since HCHO is a high-yield intermediate product of the oxidation of reactive non-methane VOCs.

Hyperspectral Remote Sensing of Water Quality Parameters for Large Rivers in the Ohio River Basin
Naseer A. Shafique, Florence Fulk, Bradley C. Autrey, Joseph Flotemersch

Compact Airborne Spectrographic Imager (CASI) data was used. In situ water samples were collected and a field spectrometer was used to collect spectral data directly from the river.

Method of 2D and 3D Air Quality monitoring using a Lidar


Published in the 16th Conference on Air Pollution Meteorology. The aim was to characterize urban and industrial pollution in France. A lidar (Light Detection And Ranging) equipped with a scanning device makes it possible to map particles at industrial sites for plume detection, at urban sites to show traffic pollution, and in a heavily trafficked tunnel. In this experiment a lidar operating at 355 nm with a spatial resolution of 1.5 m was used. It is equipped with a cross-polarised channel that discriminates non-spherical particles from the others. For these measurements the lidar was placed in a horizontal position. The lidar signal is inverted using the so-called slope method; from this calculation the backscatter profile is retrieved, an extinction value is calculated along the optical path, and the plumes are detected very accurately.

Horizontal scanning from Lyon near a tunnel. We can observe a huge concentration of particles at the intersection of many roads.

Mapping of heavy metal pollution in stream sediments using combined geochemistry, field spectroscopy, and hyperspectral remote sensing

The aim of this study is to derive parameters from spectral variations associated with heavy metals in soil and to explore the possibility of extending the use of these parameters to hyperspectral images and to map the distribution of areas affected by heavy metals on HyMAP data. Variations in the spectral absorption features of lattice OH and oxygen on the mineral surface due to the combination of different heavy metals were linked to actual concentrations of heavy metals.

Location map and HyMAP image of the Rodalquilar area, SE Spain. (a) Locations of sampling points along the studied main stream, showing the five sections. (b) HyMAP image acquired in 2004.

Soil Mapping

Soil survey and mapping using remote sensing. Tropical Ecology 43(1): 61-74, 2002. M.L. Manchanda, M. Kudrat & A.K. Tiwari

Impact of industrialization on forest and non-forest land: assessing the impact of industrialization in terms of LULC in a dry tropical region (Chhattisgarh), India, using remote sensing data and GIS over a period of 30 years. Environ Monit Assess (2009) 149:371-376

Multi-sensor data fusion for the detection of underground coal fires. X.M. Zhang, C.J.S. Cassells & J.L. van Genderen, Geologie en Mijnbouw 77: 117-127, 1999.

http://www.fas.org/irp/imint/docs/rst/Front/tofc.html (for a large number of applications of remote sensing data)

www.cof.orst.edu/cof/teach/...powerpoint.../imageclassification.ppt
http://www.nrcan.gc.ca/earth-sciences/geography-boundary/remotesensing/fundamentals/
http://nature.berkeley.edu/~gong/textbook/
rst.gsfc.nasa.gov/
http://nptel.iitm.ac.in/courses/Webcourse-contents/IITKANPUR/ModernSurveyingTech/ui/TOC1.htm
http://www.ccrs.nrcan.gc.ca/glossary/
http://maic.jmu.edu/sic/rs/resolution.htm
http://www.gis.unbc.ca/courses/geog432/lectures/lect7/index.php
