
Contrast (vision)

From Wikipedia, the free encyclopedia

The left side of the image has low contrast; the right has higher contrast.

Changes in the amount of contrast in a photo

Contrast is the difference in luminance and/or color that makes an object (or its representation in an image or display) distinguishable. In visual perception of the real world, contrast is determined by the difference in the color and brightness of the object and other objects within the same field of view. Because the human visual system is more sensitive to contrast than to absolute luminance, we can perceive the world similarly despite the huge changes in illumination over the day or from place to place. The maximum contrast of an image is its contrast ratio or dynamic range. Contrast is also the difference between the color or shading of the printed material on a document and the background on which it is printed, for example in optical character recognition.


Contents

1 Biological contrast sensitivity
2 Formula
    2.1 Weber contrast
    2.2 Michelson contrast
    2.3 RMS contrast
3 Contrast sensitivity
    3.1 Contrast sensitivity and visual acuity
    3.2 Improving contrast sensitivity
4 See also
5 References
6 External links

Biological contrast sensitivity

The human contrast sensitivity function shows a typical band-pass filter shape peaking at around 4 cycles per degree, with sensitivity dropping off on either side of the peak.[1] This tells us that the human visual system is most sensitive in detecting contrast differences occurring at 4 cycles per degree; at this spatial frequency humans can detect lower contrast differences than at any other spatial frequency. The high-frequency cut-off, typically about 60 cycles per degree, represents the optical limitations of the visual system's ability to resolve detail and is related to the packing density of the retinal photoreceptor cells: a finer matrix can resolve finer gratings.

The low-frequency drop-off is due to lateral inhibition within the retinal ganglion cells. A typical retinal ganglion cell has a centre region with either excitation or inhibition and a surround region with the opposite sign. With coarse gratings, the bright bands fall on the inhibitory as well as the excitatory region of the ganglion cell, and the resulting lateral inhibition accounts for the low-frequency drop-off of the human contrast sensitivity function. One experimental phenomenon is the inhibition of blue in the periphery if blue light is displayed against white, leading to a yellow surround. The yellow is derived from the inhibition of blue on the surroundings by the center: since white minus blue is red and green, these mix to become yellow.[2]

In the case of graphical computer displays, contrast depends on the properties of the picture source or file and the properties of the computer display, including its variable settings. For some screens the angle between the screen surface and the observer's line of sight is also important.
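The band-pass shape described above can be sketched numerically. The snippet below uses the Mannos and Sakrison analytic approximation of the CSF as an assumed stand-in for the measured curve (other published fits exist, and this one peaks somewhat above the ~4 cycles/degree quoted here); it only illustrates the qualitative band-pass behaviour:

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Mannos-Sakrison analytic approximation of the human contrast
    sensitivity function; f is spatial frequency in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Evaluate on a grid of spatial frequencies from 0.5 to 60 cycles/degree.
freqs = np.linspace(0.5, 60.0, 500)
sens = csf_mannos_sakrison(freqs)
peak_freq = freqs[np.argmax(sens)]

# Band-pass behaviour: sensitivity falls off on both sides of the peak,
# and is tiny by the ~60 cpd resolution limit mentioned above.
print(f"peak near {peak_freq:.1f} cycles/degree")
print(f"relative sensitivity at 0.5 cpd: {sens[0] / sens.max():.2f}")
print(f"relative sensitivity at 60 cpd:  {sens[-1] / sens.max():.3f}")
```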


An image of the Notre Dame cathedral as seen from the Eiffel Tower

The same image, with added global contrast, and local contrast (acutance) increased through unsharp masking.

Formula

There are many possible definitions of contrast. Some include color; others do not. Travnikova laments, "Such a multiplicity of notions of contrast is extremely inconvenient. It complicates the solution of many applied problems and makes it difficult to compare the results published by different authors."[3] Various definitions of contrast are used in different situations. Here, luminance contrast is used as an example, but the formulas can also be applied to other physical quantities. In many cases, the definitions of contrast represent a ratio of the type

    (luminance difference) / (average luminance)

The rationale behind this is that a small difference is negligible if the average luminance is high, while the same small difference matters if the average luminance is low (see Weber-Fechner law). Below, some common definitions are given.

Weber contrast
The Weber contrast is defined as

    C = (I - I_b) / I_b

with I and I_b representing the luminance of the features and the background luminance, respectively. It is commonly used in cases where small features are present on a large uniform background, i.e. the average luminance is approximately equal to the background luminance.

Michelson contrast
The Michelson contrast[4] (also known as the visibility) is commonly used for patterns where both bright and dark features are equivalent and take up similar fractions of the area. The Michelson contrast is defined as

    C = (I_max - I_min) / (I_max + I_min)

with I_max and I_min representing the highest and lowest luminance. The denominator represents twice the average of the luminance.[5]

RMS contrast
Root mean square (RMS) contrast does not depend on the spatial frequency content or the spatial distribution of contrast in the image. RMS contrast is defined as the standard deviation of the pixel intensities:[6]

    C = sqrt( (1 / (M*N)) * Σ_i Σ_j (I_ij - Ī)² )

where the intensities I_ij are the i-th, j-th element of the two-dimensional image of size M by N, and Ī is the average intensity of all pixel values in the image. The image I is assumed to have its pixel intensities normalized in the range [0, 1].
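As a minimal sketch (not part of the article), the three definitions translate directly into NumPy; the toy image and function names are illustrative, and luminances are assumed normalized to [0, 1]:

```python
import numpy as np

def weber_contrast(feature, background):
    """Weber contrast: (I - I_b) / I_b, for a small feature on a
    large uniform background."""
    return (feature - background) / background

def michelson_contrast(img):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    return (img.max() - img.min()) / (img.max() + img.min())

def rms_contrast(img):
    """RMS contrast: standard deviation of the pixel intensities
    (image assumed normalized to [0, 1])."""
    return img.std()

# A toy image: a brighter 0.6 patch on a uniform 0.5 background.
img = np.full((8, 8), 0.5)
img[3:5, 3:5] = 0.6
print(weber_contrast(0.6, 0.5))   # ~0.2
print(michelson_contrast(img))    # 0.1 / 1.1, about 0.091
print(rms_contrast(img))          # small: only 4 of 64 pixels differ
```

Note how the three measures disagree on the same image: Weber contrast looks only at the feature and background levels, Michelson only at the extremes, and RMS at the full intensity distribution.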

Contrast sensitivity
Contrast sensitivity is a measure of the ability to discern between luminances of different levels in a static image. Contrast sensitivity varies between individuals, reaching a maximum at approximately 20 years of age and at spatial frequencies of about 2-5 cycles/degree. In addition, it can decline with age and also due to other factors such as cataracts and diabetic retinopathy.[7]

In this image, the contrast amplitude depends only on the vertical coordinate, while the spatial frequency depends on the horizontal coordinate. Observe that at medium frequencies you need less contrast than at high or low frequencies to detect the sinusoidal fluctuation.

Contrast sensitivity and visual acuity

Visual acuity is a parameter that is frequently used to assess overall vision. However, diminished contrast sensitivity may cause decreased visual function in spite of normal visual acuity.[8] For example, some individuals with glaucoma may achieve 20/20 vision on acuity exams, yet struggle with activities of daily living, such as driving at night.

As mentioned above, contrast sensitivity describes the ability of the visual system to distinguish bright and dim components of a static image. Visual acuity can be defined as the angle with which one can resolve two points as being separate, given that the image is shown with 100% contrast and is projected onto the fovea of the retina.[9] Thus, when an optometrist or ophthalmologist assesses a patient's visual acuity using a Snellen chart, the target image is displayed at high contrast (e.g. black letters on a white background). A subsequent contrast sensitivity exam may demonstrate difficulty with decreased contrast (e.g. grey letters on a white background).

To assess a patient's contrast sensitivity, one of several diagnostic exams may be used. Most charts in an ophthalmologist's office will show images of varying contrast and spatial frequency. Parallel bars of varying width and contrast, known as sine-wave gratings, are sequentially viewed by the patient. The width of the bars and their distance apart represent spatial frequency, measured in cycles per degree. Studies have demonstrated that medium-level spatial frequencies, approximately 5-7 cycles per degree, are optimally detected by most individuals, compared with low- or high-level spatial frequencies.[10] The contrast threshold can be defined as the minimum contrast that can be resolved by the patient. The contrast sensitivity is equal to 1/(contrast threshold).

Using the results of a contrast sensitivity exam, a contrast sensitivity curve can be plotted, with spatial frequency on the horizontal axis and contrast threshold on the vertical axis. Also known as the contrast sensitivity function (CSF), the plot demonstrates the normal range of contrast sensitivity, and will indicate diminished contrast sensitivity in patients who fall below the normal curve. Some graphs contain contrast sensitivity acuity equivalents, with lower acuity values falling in the area under the curve. In patients with normal visual acuity and concomitant reduced contrast sensitivity, the area under the curve serves as a graphical representation of the visual deficit. It is because of this impairment in contrast sensitivity that patients have difficulty driving at night, climbing stairs, and performing other activities of daily living in which contrast is reduced.[11]
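The reciprocal relationship between threshold and sensitivity can be sketched with hypothetical exam data (the threshold values below are made up for illustration):

```python
# Hypothetical contrast thresholds measured at several spatial
# frequencies (cycles/degree); a lower threshold means the grating
# is detected at fainter contrast.
thresholds = {0.5: 0.02, 2: 0.008, 6: 0.005, 12: 0.01, 18: 0.05}

# Contrast sensitivity is the reciprocal of the contrast threshold.
sensitivity = {f: 1.0 / t for f, t in thresholds.items()}

# The curve peaks at a medium spatial frequency (here 6 cpd),
# consistent with the 5-7 cycles/degree figure cited above.
best = max(sensitivity, key=sensitivity.get)
print(best, round(sensitivity[best]))  # 6 200
```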

The graph demonstrates the relationship between contrast sensitivity and spatial frequency. The target-like images are representative of the center-surround organization of neurons, with peripheral inhibition at low, intermediate and high spatial frequencies. Used with permission from Brian Wandell, PhD.

Recent studies have demonstrated that intermediate-frequency sinusoidal patterns are optimally detected by the retina due to the center-surround arrangement of neuronal receptive fields.[12] At an intermediate spatial frequency, the peaks (brighter bars) of the pattern are detected by the center of the receptive field, while the troughs (darker bars) are detected by the inhibitory periphery of the receptive field. For this reason, low and high spatial frequencies elicit both excitatory and inhibitory impulses by overlapping frequency peaks and troughs in the center and periphery of the neuronal receptive field.[13] Other environmental,[14] physiologic and anatomical factors influence the neuronal transmission of sinusoidal patterns, including adaptation.[15]

Decreased contrast sensitivity arises from multiple etiologies, including retinal disorders such as age-related macular degeneration (ARMD), lens abnormalities such as cataract, and higher-order dysfunction, including stroke and Alzheimer's disease.[16] In light of the multitude of etiologies leading to decreased contrast sensitivity, contrast sensitivity tests are useful in the characterization and monitoring of dysfunction, but less helpful in the detection of disease.

Improving contrast sensitivity

It was once thought that contrast sensitivity was relatively fixed and could only get worse with age. However, new research has shown that playing video games can slightly improve contrast sensitivity.[17]

See also

Acutance
Radiocontrast
Contrast ratio

References

1. Campbell, F. W. & Robson, J. G. (1968). "Application of Fourier analysis to the visibility of gratings". Journal of Physiology 197 (3): 551-566. PMC 1351748. PMID 5666169.
2. "eye, human." Encyclopædia Britannica. 2008. Encyclopædia Britannica 2006 Ultimate Reference Suite DVD.
3. Travnikova, N. P. (1985). Efficiency of Visual Search. p. 4. Mashinostroyeniye.
4. Michelson, A. (1927). Studies in Optics. U. of Chicago Press.
5. ^
6. Peli, E. (Oct. 1990). "Contrast in Complex Images". Journal of the Optical Society of America A 7 (10): 2032-2040. doi:10.1364/JOSAA.7.002032.
7. Wenderoth, Peter. "The Contrast Sensitivity Function".
8. Hashemi H, Khabazkhoob M, Jafarzadehpur E, Emamian MH, Shariati M, Fotouhi A. "Contrast sensitivity evaluation in a population-based study in Shahroud, Iran". Ophthalmology. 2012 Mar;119(3):541-6.
9. Sadun, AA. Optics lecture on 03/06/2013. University of Southern California.
10. Leguire LE, Algaze A, Kashou NH, Lewis J, Rogers GL, Roberts C. "Relationship among fMRI, contrast sensitivity and visual acuity". Brain Res. 2011 Jan 7;1367:162-9.
11. Sia DI, Martin S, Wittert G, Casson RJ. "Age-related change in contrast sensitivity among Australian male adults: Florey Adult Male Ageing Study". Acta Ophthalmol. 2012 Mar 16.
12. Wandell, B.A. Foundations of Vision. Chapter 5: The Retinal Representation. 1995. Sinauer Associates, Inc. Accessed on 03/23/2013.
13. Tsui JM, Pack CC. "Contrast sensitivity of MT receptive field centers and surrounds". J Neurophysiol. 2011 Oct;106(4):1888-900.
14. Jarvis JR, Wathes CM. "Mechanistic modeling of vertebrate spatial contrast sensitivity and acuity at low luminance". Vis Neurosci. 2012 May;29(3):169-81.
15. Cravo AM, Rohenkohl G, Wyart V, Nobre AC. "Temporal expectation enhances contrast sensitivity by phase entrainment of low-frequency oscillations in visual cortex". J Neurosci. 2013 Feb 27;33(9):4002-10.
16. Risacher SL, Wudunn D, Pepin SM, MaGee TR, McDonald BC, Flashman LA, Wishart HA, Pixley HS, Rabin LA, Paré N, Englert JJ, Schwartz E, Curtain JR, West JD, O'Neill DP, Santulli RB, Newman RW, Saykin AJ. "Visual contrast sensitivity in Alzheimer's disease, mild cognitive impairment, and older adults with cognitive complaints". Neurobiol Aging. 2013 Apr;34(4):1133-44.
17. "Contrast sensitivity improvement".

Signal Processing Stack Exchange is a question and answer site for practitioners of the art and science of signal, image and video processing.

Difference between Contrast and Intensity of an Image

I wanted to generate a grayscale wedge image of 10 levels in MATLAB and then increase and decrease its intensity. By high intensity, I mean mapping the grayscale intensity levels in an image from lower values to higher values. The examples I have tried were performed on a grayscale wedge image of 10 levels:
0 28 57 85 113 142 170 198 227 255

I was using the function imadjust. For increasing the intensity levels I used:
imadjust(grayStepImage, [.35 .75], [0.7 0.8])

The output:
179 179 179 179 184 192 199 204 204 204

and the result in image form:

Similarly, for lowering the intensity I used

imadjust(grayStepImage, [.35 .75], [0.1 0.39])

Original and modified levels are:

Original: 0  28  57  85  113 142 170 198 227 255
Modified: 26 26  26  26  43  64  84  99  99  99

and the result in image form:

Was I actually increasing/lowering the intensity of the image? How do those operations relate to increasing and decreasing the contrast of an image? What I know about contrast is: contrast is defined as the separation between the darkest and brightest areas of the image. Increase contrast and you increase the separation between dark and bright, making shadows darker and highlights brighter. Does that mean that if I want to increase the contrast of an image, I should map the high intensity levels of the image much higher and the low intensity levels much lower? I am confused by contrast and intensity, as they seem opposite to each other. Kindly help me out.

image-processing matlab
asked Oct 6 '12 at 0:32 (Effected), edited Oct 25 '12 at 10:20
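Assuming imadjust performs a linear map of the input range onto the output range with clipping (which matches the numbers above), its effect on the wedge can be reproduced in NumPy; imadjust_like is a hypothetical helper, not MATLAB code:

```python
import numpy as np

def imadjust_like(x, low_in, high_in, low_out, high_out):
    """Linearly map [low_in, high_in] onto [low_out, high_out]
    (ranges given as fractions of 255), clipping values outside
    the input range -- a sketch of MATLAB's imadjust for uint8."""
    lo_i, hi_i = low_in * 255, high_in * 255
    lo_o, hi_o = low_out * 255, high_out * 255
    y = (x - lo_i) / (hi_i - lo_i) * (hi_o - lo_o) + lo_o
    y = np.clip(y, min(lo_o, hi_o), max(lo_o, hi_o))
    return np.floor(y + 0.5).astype(int)  # round half away from zero

wedge = np.array([0, 28, 57, 85, 113, 142, 170, 198, 227, 255])
print(imadjust_like(wedge, 0.35, 0.75, 0.7, 0.8))
# [179 179 179 179 184 192 199 204 204 204]
print(imadjust_like(wedge, 0.35, 0.75, 0.1, 0.39))
# [26 26 26 26 43 64 84 99 99 99]
```

Both printed rows match the outputs reported in the question: the first call raises the intensities while squeezing them into a narrow band (lower contrast), and the second lowers them.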

Libor: There are many definitions of contrast. I think the simplest contrast adjustment is just a multiplication by a contrast factor c: x' = 255[(x/255 - 0.5)c + 0.5]. (Oct 6 '12 at 10:49)

Libor: More elaborate methods also take the pixel neighborhood into account. These include adaptive histogram equalization and processing in the contrast domain. (Oct 6 '12 at 10:56)

Andrey: The documentation says: "Note: If high_out < low_out, the output image is reversed, as in a photographic negative." That causes the weird effect in the second call. (Oct 6 '12 at 11:01)

4 Answers
On intensity

Simply said, it's hard to talk about the "intensity" of an image. Every pixel has its intensity (for greyscale images, the usual allowed range is [0, 255]), but the concept of an overall image intensity does not exist. If you are doing some kind of image analysis, you could be interested in a parameter describing image intensities, e.g. the mean intensity (like @geometrikal said) or the distribution of image levels (which is related to contrast).

On contrast

If you presume an idealized situation where a greyscale image contains only pixels of two intensity values, one for background pixels and one for object pixels, the contrast of the object would be the difference of those values. What it means in the displayed image: the higher the contrast, the easier it is to spot (find, locate) the object in the image (for the human eye). Again, you would usually talk about the contrast of an object in an image; it's hard to talk about the contrast of an image where it is not possible to define an object. As an example, let's look at several 1-D signals (you can produce images similar to your examples from these):
1) 0   0   0   0   20  20  0   0
2) 0   0   0   0   255 255 0   0
3) 200 200 200 200 255 255 200 200
4) 200 200 200 200 0   0   200 200
I'll assume that the two pixels different from the rest in each example are object pixels, and the rest are background. We have several cases here (and I could make more):

1. a brighter object on a dark background; all intensity values are low (i.e. the image is rather dark), contrast is low
2. a bright object on a dark background; most pixels have low intensity, contrast is high (it's a black-white image)
3. a bright object on a darker background; all intensity values are high (i.e. the image is bright), contrast is low (but higher than in the 1st example)
4. a dark object on a bright background; most pixels have high intensity, contrast is high (but not as high as in the 2nd example)

Here's another example supporting the fact that contrast is usually an attribute describing an object, either in relation to itself or to its surroundings. Another important fact to notice from this example is that contrast generally describes the difference between intensity levels, but the precise definition depends on the application (the purpose of the measurement). Let me paraphrase the contrast definition used in hierarchical image segmentation from P. Soille, L. Najman: On morphological hierarchical representations for image processing and spatial data clustering as an example: the internal contrast of a connected component (object) corresponds to the largest intensity difference between two adjacent pixels belonging to this connected component; the external contrast is defined as the smallest intensity difference between a pixel of the considered connected component and an adjacent pixel not belonging to it. Of course, a different application could use a different measurement for contrast.

On imadjust

I don't usually work in MATLAB, but from the documentation, it is used to map an intensity range (intensity levels in a given range) of an input image to an intensity range specified for the output image. The default, no-parameter call will increase the contrast of the image. But you can get a brighter/darker image with imadjust while either increasing or decreasing the object contrast. Let me demonstrate this on my 3rd example:
imadjust(I3, [0.8; 1.0], [0.0; 1.0])

should output 0 0 0 0 255 255 0 0. You would get a white object on black background. In general, the image is darker (lower intensity), but contrast is higher (black-white).
imadjust(I3, [0.8; 1.0], [0.0; 0.2])

should output 0 0 0 0 51 51 0 0. Fairly dark object on a black background. The image is darker (lower intensity) and the contrast of the object did not change.
imadjust(I3, [0.8; 1.0], [0.85; 0.9])

should output 216 216 216 216 229 229 216 216. Very bright object on a bright background. The image is brighter (higher intensity), and the contrast of the object decreased (the object's intensity levels are more similar to the intensity levels of the background).

You get the actual ranges the function is working with by multiplying them with the maximum grey level (255). For example, the range [0.2; 0.8] is actually the intensity range [51; 204]. One more thing to take care of is that the function clips the values outside the first intensity range, and maps them to the new low if they're smaller or the new high if they're larger than the range. All of my examples actually include this: the first range starts from 0.8, which maps to intensity 204, but the intensity of 200 from the input image is mapped to the output low in all the images. So it's actually just a simple scaling of image intensities (with cut-offs). Also, the default call to imadjust with only an image as an input parameter should increase contrast. I'd say that imadjust(I2) wouldn't do anything (there's already maximum contrast in my second example).

On contrast enhancement

A quote from P. Soille, Morphological Image Analysis: "Image contrast enhancement refers to accentuation or sharpening of image features so as to make a graphic display more useful for visualization or analysis of the image by the human eye." He also emphasizes that with enhanced contrast, image analysis by visual (human) inspection is easier. In idealized images like my examples, a computer wouldn't gain much in terms of image-analysis difficulty; e.g. for object extraction by thresholding, it would just mean different thresholds should be used to extract the object from the background. But a human examiner would spot the object much more easily in high-contrast images. This all changes somewhat in real images (objects don't have uniform grey levels), so contrast enhancement becomes useful in image analysis. There are several methods of contrast enhancement:

- point-based techniques, where the local neighborhood is not important; they are based on the analysis of grey levels through the whole image
- neighborhood-based techniques, where the local neighborhood of the pixel is important; examples are the white and black top-hat operators and the toggle contrast operator
- transform-based techniques, where the filtering is done on a transformed image before applying the inverse transform (e.g. filtering in the frequency domain after a Fourier transform)

On intensity adjustment

In terms of a greyscale image, the intensity of a pixel corresponds to its brightness. The greater the intensity, the greater the brightness. This also means that increasing intensity can be viewed as brightening the image (while decreasing intensity can be viewed as darkening the image). I would describe the process of uniformly brightening the image as increasing intensity while leaving the contrast unchanged in the whole image. This actually means adding a constant value to all the pixels.

Now, as the pixel intensity values have a predefined range, typically [0..255], a maximum intensity exists. This inevitably means that some pixels (that were different from each other before) will become white (i.e. intensity 255). This "naturally" happens when you take a photo aimed towards something very bright (the sun or another light). Away from the light, you might see some details, but at the position of the light/sun and around it, you will get only white pixels, meaning that the intensity (the amount of light when the image was taken) "hit" its maximum (displayable/storable) value. The only way you wouldn't lose details by this operation would be if the original pixel intensities occupied only a part of the possible intensity range; e.g. if the original pixels are in the range [10, 150], the image could be brightened by up to 105 intensity levels before you start to lose details.

As imadjust is meant to perform intensity scaling, it can do much more than just brighten/darken an image. If you wanted to emulate a brightening effect with imadjust, you could write something like:
imadjust(I, [0.0; x], [1.0-x; 1.0])

where x is any number between 0 and 1 (e.g. x=0.5 would brighten a [0..255] image by about 127). That said, this is overkill for such a simple operation. I'm sure MATLAB has an elementary operation that adds a scalar to all elements of a matrix, so you could just use that :)

answered Oct 24 '12 at 8:33 (penelope), edited Nov 5 '12 at 13:43

Comments:

Effected: So does that mean that the methods I performed and mentioned in the question for lowering and increasing the intensities are wrong? (Oct 24 '12 at 15:38)

penelope: @Effected I'm a girl, not a sir :) Sorry, I made a mistake in my answer and corrected it. As for what you did in the question, e.g. the first image, params [.35 .75], [0.7 0.8]: all intensities from the input image lower than or equal to 89 = 0.35*255 will map to 179 = 0.7*255, all intensities higher than or equal to 191 = 0.75*255 will map to 204 = 0.8*255, while the intensity values in the range [89, 191] will (linearly) map to the range [179, 204]. Ask if you have any further questions. (Oct 24 '12 at 18:13)

penelope: @Effected Also, as I said, it's hard to talk about "image intensity". You can talk about "pixel intensity", but as a global measure, nothing really describes the whole image intensity. E.g. if you have pixel intensities in the range [0, 127] in the input image and you map them to [128, 255] (params: [0, 0.5], [0.5, 1]) you've increased the average intensity, but the contrast stayed the same. If you map the same levels to [0, 255] ([0, 0.5], [0, 1]) you've increased the average intensity and the contrast. If you map them to, say, [204, 255] ([0, 0.5], [0.8, 1]): higher average intensity, lower contrast. (Oct 24 '12 at 18:26)

Phonon: +1 Fantastic answer! (Nov 5 '12 at 23:30)

penelope: Somebody downvoted my answer yesterday... whoever it is (I hope you see it): could I maybe get an explanation in the comments? I'd like to update or correct my answer if there's something wrong in it. (Nov 8 '12 at 9:11)

An intuitive explanation

Imagine that you are outside, at daytime, and there is a heavy fog. The intensity is high, because the sun shines. But you cannot see anything because of the fog. The contrast is low: all of the rays of light that reach you have almost the same amount of energy, due to the fog. You cannot decipher the details because your eye has some quantization of grey levels, and they all look almost alike to you.

Now, you are outside at night-time, there is no fog, and the moon shines. The intensity is low, because there is no direct radiation from the sun. The contrast is high, but you fail to see the objects (except the moon) clearly, because of the low total intensity. Now you cannot decipher the details because your eye is not sensitive enough for that amount of energy.

Another, more mathematical way to think about it: consider the two following Gaussian functions, each with its own mean and standard deviation. Let's assume that they represent histograms of images.

Intensity is the mean value; contrast is the standard deviation. In the image above, the red distribution has more intensity (its center is located more to the right). The blue distribution has more contrast: it is wider.
I am confused with contrast and the intensity as they seem opposite to each other

They are not opposite, they are orthogonal. There can be 4 possibilities:

High intensity, high contrast - example: a sunny day
High intensity, low contrast - example: a sunny day with fog
Low intensity, high contrast - example: the moon at night
Low intensity, low contrast - example: a dark room

answered Oct 23 '12 at 14:46
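Reading intensity as the histogram mean and contrast as its standard deviation, the four possibilities can be checked on synthetic data (a sketch; the arrays are invented):

```python
import numpy as np

# Four tiny synthetic "images" covering the four possibilities.
sunny      = np.array([180, 220, 140, 255, 100])  # high mean, high spread
sunny_fog  = np.array([200, 205, 195, 210, 190])  # high mean, low spread
moon_night = np.array([10, 80, 0, 120, 5])        # low mean, high spread
dark_room  = np.array([10, 15, 5, 12, 8])         # low mean, low spread

for name, img in [("sunny", sunny), ("sunny + fog", sunny_fog),
                  ("moonlit night", moon_night), ("dark room", dark_room)]:
    # "intensity" = mean of the histogram, "contrast" = its std deviation
    print(f"{name:14s} intensity={img.mean():6.1f} contrast={img.std():5.1f}")
```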


edited Oct 23 '12 at 20:52

Comments:

Effected: So does that mean that, if the intensity is high, there will be fewer dark pixel values? (Oct 23 '12 at 18:56)

Andrey: @Effected, if the intensity is high, the average value of the pixels will be higher. Please see the updated answer. The value of a pixel is its intensity. (Oct 23 '12 at 20:35)

The intensity of an image could refer to a global measure of that image, such as mean pixel intensity. A relative measure of image intensity could be how bright (mean pixel intensity) the image appears compared to another image. The intensity of an image could also be how bright the image is compared to how bright the display is capable of producing. I would define a high-contrast image as one where the distribution of the pixel intensities is skewed towards both the low-intensity (e.g. 0) and high-intensity (e.g. 255) extremes of the intensity range. The imadjust command does this, e.g. Lena, mean intensity 0.48 - (0,1):

Lena histogram:

Lena after imadjust(I,[0.15 0.85],[0 1]), mean intensity 0.48 - (0,1):

Histogram after adjust:

answered Oct 6 '12 at 15:15 (geometrikal)

Intensity refers to the amount of light. For greyscale images, it is depicted by the grey-level value at each pixel (e.g., for 8-bit images, 127 is darker than 220 and brighter than 55). Contrast refers to differences between bright and dark parts. If you only look at a neighborhood around a pixel, it can be called micro-contrast or local contrast. Mathematically, any non-decreasing function of the true input grey levels is a valid contrast change. For most applications, however, you can consider tuning the contrast by multiplying the image by some constant (< 1 if you want to lower the contrast, > 1 otherwise) and adjusting the intensity of your image by adding a constant offset.
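That recipe is easy to verify numerically: multiplying by a constant scales the standard deviation (contrast), and adding an offset shifts the mean (intensity). A sketch, with the multiplication pivoted around mid-grey as in Libor's comment above (names and values are illustrative):

```python
import numpy as np

img = np.array([40.0, 80.0, 120.0, 160.0, 200.0])

def adjust(img, c=1.0, offset=0.0):
    """Scale contrast by c around mid-grey (127.5), then shift
    intensity by offset; the result is clipped to [0, 255]."""
    out = (img - 127.5) * c + 127.5 + offset
    return np.clip(out, 0, 255)

low_c  = adjust(img, c=0.5)        # std halved; mean pulled slightly
                                   # toward mid-grey
bright = adjust(img, offset=40.0)  # mean raised by 40, std unchanged

print(round(img.mean(), 2), round(img.std(), 2))        # 120.0 56.57
print(round(low_c.mean(), 2), round(low_c.std(), 2))    # 123.75 28.28
print(round(bright.mean(), 2), round(bright.std(), 2))  # 160.0 56.57
```

Note that the multiplicative step only leaves the mean exactly unchanged when the mean is already at mid-grey; here it drifts from 120 to 123.75, while the standard deviation is exactly halved.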

CCD

Colour signals are the light spectra coming either from the source or from the interaction between the illumination's spectrum and the response properties of materials. A CCD colour camera can be described as a filter which transforms continuous colour signals from a limited spectral area into three descriptor values (red, green, and blue) of a limited range. In this sense, colour cameras resemble the human eye; they cannot directly measure the spectra of colour signals because spectral accuracy is sacrificed for spatial resolution (Fortner & Meyer 1997). Since the spectral data for a point is described with three values, it is only an approximation of the true incoming colour-signal spectrum. Also, because of this spectral data compression, colour samples with different reflectances can become metameric, which means, for example, that they appear as two different colours under a certain illumination whereas under a second illumination they cannot be discriminated (Wyszecki & Stiles 2000).

According to Fortner and Meyer (1997), there are four reasons why the human eye has only three different cone types: 1) there is a limited number of available visual pigments; 2) increasing the number of different cones decreases the light sensitivity of the visual system, because a photon can be detected only once; 3) cones need space, and if more different cones are required to form a point, the area needed for seeing a point increases, reducing resolution; and 4) more different cone types would further increase the already enormous information flow to the brain.

Cameras are usually monochromatic or colour. Imaging spectrographs do exist to capture spectral data more accurately, but for them image formation takes much longer due to decreased light sensitivity. This makes them unsuitable for real-time operation and susceptible to environmental changes. Only colour cameras are considered in this thesis.

It is important to note that sensor sensitivities vary between colour cameras, which makes the descriptors camera dependent. In addition, there are two types of CCD colour cameras, 1CCD and 3CCD, depending on the number of CCD elements. The 3CCD cameras have separate CCD detectors for each colour channel, whereas in 1CCD cameras the colours for the output channels are approximated using filters covering the detector. The filters have either a stripe or a mosaic layout over the detector, and they can produce the RGB signals directly or other colours such as cyan, yellow, magenta or white (no colour filter) (Holst 1998). These signals are interpolated to produce the three output colour channels, and in the case of filters other than RGB, the channels are converted to the RGB colour space. An image taken with a 1CCD camera has poorer spatial resolution and colour-reproduction quality than one taken with a 3CCD camera because of the colour interpolation in 1CCD cameras (Klette et al. 1998). 1CCD cameras are susceptible to colour Moiré effects, which cause colour deviation. On the other hand, 3CCD cameras are more expensive and need more intense light.

Although in the modelling of colour image formation the main factors are the illumination spectral power distribution (SPD), the spectral sensitivities of the camera, and the surface reflectances, there are many other factors which can have an essential effect: scene and acquisition geometry, surroundings, camera settings, camera type and other non-idealities of the camera. The output of the colour camera is often digitized RGB (Red, Green and Blue).
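To make the 1CCD interpolation concrete, here is a toy bilinear demosaicing sketch for an RGGB Bayer mosaic (an illustration only; real cameras use more elaborate interpolation precisely to limit the Moiré and colour artefacts mentioned above):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full RGB image through an RGGB Bayer pattern:
    each pixel keeps only one of its three colour values."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic

def demosaic_bilinear(mosaic):
    """Reconstruct R, G, B planes by averaging each channel's
    available neighbours in a 3x3 window (bilinear interpolation;
    edges wrap around for simplicity)."""
    h, w = mosaic.shape
    masks = np.zeros((3, h, w), dtype=bool)
    masks[0, 0::2, 0::2] = True  # where R was sampled
    masks[1, 0::2, 1::2] = True  # where G was sampled
    masks[1, 1::2, 0::2] = True
    masks[2, 1::2, 1::2] = True  # where B was sampled
    out = np.zeros((h, w, 3))
    for c in range(3):
        vals = np.where(masks[c], mosaic, 0.0)
        weights = masks[c].astype(float)
        vsum = np.zeros((h, w))
        wsum = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                vsum += np.roll(np.roll(vals, dy, 0), dx, 1)
                wsum += np.roll(np.roll(weights, dy, 0), dx, 1)
        out[..., c] = vsum / wsum  # every 3x3 window holds each channel
    return out

# A uniform grey scene survives demosaicing exactly.
flat = np.full((8, 8, 3), 0.5)
rec = demosaic_bilinear(bayer_mosaic(flat))
print(np.allclose(rec, 0.5))  # True
```

On a uniform scene the reconstruction is exact; colour errors appear only where neighbouring pixels differ, which is exactly where 1CCD cameras lose spatial resolution and colour fidelity compared with 3CCD designs.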