
Review

Colour measurements by computer vision for food quality control – A review

Di Wu and Da-Wen Sun*
Food Refrigeration and Computerised Food
Technology (FRCFT), School of Biosystems
Engineering, University College Dublin, National
University of Ireland, Agriculture & Food Science
Centre, Belfield, Dublin 4, Ireland
(Tel.: +353 1 7167342; fax: +353 1 7167493;
e-mail: dawen.sun@ucd.ie; URLs: http://www.ucd.ie/
refrig, http://www.ucd.ie/sun)
Colour is the first quality attribute of food evaluated by consumers, and is therefore an important component of food quality relevant to market acceptance. Rapid and objective measurement of food colour is required in quality control for the commercial grading of products. Computer vision is a promising technique currently investigated for food colour measurement, especially for its ability to provide a detailed, pixel-based characterization of colour uniformity. This paper reviews the fundamentals and applications of computer vision for food colour measurement. An introduction to colour spaces and traditional colour measurements is also given. Finally, the advantages and disadvantages of computer vision for colour measurement are analyzed and its future trends are proposed.
Introduction
Colour is a mental perceptual response to the visible spectrum of light (the distribution of light power versus wavelength) reflected or emitted from an object. This signal interacts with the retina of the eye and is then transmitted to the brain through the optic nerve, leading humans to assign colours to it. Therefore, colour is not an intrinsic property of the object: if the light source is changed, the colour of the object also changes (Melendez-Martinez, Vicario, & Heredia, 2005). The perception of colour is a very complex phenomenon that depends on the composition of the object and its illumination environment, the characteristics of the perceiving eye and brain, and the angles of illumination and viewing.
In foods, appearance is a primary criterion in making purchasing decisions (Kays, 1991). Appearance is utilized throughout the production–storage–marketing–utilization chain as the primary means of judging the quality of individual units of product (Kays, 1999). The appearance of units of product is evaluated by considering their size, shape, form, colour, freshness condition and, finally, the absence of visual defects (Costa et al., 2011). In particular, colour is an important sensorial attribute that provides basic quality information for human perception, and has a close association with quality factors such as freshness, maturity, variety, desirability, and food safety; it is therefore an important grading factor for most food products (McCaig, 2002). Colour is used as an indicator of quality in many applications (Blasco, Aleixos, & Molto, 2003; Cubero, Aleixos, Molto, Gomez-Sanchis, & Blasco, 2011; Quevedo, Aguilera, & Pedreschi, 2010; Rocha & Morais, 2003). Upon the first visual assessment of product quality, colour is critical (Kays, 1999). Consumers first judge a food by its colour and then by other attributes such as taste and aroma. The colour of food products affects their consumer acceptability, and therefore should be right when consumers purchase foods. Research on the objective assessment of food colours is an expanding field, and some studies show that colours are related to human responses (Iqbal, Valous, Mendoza, Sun, & Allen, 2010; Pallottino et al., 2010).
With increased requirements for quality by consumers, the food industry has made considerable efforts to measure and control the colour of its products. It is therefore critical to develop effective colour inspection systems to measure the colour of food products rapidly and objectively during processing operations and storage periods. For a modern food plant, as food throughput increases and quality tolerances tighten, the employment of automatic methods for colour measurement and control is quite necessary.

* Corresponding author.
0924-2244/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.tifs.2012.08.004
Trends in Food Science & Technology 29 (2013) 5–20
Colour spaces
The human eye distinguishes colours according to the varying sensitivity of different cone cells in the retina to light of different wavelengths. Humans have three types of colour photoreceptor cells (cones), with sensitivity peaks at short (bluish, 420–440 nm), middle (greenish, 530–540 nm), and long (reddish, 560–580 nm) wavelengths (Hunt, 1995). A colour sensation, no matter how complex, can thus be described by the eye using three colour components. These components, called tristimulus values, are yielded by the three types of cones according to the extent to which each is stimulated. A colour space is a mathematical representation for associating tristimulus values with each colour. Generally, there are three types of colour spaces, namely hardware-orientated, human-orientated, and instrumental spaces. Some colour spaces are formulated to help humans select colours and others are formulated to ease data processing in machines (Pascale, 2003). A 3D demonstration of some colour spaces, generated by the free software RGBCube (http://www.couleur.org/index.php?pagergbcube) except for the HSV space (Mathworks, 2012), is illustrated in Fig. 1.
Hardware-orientated spaces
Hardware-orientated spaces are proposed for hardware processing, such as image acquisition, storage, and display. They can sense even a very small amount of colour variation and are therefore popular for evaluating colour changes of food products during processing, such as the effects of temperature and storage time on tomato colour (Lana, Tijskens, & van Kooten, 2005). As the most popular hardware-orientated space, RGB (red, green, blue) space is defined by coordinates on three axes, i.e., red, green, and blue. It is the way in which cameras sense natural scenes and display phosphors work (Russ, 1999). YIQ (luminance, in-phase, quadrature) and CMYK (cyan, magenta, yellow, black) are another two popular hardware-orientated spaces; they are mainly used for television transmission and for printing and copying output, respectively, and hence are not used for colour measurement in the food industry.

Fig. 1. 3D demonstration of some colour space images: (a) RGB, (b) YIQ, (c) CMY, (d) HSV (Mathworks, 2012), (e) XYZ, (f) L*a*b*, (g) L*u*v*.
Human-orientated spaces
Human-orientated spaces correspond to the concepts of tint, shade, and tone, which are defined by artists based on intuitive colour characteristics. In general, human-orientated spaces are hue–saturation (HS) based spaces, such as HSI (hue, saturation, intensity), HSV (hue, saturation, value), HSL (hue, saturation, lightness), and HSB (hue, saturation, brightness). Hue is defined as the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colours red, yellow, green, and blue, or to a combination of two of them. Saturation is defined as the colourfulness of an area judged in proportion to its brightness. Brightness, in turn, is defined as the attribute of a visual sensation according to which an area appears to emit more or less light, and lightness is defined as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting (Fairchild, 2005). Unlike RGB space, which uses cuboidal coordinates to define colour, colour in HS-based spaces is defined using cylindrical coordinates (Fig. 1d). Because HS-based spaces are developed from the concept of visual perception in human eyes, their colour measurements are user-friendly and relate better to the visual significance of food surfaces. This was demonstrated by a study in which HSV space performed better than RGB space in evaluating the acceptance of pizza toppings (Du & Sun, 2005). However, human-orientated spaces, as with human vision, are not sensitive to small variations in colour, and are therefore not suitable for evaluating changes of product colour during processing.
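As a concrete illustration of the relationship between the two families of spaces, the conversion from RGB to an HS-based space can be sketched with Python's standard colorsys module (the helper function and the 0–255 input convention are assumptions for illustration, not part of the original text):

```python
import colorsys

# Convert an RGB triplet (0-255 per channel) to the HSV colour space.
# colorsys works on floats in [0, 1]; hue is returned as a fraction of
# a full turn, so it is multiplied by 360 to give degrees.
def rgb_to_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# A pure red pixel: hue 0 degrees, full saturation, full value.
print(rgb_to_hsv(255, 0, 0))   # (0.0, 1.0, 1.0)
```

Note how the cylindrical coordinates separate chromatic content (hue, saturation) from intensity (value), which is what makes HS-based measurements map more directly onto visual descriptions of food surfaces.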
Instrumental spaces
Instrumental spaces are used by colour instruments. Many instrumental spaces are standardized by the Commission Internationale de l'Eclairage (CIE) under a series of standard conditions (illuminants, observers, and methodology spectra) (Rossel, Minasny, Roudier, & McBratney, 2006). Unlike hardware-orientated spaces, in which the same colour has different coordinates on different output media, colour coordinates from an instrumental space are the same on all output media. CIE XYZ colour space is an early mathematically defined colour space, created by the CIE in 1931 based on the physiological perception of light. In XYZ space, a set of three colour-matching functions, collectively called the Standard Observer, is related to the red, green and blue cones in the eye (The Science of Color, 1973). XYZ colour space was proposed to address the facts that it is not possible to stimulate only one type of cone and that no single component describes the perceived brightness (Hunt, 1998). In this space, Y represents the lightness, while X and Z are two virtual primary components that resemble the red- and blue-sensitive response curves of the cones. However, XYZ does not represent colour gradation in a uniform manner. For this reason, two colour spaces that are non-linear transformations of XYZ, CIE 1976 (L*a*b*), also called CIELAB, and CIE 1976 (L*u*v*), also called CIELUV, were introduced and are adopted in many colour measuring instruments. In the colour measurement of food, L*a*b* colour space is the most used one due to its uniform distribution of colours, and because it is perceptually uniform, i.e., the Euclidean distance between two different colours corresponds approximately to the colour difference perceived by the human eye (Leon, Mery, Pedreschi, & Leon, 2006).
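The non-linear XYZ-to-L*a*b* transformation and the Euclidean colour difference mentioned above can be sketched as follows (a minimal Python illustration of the standard CIE formulas, assuming the D65 reference white; the function names are hypothetical):

```python
import math

# CIE 1976 L*a*b* from XYZ tristimulus values, relative to a reference
# white (here the D65 illuminant: Xn = 95.047, Yn = 100.0, Zn = 108.883).
def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):
    def f(t):
        # Cube root above the CIE threshold, linear segment below it.
        if t > (6.0 / 29.0) ** 3:
            return t ** (1.0 / 3.0)
        return t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = (f(c / w) for c, w in zip((x, y, z), white))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

# Perceived colour difference: Euclidean distance in L*a*b* (Delta E*ab).
def delta_e(lab1, lab2):
    return math.dist(lab1, lab2)

# The reference white itself maps to L* = 100, a* = 0, b* = 0.
print(xyz_to_lab(95.047, 100.0, 108.883))  # (100.0, 0.0, 0.0)
```

The Euclidean distance here is the classic Delta E*ab; it is this approximate perceptual uniformity that makes L*a*b* the preferred space for food colour measurement.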
Colour measurements
Colour is an important object measurement for image understanding and object description, and can be used for quality evaluation and inspection of food products. Colour measurements can be conducted by visual (human) inspection, by traditional instruments such as colourimeters, or by computer vision.
Visual measurements
Qualitative visual assessment is carried out for many operations in existing food colour inspection systems by trained inspectors in well-illuminated rooms, sometimes with the aid of colour atlases or dictionaries (Melendez-Martinez et al., 2005). As a result of visual measurement, a particular description of colour is obtained using a certain vocabulary (Melendez-Martinez et al., 2005). Although human inspection is quite robust even in the presence of changes in illumination, colour perception is subjective and variable; visual inspection is laborious and tedious, suffers from the poor colour memory of observers, depends upon lighting and numerous other factors, and is not suitable for routine large-scale colour measurement (Hutchings, 1999; Leon et al., 2006; McCaig, 2002).
Traditional instrumental measurements
Traditional instruments, such as colourimeters and spectrophotometers, have been used extensively in the food industry for colour measurement (Balaban & Odabasi, 2006). Under a specified illumination environment, these instruments provide a quantitative measurement by simulating the manner in which the average human eye sees the colour of an object (McCaig, 2002).
Colourimeters, such as the Minolta chroma meter, Hunter Lab colourimeter, and Dr. Lange colourimeters, are used to measure the colour of primary radiation sources that emit light and of secondary radiation sources that reflect or transmit external light (Leon et al., 2006; Melendez-Martinez et al., 2005). Their tristimulus values are therefore obtained optically, not mathematically. Colourimeter measurements are rapid and simple, and calibration is achieved using standard tiles at the beginning of the operation (Oliveira & Balaban, 2006).
Spectrophotometers whose spectral range extends into the visible region (VNIR instruments) are also widely used for colour measurement throughout the food and agricultural industries (McCaig, 2002). Spectrophotometers output the spectral distribution of the transmittance or reflectance of the sample, from which the X, Y, Z values are calculated depending on the illuminant, the measurement geometry and the observer (Hutchings, 1994).
Spectroradiometers are used for the measurement of radiometric quantities as a function of wavelength (Melendez-Martinez et al., 2005). The tristimulus values of both spectrophotometers and spectroradiometers are obtained mathematically in accordance with the CIE definitions. Spectroradiometers have the same components as spectrophotometers; the difference is that spectroradiometers use an external light source. Nowadays, spectroradiometers have also been widely used for the quality prediction of many food and agricultural products (Wu et al., 2009; Wu, He, & Feng, 2008; Wu, He, Nie, Cao, & Bao, 2010).
However, although simple colour measurements can be achieved, there are potential disadvantages in using traditional instrumental measurements (Balaban & Odabasi, 2006). One problem is that traditional instruments can only measure a small and fairly uniform area of the sample surface. The sampling location and the number of readings needed to obtain an accurate average colour are therefore important for traditional instrumental measurements (Oliveira & Balaban, 2006). When the surface of a sample has nonhomogeneous colours, the measurement should be repeated to cover the whole surface, and even so, it is still hard to obtain a distribution map of colour. Such measurement is thus quite unrepresentative, making global analysis of the food's surface a difficult task. Another problem is the size and shape of the sample. If the sample is too small to fill the sample window (e.g., a grain of rice), or the measured area is not round (e.g., a shrimp), colour measurements made with traditional instruments may be inaccurate. Moreover, in order to obtain a detailed characterization of a food sample and thereby evaluate its quality more precisely, it is necessary to acquire the colour value of each pixel within the sample surface and from these generate a distribution map of colour (Leon et al., 2006). Such a requirement cannot be achieved using traditional instrumental measurements. This in turn has increased the need for developing automatic pixel-based colour measurement processes in the food industry to replace traditional human evaluation and instrumental measurements, for rapid and non-invasive measurement of colour distribution within food products.
Computer vision measurements
Computer vision is the science that develops the theoretical and algorithmic basis to automatically extract and analyze useful information about an object or scene from an observed image, image set, or image sequence (Gunasekaran, 1996; Sun, 2000; Sun & Brosnan, 2003; Zheng, Sun, & Zheng, 2006a, b; Du & Sun, 2006). As an inspection and evaluation technique that electronically perceives and evaluates an image, computer vision has the advantages of being rapid, consistent, objective, non-invasive, and economical. In computer vision, colour is elementary information stored in the pixels of a digital image. Computer vision extracts quantitative colour information from digital images by using image processing and analysis, achieving rapid and non-contact colour measurement. In recent years, computer vision has been investigated for objectively measuring the colour and other quality attributes of foods (Brosnan & Sun, 2004; Cubero et al., 2011; Du & Sun, 2004; Jackman, Sun, & Allen, 2011). A significant difference between computer vision and conventional colourimetry is the amount of spatial information provided. High spatial resolution enables computer vision to analyze each pixel of the entire surface, calculate the average and standard deviation of colour, isolate and specify appearance, measure nonuniform shapes and colours, select a region of interest flexibly, inspect more than one object at the same time, generate a distribution map of colour, and provide a permanent record by keeping the picture (Balaban & Odabasi, 2006; Leon et al., 2006).
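A minimal sketch of this pixel-based capability, assuming the image has already been segmented into sample and background, might look as follows (numpy-based; the function and array names are illustrative, not from the original text):

```python
import numpy as np

# Pixel-wise colour statistics over a region of interest (ROI), the kind
# of surface-wide summary a colourimeter cannot provide. `image` is an
# H x W x 3 RGB array and `mask` a boolean H x W array marking the
# pixels that belong to the food sample.
def roi_colour_stats(image, mask):
    pixels = image[mask].astype(float)    # N x 3 matrix of ROI pixels
    return pixels.mean(axis=0), pixels.std(axis=0)

# A toy 2 x 2 image: two reddish pixels in the ROI, background ignored.
img = np.array([[[200, 0, 0], [100, 0, 0]],
                [[  0, 0, 0], [  0, 0, 0]]], dtype=np.uint8)
roi = np.array([[True, True], [False, False]])
mean, std = roi_colour_stats(img, roi)
# mean per channel: (150, 0, 0); std per channel: (50, 0, 0)
```

The same per-pixel access also allows generating a full colour distribution map rather than a single averaged reading.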
A digital image is acquired when incident light in the visible spectrum falls on a partially reflective surface and the scattered photons are gathered by the camera lens, converted to electrical signals either by a vacuum tube or by a CCD (charge-coupled device), and saved to hard disk for further image display and analysis. A digital monochrome image is a two-dimensional (2-D) light-intensity function I(x, y). The intensity I, generally known as the grey level, at spatial coordinates (x, y) is proportional to the radiant energy received by the sensor or detector in a small area around the point (x, y) (Gunasekaran, 1996). The interval of grey level from low to high is called a grey scale, which in common practice is numerically represented by a value between 0 (pure black) and L (white) (Gunasekaran, 1996). Image acquisition and image analysis are two critical steps in the application of computer vision. Image acquisition requires scrupulous design of the image capturing system and careful operation to obtain digital images of high quality. Image analysis includes the numerous algorithms and methods available for classification and measurement (Krutz, Gibson, Cassens, & Zhang, 2000). Automatic colour measurement using computer vision has the advantages of superior speed, consistency, accuracy, and cost-effectiveness, and can therefore not only optimize quality inspection but also help in reducing human inconsistency and subjectivity.
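The quantization of I(x, y) into grey levels can be sketched as follows (an illustrative example; the full-scale value is an assumed sensor-specific calibration factor, and L = 255 corresponds to a common 8-bit grey scale):

```python
import numpy as np

# Quantize continuous radiant-energy readings into discrete grey levels,
# as a monochrome sensor does: 0 is pure black and `levels` (here 255,
# the common 8-bit case) is white. `energy` is any 2-D array of
# non-negative readings proportional to the radiant energy at (x, y).
def to_grey_levels(energy, full_scale, levels=255):
    normalized = np.clip(energy / full_scale, 0.0, 1.0)
    return np.round(normalized * levels).astype(np.uint8)

readings = np.array([[0.0, 0.5],
                     [1.0, 2.0]])   # 2.0 saturates the sensor
grey = to_grey_levels(readings, full_scale=1.0)
# grey levels: 0, 128, 255, 255 (the saturated reading clips to white)
```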
Computer vision system
The hardware configuration of a computer vision system generally consists of an illumination device, a solid-state CCD array camera, a frame-grabber, a personal computer, and a high-resolution colour monitor (Fig. 2).
Illumination
As an important prerequisite of image acquisition, illumination can greatly affect the quality of the captured image: different illuminants may yield different stimuli with the same camera. A well-designed illumination system can improve accuracy, reduce the time and complexity of the subsequent image processing steps, lead to success of image analysis, and decrease the cost of an image processing system (Du & Sun, 2004; Gunasekaran, 1996). Fluorescent and incandescent bulbs are the two most widely used illuminants, although there are other useful light sources, such as light-emitting diodes (LEDs) and electroluminescent sources. Because fluorescent light provides a more uniform dispersion of light from the emitting surface and is more efficient at producing intense illumination at specific wavelengths, it is widely used by computer vision practitioners (Abdullah, 2008). Besides the type of illuminant, the position of the illuminant is also important. There are two commonly used geometries for the illuminators, namely the ring illuminator and the diffuse illuminator (Fig. 3). The ring illuminator has a simple geometry and is widely used for general purposes, especially for samples with flat surfaces. The diffuse illuminator, on the other hand, is well suited for imaging food products with spherical shapes, because it provides virtually 180° of diffuse illumination.
Camera
The camera is used to convert photons to electrical signals. CCD and CMOS (complementary metal–oxide–semiconductor) are the two major types of camera sensor, both solid-state imaging devices. Because a lens is used for imaging, pixels in the central part of an image are much more sensitive than those in the peripheral part for both CCD and CMOS sensors. A CCD camera consists of hundreds of thousands of photodiodes (known as pixels) that are made of light-sensitive materials and convert the light energy falling on them into electronic charges. The charges are proportional to the light intensity and are stored in capacitors. A CCD operates in two modes, passive and active. The first mode transfers the charges to a bus line when the select signal is received. In the latter, charges are transferred to a bus line after being amplified to compensate for the limited fill factor of the photodiode. After being shifted out of the detector, the electrical charges are digitized to generate the images. Depending on the application, CCD cameras have different architectures. Interline and frame-transfer are two popular architectures in modern digital cameras, and both are competent for acquiring motion images. The interline CCD uses an additional horizontal shift register to collect and pass on the charge read out from a stack of vertical linear scanners, each comprising photodiodes and a corresponding vertical shift register. The downside of the interline CCD is that the opaque strips on the imaging area decrease the effective quantum efficiency. The frame-transfer design consists of integration and storage
frames. The integration frame acquires an image and transfers the charge to the storage frame, so that the image can be read out slowly from the storage region while the next light signal is integrated in the integration frame to capture a new image. The disadvantage of this architecture is its higher cost, due to the doubled cell area and more complex control electronics required.

Fig. 2. Schematic diagram of a typical computer vision system.

Fig. 3. Two possible lighting geometries: (a) the ring illuminator; (b) the diffuse illuminator.
Although the CCD is the current dominant detector for image acquisition, it is anticipated that CCD technology will be superseded by CMOS technology in the consumer electronics market in the near future. The CMOS image sensor includes both a photodetector and a readout amplifier in each pixel (called an active pixel), which is the major difference between CCD and CMOS (Litwiller, 2005). The CMOS sensor is therefore referred to as an Active Pixel Sensor, in contrast with the Passive Pixel Sensor type contained in CCD arrays (Kazlauciunas, 2001). After a photodiode converts incident photons to electrons, CMOS immediately converts the integrated charge to a voltage signal inside each active pixel, using a set of optically insensitive transistors adjacent to the photodiode. The voltage signals are then read out over wires. A CMOS camera can transfer signals very fast because of these internal wires, compared to the vertical and horizontal registers used by the CCD to shift the charges, making CMOS especially suitable for the high-speed imaging required in online industrial inspection. Moreover, CMOS can access each particular pixel by an X–Y address, owing to the addressability of the wires arranged in rows and columns, so that a region of interest can be extracted from the image. Besides high speed and random addressing, CMOS has other advantages such as low cost, low power consumption, a single power supply, and small size for system integration, which makes it prevail in the consumer electronics market (e.g., low-end camcorders and cell phones) (Qin, 2010). In addition, in CCD technology, signals from one pixel can be affected by another in the same row, which is termed blooming, and a poor pixel within a particular row can interfere with signals from other rows (Kazlauciunas, 2001). CMOS, however, is immune to blooming, because each pixel in a CMOS array is independent of the pixels nearby. The main limit of current CMOS sensors is their higher noise and higher dark current compared with CCDs, which lower the dynamic range and sensitivity (Qin, 2010).
Bayer sensors and three-CCD devices (3CCD) are the two main types of colour image sensor, differing in the way colour is separated. A Bayer filter array over a single CCD is commonly used for capturing digital colour images. The filter array comprises many squares, each covering four pixels with one red filter, one blue filter, and two green filters, because the human eye is more sensitive to the green part of the visible spectrum and less sensitive to the red and blue. The missing colours are interpolated using a demosaicing algorithm. The shortcoming of the Bayer sensor is that its colour resolution is lower than its luminance resolution, although luminance information is measured at every pixel. Better colour separation can be achieved by a 3CCD device, which has three discrete image sensors and a dichroic beam-splitter prism that splits the light into red, green and blue components; each sensor in a 3CCD responds to one of the three colours. 3CCD has higher quantum efficiency and light sensitivity, resulting in enhanced resolution and lower noise, because it captures most of the light entering the aperture, whereas only about one-third of the light is detected through a Bayer mask.
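How a Bayer filter array encodes colour can be illustrated with a deliberately simplified half-resolution reconstruction of an RGGB mosaic (real demosaicing algorithms interpolate to full resolution; this sketch, an assumption for illustration only, just averages each 2 × 2 tile):

```python
import numpy as np

# A simplified "demosaic" for an RGGB Bayer mosaic: each 2 x 2 tile
# (R G / G B) becomes one RGB pixel, averaging the two green samples.
# Real cameras interpolate the missing colours at every pixel instead;
# this only shows how the filter array encodes colour information.
def demosaic_half(mosaic):
    r = mosaic[0::2, 0::2].astype(float)
    g = (mosaic[0::2, 1::2].astype(float) + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2].astype(float)
    return np.stack([r, g, b], axis=-1)

# One RGGB tile: R = 250, the two greens are 100 and 120, B = 30.
tile = np.array([[250, 100],
                 [120,  30]], dtype=np.uint8)
rgb = demosaic_half(tile)
# reconstructed pixel: R = 250, G = 110 (mean of 100 and 120), B = 30
```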
Frame-grabber
Besides the illumination and the camera, the frame grabber is another piece of hardware that should be considered for image acquisition. When only analogue cameras were available, frame grabbers provided the functions of digitization, synchronization, data formatting, local storage, and data transfer from the camera to the computer to generate a bitmap image. A typical frame-grabber card for analogue cameras consists of signal-conditioning elements, an A/D converter, a look-up table, an image buffer and a PCI bus interface. Nowadays, digital cameras are generally used in higher-end applications. These cameras need a frame grabber neither for digitization nor for transferring data to the host computer; instead, cameras are available with CameraLink, USB, Ethernet and IEEE 1394 (FireWire) interfaces that simplify connection to a PC. Nevertheless, frame grabbers are still alive and well, but they are different from what they used to be. Their role today has become much broader than just image capture and data transfer. Modern frame grabbers include many special features, such as acquisition control (trigger inputs and strobe outputs), I/O control, tagging incoming images with unique time stamps, formatting data from multitap cameras into seamless image data, image correction and processing such as Bayer inversion filters, image authentication and filtering, and communications related to performance monitoring.
Colour space transformations
Three aspects determine colour, namely the type of emission source that irradiates an object, the physical properties of the object itself (which reflects the radiation subsequently detected by the sensor), and the in-between medium (e.g., air or water) (Menesatti et al., 2012). In general, a computer vision system captures the colour of each pixel within the image of the object using three colour sensors (or one sensor with three alternating filters) per pixel (Forsyth & Ponce, 2003; Segnini, Dejmek, & Oste, 1999a). The RGB model is the most often used colour model, in which each sensor captures the intensity of the light in the red (R), green (G) or blue (B) spectrum, respectively (Leon et al., 2006). However, the RGB model is device-dependent and not identical to the intensities of the CIE system (Mendoza & Aguilera, 2004). Another problem of the RGB model is that it is not a perceptually uniform space: the differences between colours (i.e., Euclidean distances) in RGB space do not correspond to colour differences as perceived by humans (Paschos, 2001). Standard RGB (sRGB) and L*a*b* are commonly applied in quantifying the standard colour of food (Menesatti et al., 2012). sRGB is a device-independent colour model whose tristimulus values (sR, sG, sB) reproduce the same colour on different devices and represent linear combinations of CIE XYZ. It is therefore used to define the mapping between RGB (non-linear signals) from a computer vision system and a device-independent system such as CIE XYZ (Mendoza, Dejmek, & Aguilera, 2006). sRGB is calculated from the RGB values measured by computer vision, based on D65 illumination conditions and a power function with a gamma value of 2.4. The camera sensors (e.g., CCD or CMOS) generate output signals whose rendering is device-dependent, since display devices have different colour ranges. In order to overcome this problem, sRGB values are often transformed to other colour spaces such as L*a*b* (Menesatti et al., 2012). Moreover,
even the result of such a transformation is device-dependent (Ford & Roberts, 1998). In many studies, a linear transform defining a mapping between RGB signals from a computer vision camera and a device-independent system such as L*a*b* or L*u*v* was determined to ensure correct colour reproduction (Mendoza & Aguilera, 2004; Paschos, 2001; Segnini et al., 1999a). However, such a transform of RGB into L*a*b* units does not involve a calibration process, but only uses an absolute model with known parameters. Because RGB colour measurement depends on external factors (sensitivity of the camera sensors, illumination, etc.), most cameras, even of the same type, do not exhibit consistent responses (Ilie & Welch, 2005), and the parameters of the absolute model vary from one case to another. Therefore, the conversion from RGB to L*a*b* cannot be done directly using a standard formula (Leon et al., 2006). For this reason, Leon et al. (2006) presented a methodology to transform device-dependent RGB colour units into device-independent L*a*b* colour units. Five models, namely direct, gamma, linear, quadratic and neural network, were used to carry out the transformation from RGB to L*a*b*, so that the values delivered by the model are as similar as possible to those delivered by a colourimeter over homogeneous surfaces. The best results, with small errors (close to 1%), were achieved with the quadratic and neural network models. However, although the methodology presented is general, i.e., it can be used in every computer vision system, it should be noted that the results obtained after calibration for one system (e.g., system A) cannot be used for another system (e.g., system B); a new calibration procedure needs to be conducted for each new computer vision system (Leon et al., 2006).
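The idea behind such a quadratic calibration model can be sketched as a least-squares fit from quadratic RGB features to colourimeter L*a*b* readings on reference patches (the exact feature set and fitting procedure used by Leon et al. (2006) may differ; this numpy sketch is an illustrative assumption):

```python
import numpy as np

# A sketch of a quadratic RGB -> L*a*b* calibration in the spirit of
# Leon et al. (2006): fit a least-squares mapping from quadratic RGB
# features to the L*a*b* values a colourimeter measured on reference
# colour patches, then apply it to new pixels from the same system.
def quad_features(rgb):
    # rgb: N x 3 array; features: constant, linear, square, cross terms.
    r, g, b = rgb.T.astype(float)
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

def fit_calibration(rgb_patches, lab_readings):
    # One 10-coefficient column per output channel (L*, a*, b*).
    coeffs, *_ = np.linalg.lstsq(quad_features(rgb_patches),
                                 lab_readings, rcond=None)
    return coeffs

def apply_calibration(coeffs, rgb):
    return quad_features(rgb) @ coeffs
```

Consistent with the point above, the fitted coefficients are only valid for the camera, settings and illumination they were estimated under; a new system requires refitting on its own reference patches.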
Colour calibration methods
The quality of digital image is principally dened by its
reproducibility and accuracy (Prasad & Roy, 2003).
Without reproducibility and accuracy of images, any at-
tempt to measure colour or geometric properties is of little
use (Van Poucke, Haeghen, Vissers, Meert, & Jorens,
2010). In general, a computer vision camera employs a sin-
gle array of light-sensitive elements on a CCD chip, with
a lter array that allows some elements to see red (R),
some green (G) and some blue (B). White balance is con-
ducted to measure relative intensities manually or automat-
ically (Mendoza et al., 2006). A digital colour image is then
generated by combining three intensity images (R, G, and
B) in the range 0e255. As being device-dependent, RGB
signals produced by different cameras are different for
the same scene. These signals will also change over time
as they are dependent on the camera settings and scenes
(Van Poucke et al., 2010). Therefore, measurements of col-
our and colour differences cannot be conducted on RGB im-
ages directly. On the other hand, different light sources present different emission spectra dominated by diverse wavelengths that affect those reflected by the object under
analysis (Costa et al., 2009). Therefore, in order to minimize the effects of illuminants and camera settings, colour calibration prior to photo/image interpretation is required in food processing to quantitatively compare samples' colour during workflow with many devices (Menesatti et al., 2012). sRGB is a device-independent colour space that has a relationship with the CIE colourimetric colour spaces. Most of the variability introduced by the camera and illumination conditions could be eliminated by finding the relationship between the varying and unknown camera RGB and the sRGB colour space (Van Poucke et al., 2010). Different calibration algorithms defining the relationship between the input RGB colour space of the camera and the sRGB colour space have been published using various methods (Van Poucke et al., 2010). Several software packages are available to perform colour calibration using a colour profile assignable to the image that deals with different devices (e.g., ProfileMaker, Monaco Profiler, EZcolour, i1Extreme and many others), but they are often too imprecise for scientific purposes. Therefore, polynomial algorithms, multivariate statistics, neural networks, and their combinations have been proposed for colour calibration (Menesatti et al., 2012). Mendoza et al. (2006) transferred RGB into sRGB
according to IEC 61966-2-1 (1999) for the colour measure-
ments of agricultural foods. Costa et al. (2009) compared
three calibration systems, namely partial least squares
(PLS), second order polynomial interpolation (POLY2),
and ProfileMaker Pro 5.0 software (PROM) under eight different light conditions. Results showed that PLS and POLY2 achieved better calibration than the conventional software (PROM). Van Poucke et al. (2010) used three 1D
look-up tables and polynomial modelling to ensure repro-
ducible colour content of digital images. A reference
chart called the MacBeth Colour Checker Chart Mini
[MBCCC] (GretagMacBeth AG, Regensdorf, Switzerland)
was used in the colour target-based calibration by trans-
forming the input RGB colour space into the sRGB colour
space.
D. Wu, D.-W. Sun / Trends in Food Science & Technology 29 (2013) 5-20
Gurbuz, Kawakita, and Ando (2010) proposed a colour calibration method for multi-camera systems by utilizing a set of robustly detected stereo correspondences between camera pairs, resulting in a 3 × 4 coefficient matrix multiplier that can be used for colour calibration. Costa
et al. (2012) calibrated digital images of whole gilthead
seabream using a PLS approach with a standard colour
chart. Recently, Menesatti et al. (2012) applied the 3D
Thin-Plate Spline warping approach to calibrate colours
in sRGB space. The performance of this method was compared with two other common approaches, namely a commercial calibration system (ProfileMaker) and partial least squares analysis, under two different cameras and four different light conditions. Compared to the commercial method (ProfileMaker) and the multivariate PLS approach, the Thin-Plate Spline approach significantly diminished both the distances from the reference and the inter-distances in the setup experiment, and was the most robust against lighting conditions and sensor typology (Menesatti et al., 2012).
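As a minimal sketch of the colour target-based calibration described above, an affine correction can be estimated by least squares from the patches of a reference chart. The function names are hypothetical, and the model is a simplification of the published polynomial and multivariate methods:

```python
import numpy as np

def calibration_matrix(measured, reference):
    """Estimate a 3x4 affine matrix mapping camera RGB to reference values.

    measured:  (n, 3) camera RGB values of the chart patches.
    reference: (n, 3) published reference values of the same patches
               (e.g. sRGB values of a 24-patch colour chart).
    """
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])  # bias column
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return M.T  # reference ~= M @ [R, G, B, 1]^T

def apply_calibration(M, pixels):
    """Apply the correction matrix to an (n, 3) array of pixels."""
    X = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return X @ M.T
```

In practice, higher-order polynomial terms or PLS regression, as used in the studies cited above, can replace the affine model when the channel cross-talk is nonlinear.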
Colour constancy and illumination estimation
Colour constancy is the phenomenon by which per-
ceived object colour tends to stay constant under changes
in illumination (Ling & Hurlbert, 2008). Colour constancy
is not a property of objects; it is a perceptual phenomenon,
the result of mechanisms in the eye and brain (Hurlbert,
2007). Colour constancy is important for object recogni-
tion, scene understanding, image reproduction as well as
digital photography (Li, Xu, & Feng, 2009). There are three
factors affecting the image recorded by a camera, namely
the physical content of the scene, the illumination incident
on the scene, and the characteristics of the camera
(Barnard, Cardei, & Funt, 2002). An object can appear a different colour under changing illumination. The objective of computational colour constancy is to find a nontrivial illuminant
invariant description of a scene from an image taken under
unknown lighting conditions, either by directly mapping
the image to a standardized illuminant invariant representa-
tion, or by determining a description of the illuminant
which can be used for subsequent colour correction of the
image (Barnard, Cardei et al., 2002). The procedure of computational colour constancy includes two steps: estimating the illumination parameters and using these parameters to get the object's colour under a known canonical light source (Li et al., 2009). The first step, illumination estimation, is important in colour constancy computation (Li et al., 2009). So far, a number of leading colour constancy algorithms have been proposed that focus on illumination estimation (Li et al., 2009). These algorithms can be generally divided into two major groups: unsupervised approaches and supervised ones. The algorithms falling in the first category include Max-RGB, the grey world algorithm, Shades of Grey (SoG), and Grey Surface Identification (GSI). The other colour constancy category includes training-based solutions, such as Bayesian colour constancy, the neural network method, and support vector regression.
Recently, Shi, Xiong, and Funt (2011) proposed a method called thin-plate spline interpolation to estimate the colour of the incident illumination. The resulting illumination estimate can be used to provide colour constancy under changing illumination conditions and automatic white balancing for digital cameras (Shi et al., 2011). A review of these algorithms and their comparison can be found elsewhere (Barnard, Cardei et al., 2002; Barnard, Martin, Coath, & Funt, 2002).
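Among the unsupervised approaches listed above, the grey world algorithm is the simplest: it assumes the average reflectance of a scene is achromatic, so the per-channel mean of the image serves as the illuminant estimate. A minimal sketch under that assumption:

```python
import numpy as np

def grey_world(image):
    """Grey-world colour constancy.

    Estimate the illuminant as the per-channel mean, then rescale the
    channels (von Kries-style gains) so the corrected image averages
    to neutral grey.

    image: (h, w, 3) array of linear RGB values.
    Returns (corrected_image, estimated_illuminant).
    """
    illuminant = image.reshape(-1, 3).mean(axis=0)
    gain = illuminant.mean() / illuminant
    return image * gain, illuminant
```

Max-RGB replaces the mean with the per-channel maximum, and Shades of Grey generalizes both through a Minkowski norm over the pixels.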
Applications
Nowadays, computer vision has attracted extraordinary interest as a key inspection method for non-destructive and rapid colour measurement of food and food products. If implemented in processing lines, computer vision systems will provide precise inspection and increase throughput in
the production and packaging process. Table 1 summarizes
applications of using computer vision for food colour
evaluation.
Meat and fish
Beef
Freshness is an important factor for consumers when buying meat (Maguire, 1994). Red and bright red lean is associated by consumers with fresh beef, while a brownish colour is considered to be an indicator of stale or spoiled beef (Larrain, Schaefer, & Reed, 2008). Colourimeters have been intensively studied for determining colour differences of fresh meat using various CIE colour expressions, such as lightness (L*), redness (a*), yellowness (b*), hue angle, and chroma (Larrain et al., 2008). However, these works have the limitation of scanning only a small surface area. Computer
vision is considered as a promising method for predicting
colour of meat (Mancini & Hunt, 2005; Tan, 2004). Back in the 1980s, computer vision was used to detect colour changes during the cooking of beef ribeye steaks (Unklesbay, Unklesbay, & Keller, 1986). The mean and standard deviation of the red, green and blue colours were found to be sufficient to differentiate between 8 of 10 classes of steak doneness. Later, Gerrard,
Gao, and Tan (1996) determined muscle colour of beef
ribeye steaks using computer vision. Means of red and green (μR and μG) were significant (coefficient of determination R² = 0.86) for the prediction of colour scores, which were determined using the USDA lean colour guide.
In order to improve the results, Tan, Gao, and Gerrard
(1999) used fuzzy logic and artificial neural network techniques to analyze the colour scores, and a 100% classification rate was achieved. In another work, Larrain et al.
(2008) applied computer vision to estimate CIE colour co-
ordinates of beef as compared to a colourimeter. In their
work, CIE L*, a*, and b* were measured using a colourimeter (Minolta Chromameter CR-300, Osaka, Japan) with a 1 cm aperture, illuminant C, and a 2° viewing angle. RGB values obtained from computer vision were transformed to the CIE L*a*b* colour space using the following
Table 1. Summary of computer vision applications for food colour evaluation.

| Category | Application | Accuracy | References |
| --- | --- | --- | --- |
| Beef | Detection of colour changes during cooking | | Unklesbay et al., 1986 |
| | Prediction of colour scores | R² = 0.86 | Gerrard et al., 1996 |
| | Prediction of sensory colour responses | 100% | Tan et al., 1999 |
| | Estimation of CIE colour coordinates as compared to a colourimeter | R² = 0.58 for L*; R² = 0.96 for a*; R² = 0.56 for b*; R² = 0.94 for hue angle; R² = 0.93 for chroma | Larrain et al., 2008 |
| | Prediction of official colour scores | 86.8% using MLR; 94.7% using SVM | Sun et al., 2011 |
| Pork | Evaluation of fresh pork loin colour | R = 0.52 using MLR; R = 0.75 using NN | Lu et al., 2000 |
| | Prediction of colour scores | 86% | Tan et al., 2000 |
| | Prediction of the sensory visual quality | | O'Sullivan et al., 2003 |
| Fish | Detection of colour change | | Oliveira & Balaban, 2006 |
| | Colour measurement as compared to a colourimeter | | Yagiz et al., 2009 |
| | Prediction of colour score assigned by a sensory panel | R = 0.95 | Quevedo et al., 2010 |
| | Prediction of colour score as compared to the Roche cards and a colourimeter | Similar to Roche SalmoFan linear ruler | Misimi et al., 2007 |
| Orange juice | Colour evaluation | R = 0.96 for hue; R = 0.069 for chroma; R = 0.92 for lightness | Fernandez-Vazquez et al., 2011 |
| Wine | Measurement of colour appearance | R² = 0.84 for lightness, R² = 0.89 for colourfulness, R² = 0.98 for hue (vs. visual estimates); R² = 0.99 for lightness, R² = 0.90 for colourfulness, R² = 0.99 for hue (vs. spectroradiometer) | Martin et al., 2007 |
| Beer | Determination of colour as compared to colourimetry | | Sun et al., 2004 |
| Potato chip | Colour measurement as compared to two colourimeters | | Scanlon et al., 1994 |
| | Colour measurement as compared by the sensory assessors | | Segnini et al., 1999a |
| | Colour measurement as compared by the sensory assessors | R > 0.79 between L* and most of the sensory colour attributes | Segnini et al., 1999b |
| | Colour measurement as compared by the sensory assessors | R = 0.9711 (linear) and R = 0.9790 (quadratic) for smooth potato chips; R = 0.7869 (linear) and R = 0.8245 (quadratic) for undulated potato chips | Pedreschi et al., 2011 |
| | Development of a computer vision system to measure the colour of potato chips | | Pedreschi et al., 2006 |
| Wheat | Measurement of the colour of the seed coat as compared to the spectrophotometer | High linear correlations (p < 0.05) | Zapotoczny & Majewska, 2010 |
| Banana | Measurement of the colour as compared to a colourimeter | R² = 0.80 for L*; R² = 0.97 for a*; R² = 0.61 for b* | Mendoza & Aguilera, 2004 |

MLR: multiple linear regression. SVM: support vector machine. R: correlation coefficient. R²: coefficient of determination.
steps. RGB was first converted to XYZ under D65 using the matrix transform (Pascale, 2003):

\[
\begin{bmatrix} X_{D65} \\ Y_{D65} \\ Z_{D65} \end{bmatrix}
=
\begin{bmatrix}
0.4125 & 0.3576 & 0.1804 \\
0.2127 & 0.7152 & 0.0722 \\
0.0193 & 0.1192 & 0.9503
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{1}
\]

The obtained XYZ_D65 was then converted to XYZ_C using the Bradford matrix transform (Pascale, 2003):

\[
\begin{bmatrix} X_{C} \\ Y_{C} \\ Z_{C} \end{bmatrix}
=
\begin{bmatrix}
1.0095 & 0.007 & 0.0128 \\
0.0123 & 0.9847 & 0.0033 \\
0.0038 & 0.0072 & 1.0892
\end{bmatrix}
\begin{bmatrix} X_{D65} \\ Y_{D65} \\ Z_{D65} \end{bmatrix} \tag{2}
\]

Finally, XYZ_C was converted into CIE_C L*a*b* using the following equations (Konica Minolta, 1998):

\[
\begin{aligned}
L^{*} &= 116\,(Y/Y_{n})^{1/3} - 16 \\
a^{*} &= 500\left[(X/X_{n})^{1/3} - (Y/Y_{n})^{1/3}\right] \\
b^{*} &= 200\left[(Y/Y_{n})^{1/3} - (Z/Z_{n})^{1/3}\right]
\end{aligned} \tag{3}
\]

where X_n, Y_n, and Z_n are the values of X, Y, and Z for the illuminant used, in this case 0.973, 1.000, and 1.161, respectively. Also, (X/X_n)^{1/3} was replaced by [7.787(X/X_n) + 16/116] if X/X_n was below 0.008856; (Y/Y_n)^{1/3} was replaced by [7.787(Y/Y_n) + 16/116] if Y/Y_n was below 0.008856; and (Z/Z_n)^{1/3} was replaced by [7.787(Z/Z_n) + 16/116] if Z/Z_n was below 0.008856
(Konica Minolta, 1998). When L*a*b* were transformed,
hue angle and chroma were calculated from a* and b*
values. Regressions of the colourimeter on computer vision for a*, hue angle, and chroma had R² values of 0.96, 0.94, and 0.93, respectively, while the R² values for L* and b* were only 0.58 and 0.56. Recently, Sun, Chen, Berg, and Magolski (2011) analyzed 21 colour features obtained from images of fresh lean beef for predicting official beef colour scores. Multiple linear regression (MLR) correctly classified 86.8% of beef muscle colour scores, and better performance of 94.7% was achieved using a support vector machine (SVM), showing that the computer vision technique can provide an effective tool for predicting colour scores of beef muscle.
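The conversion chain of Eqs. (1)-(3) can be sketched as follows. The implementation assumes RGB values scaled to [0, 1], uses the matrix entries as printed, and the function names are illustrative:

```python
import numpy as np

# Eq. (1): linear RGB (scaled to [0, 1]) to XYZ under D65.
M_RGB_TO_XYZ = np.array([[0.4125, 0.3576, 0.1804],
                         [0.2127, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9503]])

# Eq. (2): Bradford adaptation from D65 to illuminant C (entries as printed).
M_BRADFORD = np.array([[1.0095, 0.0070, 0.0128],
                       [0.0123, 0.9847, 0.0033],
                       [0.0038, 0.0072, 1.0892]])

WHITE_C = np.array([0.973, 1.000, 1.161])  # Xn, Yn, Zn for illuminant C

def _f(t):
    """CIE cube-root function with the linear branch below 0.008856."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(rgb):
    """Convert an (n, 3) array of [0, 1] RGB triplets to L*a*b* (Eqs. 1-3)."""
    xyz = rgb @ M_RGB_TO_XYZ.T          # Eq. (1)
    xyz = xyz @ M_BRADFORD.T            # Eq. (2)
    fx, fy, fz = (_f(xyz[:, i] / WHITE_C[i]) for i in range(3))
    L = 116.0 * fy - 16.0               # Eq. (3)
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=1)
```

The linear branch of the CIE function handles very dark colours, where the cube root would exaggerate sensor noise.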
Pork
In addition to beef colour, fresh pork colour was also
evaluated by using computer vision (Tan, 2004). Early
work was carried out by Lu, Tan, Shatadal, and Gerrard
(2000), who applied computer vision to evaluate fresh
pork loin colour. Colour image features analyzed in this study included the mean (μR, μG, and μB) and standard deviation (σR, σG, and σB) of the red, green, and blue bands of the segmented muscle area. Both MLR and neural network
(NN) models were established to determine the colour
scores based on the image features as inputs. The correlation coefficient between the predicted and the sensory colour scores was 0.52 for the MLR model, with 84.1% of the 44 pork loin samples having prediction errors lower than 0.6, which was considered negligible from a practical viewpoint. For the NN model, 93.2% of the samples had prediction errors of 0.6 or lower, with a correlation coefficient of 0.75. Results showed that a computer vision system is an efficient tool for measuring the sensory colour of fresh
pork. Later, Tan, Morgan, Ludas, Forrest, and Gerrard (2000) used computer vision to predict colour scores of fresh loin chops, which were visually assessed by an untrained panel in three separate studies. After training on pork images classified by the panel, the computer vision system was capable of classifying pork loin chops with up to 86% agreement with visually assessed colour scores. In another study, the effectiveness of computer vision and a colourimeter was compared in predicting the sensory visual quality of pork meat patties (M. longissimus dorsi) as determined by a trained and an untrained sensory panel (O'Sullivan et al., 2003). Compared to the colourimeter, computer vision had a higher correlation with the sensory terms determined by both trained and untrained sensory panelists. This was because the entire surface of the sample was measured by computer vision, and therefore computer vision took a more representative measurement compared to the colourimeter.
Fish
Consumers commonly purchase fish based on visual appearance (colour). Gormley (1992) found that consumers associate the colour of fish products with the freshness of a product having better flavour and higher quality. Colour charts, such as the SalmoFan card (Hoffmann-La Roche, Basel, Switzerland), are generally used for colour assessment in the fish industry (Quevedo et al., 2010). However, such measurement is laborious, tedious, subjective, and time-consuming. Quevedo et al. (2010) developed a computer vision method to assign colour scores to salmon fillets according to the SalmoFan card. The computer vision system was calibrated in order to obtain L*a*b* from RGB using 30 colour charts and 20 SalmoFan cards. Calibration errors for L*a*b* were 2.7%, 1%, and 1.7%, respectively, with a general error range of 1.83%. On the basis of the calibrated transformation matrix, a high correlation coefficient of 0.95 was obtained between the SalmoFan score assigned by computer vision and the sensory panel. These good results showed the potential of using the computer vision technique to grade salmon fillets based on colour. In another
study, Misimi, Mathiassen, and Erikson (2007) compared the results of computer vision with the values determined manually using the Roche SalmoFan lineal ruler and the Roche colour card. The results demonstrated that the computer vision method evaluated colour as well as the Roche SalmoFan lineal ruler. This study also found that the colour values generated by the chromameter had large deviations in mean value from those generated by computer vision. This was due to the brighter illumination used by the computer vision setup and the different algorithms used to convert RGB into L*a*b* for the two methods
(Misimi et al., 2007). The performance of a computer vision system and a colourimeter was also compared in measuring the colour of uncooked fillets from Gulf of Mexico sturgeons fed three different diets, during storage on ice for 15 days (Oliveira & Balaban, 2006). In order to do the comparison, ΔE values were calculated from the L*a*b* values measured using both the computer vision system and the colourimeter. The ΔE value was used to measure the total colour change, which was calculated by the following function:
\[
\Delta E = \sqrt{(L_{o} - L_{i})^{2} + (a_{o} - a_{i})^{2} + (b_{o} - b_{i})^{2}} \tag{4}
\]
where the subscript o refers to the values at time 0, and i refers to the values at 5, 10, or 15 days. ΔE values determined using computer vision showed colour change over storage time, which was in accordance with the mild colour changes visually observed in the images of the centre slices of the sturgeon fillets. However, it was hard to find such colour change using the colourimeter. Moreover, there were significant differences in ΔE values (p < 0.05) between the instruments, except for day 0. The difference could be due to the different average daylight illuminants used, namely D65 with a colour temperature of 6504 K for the colourimeter and D50 with a colour temperature of 5000 K for the
machine vision system. Similarly, Yagiz, Balaban, Kristinsson, Welt, and Marshall (2009) compared a Minolta colourimeter and a machine vision system in measuring the colour of irradiated Atlantic salmon. Significantly higher readings were obtained by the computer vision system for L*, a*, and b* values than by the Minolta colourimeter. Visual comparison was then conducted to illustrate the actual colours to evaluate the measurements of the two instruments. The colour represented by the computer vision system was much closer to the average real colour of Atlantic salmon fillets, while that measured using the colourimeter was purplish based on average L*, a*, b* values (Fig. 4).
The differences between colours measured by computer
vision and colourimeter in this study (Yagiz et al., 2009)
were similar to those of the study carried out by Oliveira
and Balaban (2006). However, unlike Oliveira and Balaban (2006), who used different illuminants, Yagiz et al. (2009) used the same illuminant, i.e., D65 with a colour temperature of 6504 K, for both instruments. In addition, the standard red plates they used for the calibration of the two instruments had similar L*, a*, b* values. Hence, the authors (Yagiz et al., 2009) recommended caution in reporting colour values measured by any system, even when the reference tiles were measured correctly. Various factors can affect the colour readings, such as the surface roughness and texture, the amount of surface shine, and the geometry of the measuring instrument. It is recommended to visually compare the colour formed by the L*, a*, b* values read from any device with the observed colour of the sample.
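The total colour change of Eq. (4), the CIE76 colour difference ΔE used in the storage studies above, is simply a Euclidean distance in L*a*b* space:

```python
import math

def delta_e(lab_a, lab_b):
    """Total colour difference (Eq. 4) between two L*a*b* triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_a, lab_b)))
```

For example, delta_e((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)) returns 5.0. Differences of roughly 2-3 units are often treated as visually noticeable, although, as the studies above show, the practical threshold depends on the instrument and illuminant.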
Liquid food products
Orange juice
Some studies have revealed that the colour of orange juice is related to consumers' perception of flavour, sweetness, and other quality characteristics (Fernandez-Vazquez, Stinco, Melendez-Martinez, Heredia, & Vicario, 2011). Colour is found to influence sweetness in orange drinks and affects the intensity of typical flavour in most fruit drinks (Bayarri, Calvo, Costell, & Duran, 2001). Instead of subjective visual evaluation, traditional instruments such as the colourimeter have been used for the objective colour evaluation of orange juice (Melendez-Martinez et al., 2005). New advances in computer vision offer the possibility of evaluating colour in terms of millions of pixels at relatively low cost. Fernandez-Vazquez et al. (2011) explored the relationship between computer vision and sensory evaluation of the colour attributes (lightness, chroma, and hue) in orange juices. Hue (R = 0.96) and lightness (R = 0.92) were well correlated between the panelists' colour evaluation and the image values, but chroma was not (R = 0.069). The poor measurement of chroma was probably due to the fact that it is not an intuitive attribute.
Alcoholic beverage
Colour, which is one of the main parameters of the quality of wines, affects the determination of aroma, odour, variety, and the overall acceptability by consumers (Martin, Ji, Luo, Hutchings, & Heredia, 2007). Martin et al. (2007) measured the colour appearance of red wines using a calibrated computer vision camera for various wines with reference to the change of depth. The results from computer vision had good correlations with visual estimates for lightness (R² = 0.84), colourfulness (R² = 0.89), and hue (R² = 0.98), and with a Minolta CS-1000 tele-spectroradiometer (R² = 0.99 for lightness, R² = 0.90 for colourfulness, and R² = 0.99 for hue). In another study, Sun, Chang, Zhou, and Yu (2004) investigated computer vision for determining beer colour as compared
Fig. 4. Colour representations of Minolta and machine vision reading results and actual pictures of differently treated salmon fillets and standard red plate (Yagiz et al., 2009).
to the European Brewery Convention (EBC) colourimetry.
A high positive correlation was found between colours
measured by computer vision and those determined by us-
ing spectrophotometry and colourimetry, demonstrating the
feasibility of determining beer colour using computer
vision. The computer vision measurement was highly repeatable, with a standard deviation of zero for measuring the colour of beer.
Other applications
Colour of potato chips is an important attribute in the definition of quality for the potato processing industry and is strictly related to consumer perception (Pedreschi, Leon, Mery, & Moyano, 2006; Segnini, Dejmek, & Oste, 1999b; Pedreschi, Bunger, Skurtys, Allen, & Rojas, 2012). As an early research effort, Scanlon,
Roller, Mazza, and Pritchard (1994) used computer vision
to characterise the colour of chips. On the basis of mean grey level values from specific regions of potato chips, it was feasible to distinguish differences in chip colour from potatoes stored at the two temperatures and to discriminate different frying times for potato chips that had been stored at 5 °C. Good relationships were obtained between colour assessed by mean grey level and colour measured by the Agtron M31A colour meter and Hunter Lab D25L-2 colourimeter. Later, Segnini et al. (1999a) developed
a new, easy, and inexpensive procedure to quantify the colour of potato chips by using the computer vision technique. There was a clear relationship between the obtained L*, a*, or b* and the scale assessed by human eyes. The method was less influenced by the undulating surface of the chips and was not sensitive to light intensity. In another study,
Segnini et al. (1999b) investigated the potential of using
computer vision for measuring colour of commercial potato
chips as compared to sensory analysis. There was a good relationship (R > 0.79) between L* and most of the sensory colour attributes, which included yellow colour, burnt aspect, and sugar-coloured aspect. The a* attribute also showed a good relationship with burnt aspect, while the b* attribute did not significantly correlate with any of the sensory parameters (p > 0.05). Recently, Pedreschi, Mery,
Bunger, and Yanez (2011) established the relationships between colour measured by sensory assessors and colour determined objectively in L*, a*, b* units by a computer vision system. Good relationships were found for smooth potato chips using both linear (R = 0.9711) and quadratic (R = 0.9790) models, while undulated chips only had R values of 0.7869 and 0.8245 using the linear and quadratic methods, respectively.
Zapotoczny and Majewska (2010) measured the col-
our of the seed coat of wheat kernels using computer vi-
sion. The colour of the seed coat was saved in RGB space
after image acquisition, and was then transformed into
XYZ and L*a*b*, which enabled the computation of the
hue and saturation of colour. After image analysis, high
linear correlations ( p < 0.05) were found between colour
measurements of the seed coat performed by computer
vision and spectrophotometer. The results of this study
showed that the colour of the seed coat of wheat kernels
can be determined by computer vision instead of
spectrophotometry.
Mendoza and Aguilera (2004) implemented computer vision to measure the colour of bananas at different ripening stages. There was a good correlation (R² = 0.97) between a* values obtained with the computer vision system and the Hunter Lab colourimeter, while smaller correlation coefficients were obtained for L* (R² = 0.80) and b* (R² = 0.61) values. This difference between the two methods was mainly due to the fact that measurements with the colourimeter did not extend over the whole surface of the bananas, which had nonhomogeneous colours during ripening, in particular at the ends of the bananas. On the other hand, the computer vision system was able to assess the overall colour change during ripening, similar to human perception. Recently, Hashim et al. (in press) used computer vision to detect colour changes in bananas during the appearance of chilling injury symptoms. The raw RGB values obtained were transformed to normalized rgb and CIE L*a*b* space to remove the brightness from the colour and to distinguish colour in a way similar to human perception. Results show that the r and g values in the normalized rgb colour space have a strong correlation with visual assessment.
Quantification of colour nonhomogeneity
Colour nonhomogeneity is an important appearance attribute, and its quantitative measurement is required for most food products, which have nonuniform colours. However, colourimeters fail for nonuniform colours because only the average colour of food products can be measured by colourimeters. For this reason, Balaban (2008) applied the computer vision technique to quantify uniform or nonuniform colours of food products. Several image analysis methods were applied, which included colour blocks, contours, and colour change index (CCI).
The calculation of colour blocks included three steps: firstly, the number of colours in the RGB colour space was reduced by dividing each colour axis into either 4 (4 × 4 × 4 = 64 colour blocks), 8 (8 × 8 × 8 = 512 colour blocks), or 16 (16 × 16 × 16 = 4096 colour blocks) divisions; secondly, the number of pixels that fell within a colour block was counted, and the percentage of that colour was calculated based on the total view area (total number of pixels) of the object; and finally, an appropriate threshold was set to consider only those colour blocks that had percent areas above that threshold. On the basis of the set threshold, the higher the number of colour blocks, the more nonhomogeneous the colour is.
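The three steps of the colour-block method can be sketched as below; the function name and the 1% default threshold are illustrative choices, not values from Balaban (2008):

```python
import numpy as np

def colour_block_count(pixels, divisions=8, threshold=0.01):
    """Count occupied colour blocks after quantizing each RGB axis.

    pixels: (n, 3) uint8 RGB values of the object (background excluded).
    divisions: cells per axis (4, 8, or 16 -> 64, 512, or 4096 blocks).
    threshold: minimum fraction of the view area a block must cover.
    """
    # Step 1: reduce the number of colours by quantizing each axis.
    bins = (pixels // (256 // divisions)).astype(int)
    # Step 2: count pixels per block as a fraction of the view area.
    idx = bins[:, 0] * divisions ** 2 + bins[:, 1] * divisions + bins[:, 2]
    fractions = np.bincount(idx, minlength=divisions ** 3) / pixels.shape[0]
    # Step 3: keep only blocks above the threshold.
    return int(np.sum(fractions >= threshold))
```

A higher count indicates a more nonhomogeneous colour, mirroring the interpretation given above.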
The calculation of colour contours included two steps: firstly, colour attributes lower than or higher than a given threshold, or attributes between two thresholds, were identified; secondly, the percentage of pixels within contours based on the total view area of an object was calculated. The colours of defective areas, such as dark spots, could be quantified based on the calculation of contours.
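The contour calculation reduces to thresholding a single colour attribute over the object pixels and reporting the covered percentage of the view area; the helper below is a hypothetical sketch:

```python
import numpy as np

def contour_area_percentage(values, low=None, high=None):
    """Percentage of object pixels whose colour attribute is above `low`,
    below `high`, or between both thresholds (contour-style analysis).

    values: 1-D array of a colour attribute (e.g. L*) for the object pixels.
    """
    mask = np.ones(values.shape, dtype=bool)
    if low is not None:
        mask &= values > low
    if high is not None:
        mask &= values < high
    return 100.0 * mask.sum() / values.size
```

For dark-spot detection, for example, one would pass only an upper threshold on L* so the returned percentage covers the darkest pixels.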
The calculation of CCI was based on colour primitives, which are continuous areas of an image where the intensity of any pixel is within a given threshold value. The more colour primitives in an image, the more nonhomogeneous the colour of that object is. The calculation function of CCI was proposed as follows:

\[
\mathrm{CCI} = \frac{\sum \Delta I \ \text{for all neighbouring pixels}}{\sum \text{distances between equivalent circles}} \times \frac{\text{number of neighbours}}{\text{object area}} \times 100 \tag{5}
\]
The results of the study by Balaban (2008) showed that the colour blocks method was competent when the range of hue values was large, as with mangoes, while the CCI method did well when the hue range was narrow, as in the case of rabbit samples.
Furthermore, because it is not easy to quantify nonhomogeneous colours by sensory panels, most research has been conducted on the comparison and correlation of homogeneous colour measurements between computer vision and instrumental or visual colour analysis. For this reason, Balaban, Aparicio, Zotarelli, and Sims (2008) proposed a method to quantify the perception of nonhomogeneous colours by sensory panelists and compared the differences in colour evaluation between a computer vision system and sensory panels for the perception of nonhomogeneous colours. Generally, the more nonuniform the colour of a sample, the higher the error of a panelist in quantifying the colour of that sample, which showed that panelists had more difficulty in evaluating more nonhomogeneous colours. Moreover, no significant difference in ΔE values was found between panelists' errors based on evaluating the real fruit and evaluating its image (Balaban et al., 2008). Therefore, images can be used to evaluate colour instead of the real samples, which may be significant, since visual evaluation of images eliminates temporal and geographical restrictions, especially for the evaluation of perishable foods. In addition, images can be transferred electronically to distant places and stored much longer than the food, which allows much more flexibility in the analysis of visual attributes of food products.
Development of computerized colour measurement
system
Nowadays, the computer vision technique has been used on production lines and in quality control labs. Several works have been carried out to develop computerized colour measurement systems. Kilic, Onal-Ulusoy, Yildirim, and Boyaci (2007) designed a computerized inspection system that uses a flat-bed scanner, a computer, and an algorithm and graphical user interface coded and designed in Matlab 7.0 to determine food colour based on the CIE L*a*b* colour space. The USA Federal Colour Standard printouts (SP), comprising 456 different colours, were used to train and test the artificial neural network (ANN) integrated into the system. High correlations were obtained between the results estimated from the computer vision system and those obtained from a spectrophotometer for the test image data set: R² values were 0.991, 0.989, and 0.995 for L*, a*, and b*, respectively. When various food samples were used to evaluate the performance of the system, a good agreement was also found between colour measured using the system and the spectrophotometer (R² values were 0.958, 0.938, and 0.962 for L*, a*, and b*, respectively). The mean errors of 0.60% and 2.34% obtained respectively for the test and various food samples showed the feasibility of using computer vision for the measurement of food colour instead of a spectrophotometer. In another
of food colour instead of spectrophotometer. In another
work, Pedreschi et al. (2006) designed and implemented an inexpensive computer vision system to measure, representatively and precisely, the colour of potato chips in L*a*b* units from RGB images. The system performed image acquisition, image storage, image pre-processing, object segmentation, feature extraction, and colour transformation from RGB to L*a*b* units. It allowed the colour to be measured over the entire surface of a potato chip, or over a small specific region of interest, in an easy, precise, representative, objective, and inexpensive way. There are also some other commercial systems available for food colour measurement, such
as QualiVision system (Dipix Technologies, Ottawa,
Ontario, Canada), Lumetech Optiscan system (Koch Lumetech, Kansas City, Mo., USA), Model L-10 Vision Weigher
(Marel, Reykjavik, Iceland), Parasensor system (Precarn,
Ottawa, Canada), Prophecy 550 system (Imaging Technol-
ogy, Bedford, Mass.), and SINTEF system (SINTEF, Oslo,
Norway) (Balaban & Odabasi, 2006).
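The colour transformation step shared by these systems can be illustrated with a short code sketch. The following Python function is a minimal illustration under standard sRGB/D65 assumptions, not the implementation of any cited system (which typically requires calibration against reference colours), converting an 8-bit RGB value to CIE L*a*b* units:

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB colour to CIE L*a*b* under the D65 illuminant.

    Illustrative sketch of the RGB -> XYZ -> L*a*b* pipeline; real
    systems calibrate this mapping against reference colour charts.
    """
    # 1. Normalise to [0, 1] and linearise (inverse sRGB gamma).
    def linearise(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearise(float(r)), linearise(float(g)), linearise(float(b))

    # 2. Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3. XYZ -> L*a*b*, relative to the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883

    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    lightness = 116.0 * fy - 16.0
    a_star = 500.0 * (fx - fy)
    b_star = 200.0 * (fy - fz)
    return lightness, a_star, b_star
```

For white (255, 255, 255) the function returns approximately L* = 100, a* = 0, b* = 0, while a pure red pixel maps to roughly L* = 53, a* = 80, b* = 67.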
Advantages and disadvantages of using computer
vision
Many reviews have summarized the advantages and disadvantages of computer vision (Brosnan & Sun, 2004; Du & Sun, 2004; Gumus, Balaban, & Unlusayin, 2011). For food colour measurement in particular, the main advantages of applying the computer vision technique include:
The rapid, precise, objective, efficient, consistent, and non-destructive measurement of colour data at low cost and with no sample pretreatment;
The ability to provide high spatial resolution, analyze each pixel of the surface of a food product, extract more colour features with spatial information, analyze the whole food even if it is of small or irregular shape and of nonuniform colour, select a region of interest, and generate a distribution map of colour;
The automation of mass labour-intensive operations and the reduction of tedious and subjective human visual involvement; and
D. Wu, D.-W. Sun / Trends in Food Science & Technology 29 (2013) 5–20
The rapid generation of reproducible results and the permanent storage of colour data for further analysis, since the images are retained.
Although computer vision has the aforementioned advantages, it also has some disadvantages (Brosnan & Sun, 2004; Gumus et al., 2011):
The difficulties encountered with objects that are difficult to separate from the background, with overlapping objects, or when both sides of a food need to be evaluated;
The requirement of careful calibration and setting of the camera, and of well-defined and consistent illumination (such as a light box, where the light intensity, spectrum, and direction are all controlled); and
The possible variation of intensity and the spectrum of
light bulbs over time (Balaban & Odabasi, 2006).
Conclusions and future trends
This review covers the fundamentals and typical applications of computer vision in food colour measurement. As a science-based automated food inspection technique, computer vision has proved to be efficient and reliable for colour measurement, with capabilities not possible with other methods, especially the ability to analyze food samples with nonhomogeneous colours, shapes, and surfaces. Colour measurement using computer vision is repeatable and flexible, permits in-plant application with high throughput and accuracy at relatively low cost, and allows human visual inspectors to focus on more demanding and skilled jobs instead of undertaking tedious, laborious, time-consuming, and repetitive inspection tasks. Moreover, besides colour measurement, computer vision allows the evaluation of other quality attributes, such as shape, size, orientation, defects, and nutrition.
Based on the combination of these attributes, computer vision offers the possibility of designing inspection systems for the automatic grading and quality determination of food products. On the basis of computer vision, it is feasible to reduce industrial dependence on human graders, increase production throughput, decrease production cost, improve product consistency and wholesomeness, and enhance public confidence in the safety and quality of food products.
On the other hand, despite these research efforts on colour measurement of food products using computer vision, many challenges remain in designing a computer vision system with sufficient flexibility and adaptability to handle the biological variations in food products. Further in-depth research is required on system robustness, real-time capability, sample handling, and standardization, which also creates many future research opportunities. Some difficulties arise from the segmentation algorithms; segmentation is a prerequisite to the success of all subsequent operations leading to computer vision-based colour measurement without human intervention. Due to the complex nature of food images, no existing algorithm is totally effective for food-image segmentation. The development of efficient and robust calibration is also required to reduce the influence of changes in camera, illumination, and environment. Besides image processing algorithms, development in the hardware and software of computer vision systems is also critical for measuring the colour of food products rapidly and accurately. Faster, smaller, lighter, and less expensive hardware can decrease image acquisition and analysis time, improve storage speed and capacity, and increase the image resolution for detailed colour measurement.
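The calibration problem mentioned above can be illustrated with a deliberately simple sketch: mapping device-dependent channel readings to reference values by least squares. The per-channel linear model below is a hypothetical minimal example, not a method from the cited works (which use richer models such as polynomial or thin-plate spline mappings fitted to full colour charts):

```python
def fit_channel(device_vals, reference_vals):
    """Least-squares fit of reference = gain * device + offset for one channel.

    Closed-form simple linear regression over paired readings of
    calibration patches; a minimal sketch of device calibration.
    """
    n = len(device_vals)
    mean_x = sum(device_vals) / n
    mean_y = sum(reference_vals) / n
    sxx = sum((x - mean_x) ** 2 for x in device_vals)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(device_vals, reference_vals))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset


def calibrate(rgb, model):
    """Apply per-channel (gain, offset) pairs to one RGB reading."""
    return tuple(g * c + o for c, (g, o) in zip(rgb, model))
```

Fitting such a model against camera readings of a colour chart with known reference values allows subsequent image colours to be corrected before conversion to L*a*b* units.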
Acknowledgements
The authors would like to acknowledge the financial
support provided by the Irish Research Council for Science,
Engineering and Technology under the Government of
Ireland Postdoctoral Fellowship scheme.
References
Abdullah, M. Z. (2008). Image acquisition systems. In D.-W. Sun (Ed.), Computer vision technology for food quality evaluation. San Diego, California, USA: Academic Press/Elsevier.
Balaban, M. O. (2008). Quantifying nonhomogeneous colors in agricultural materials. Part I: method development. Journal of Food Science, 73, S431–S437.
Balaban, M. O., Aparicio, J., Zotarelli, M., & Sims, C. (2008). Quantifying nonhomogeneous colors in agricultural materials. Part II: comparison of machine vision and sensory panel evaluations. Journal of Food Science, 73, S438–S442.
Balaban, M. O., & Odabasi, A. Z. (2006). Measuring color with machine vision. Food Technology, 60, 32–36.
Barnard, K., Cardei, V., & Funt, B. (2002). A comparison of computational color constancy algorithms – part I: methodology and experiments with synthesized data. IEEE Transactions on Image Processing, 11, 972–984.
Barnard, K., Martin, L., Coath, A., & Funt, B. (2002). A comparison of computational color constancy algorithms – part II: experiments with image data. IEEE Transactions on Image Processing, 11, 985–996.
Bayarri, S., Calvo, C., Costell, E., & Duran, L. (2001). Influence of color on perception of sweetness and fruit flavor of fruit drinks. Food Science and Technology International, 7, 399–404.
Blasco, J., Aleixos, N., & Molto, E. (2003). Machine vision system for automatic quality grading of fruit. Biosystems Engineering, 85, 415–423.
Brosnan, T., & Sun, D.-W. (2004). Improving quality inspection of food products by computer vision – a review. Journal of Food Engineering, 61, 3–16.
Costa, C., Antonucci, F., Menesatti, P., Pallottino, F., Boglione, C., & Cataudella, S. (2012). An advanced colour calibration method for fish freshness assessment: a comparison between standard and passive refrigeration modalities. Food and Bioprocess Technology.
Costa, C., Antonucci, F., Pallottino, F., Aguzzi, J., Sun, D.-W., & Menesatti, P. (2011). Shape analysis of agricultural products: a review of recent research advances and potential application to computer vision. Food and Bioprocess Technology, 4, 673–692.
Costa, C., Pallottino, F., Angelini, C., Proietti, P., Capoccioni, F., Aguzzi, J., et al. (2009). Colour calibration for quantitative biological analysis: a novel automated multivariate approach. Instrumentation Viewpoint, 8, 70–71.
Cubero, S., Aleixos, N., Molto, E., Gomez-Sanchis, J., & Blasco, J. (2011). Advances in machine vision applications for automatic inspection and quality evaluation of fruits and vegetables. Food and Bioprocess Technology, 4, 487–504.
Du, C. J., & Sun, D.-W. (2004). Recent developments in the applications of image processing techniques for food quality evaluation. Trends in Food Science & Technology, 15, 230–249.
Du, C. J., & Sun, D.-W. (2005). Comparison of three methods for classification of pizza topping using different colour space transformations. Journal of Food Engineering, 68, 277–287.
Du, C. J., & Sun, D.-W. (2006). Learning techniques used in computer vision for food quality evaluation: a review. Journal of Food Engineering, 72(1), 39–55.
Fairchild, M. D. (2005). Color appearance models (2nd ed.). England: John Wiley & Sons Ltd.
Fernandez-Vazquez, R., Stinco, C. M., Melendez-Martinez, A. J., Heredia, F. J., & Vicario, I. M. (2011). Visual and instrumental evaluation of orange juice color: a consumers' preference study. Journal of Sensory Studies, 26, 436–444.
Ford, A., & Roberts, A. (1998). Colour space conversions. London,
UK: Westminster University.
Forsyth, D., & Ponce, J. (2003). Computer vision: A modern approach.
New Jersey: Prentice Hall.
Gerrard, D. E., Gao, X., & Tan, J. (1996). Beef marbling and color score determination by image processing. Journal of Food Science, 61, 145–148.
Gormley, T. R. (1992). A note on consumer preference of smoked salmon color. Irish Journal of Agricultural and Food Research, 31, 199–202.
Gumus, B., Balaban, M. O., & Unlusayin, M. (2011). Machine vision applications to aquatic foods: a review. Turkish Journal of Fisheries and Aquatic Sciences, 11, 167–176.
Gunasekaran, S. (1996). Computer vision technology for food quality assurance. Trends in Food Science & Technology, 7, 245–256.
Gurbuz, S., Kawakita, M., & Ando, H. (2010). Color calibration for multi-camera imaging systems. In Proceedings of the 4th International Universal Communication Symposium (IUCS 2010). Beijing, China.
Hashim, N., Janius, R., Baranyai, L., Rahman, R., Osman, A., &
Zude, M. Kinetic model for colour changes in bananas during the
appearance of chilling injury symptoms. Food and Bioprocess
Technology, in press.
Hunt, R. W. G. (1995). The reproduction of colour (5th ed.). England: Fountain Press.
Hunt, R. W. G. (1998). Measuring colour. England: Fountain Press.
Hurlbert, A. (2007). Colour constancy. Current Biology, 17, R906–R907.
Hutchings, J. B. (1994). Food colour and appearance. Glasgow, UK:
Blackie Academic & Professional.
Hutchings, J. B. (1999). Food color and appearance. Gaithersburg,
Md: Aspen Publishers.
Ilie, A., & Welch, G. (2005). Ensuring color consistency across
multiple cameras. In Proceedings of the tenth IEEE international
conference on computer vision (ICCV-05).
Iqbal, A., Valous, N. A., Mendoza, F., Sun, D.-W., & Allen, P. (2010). Classification of pre-sliced pork and turkey ham qualities based on image colour and textural features and their relationships with consumer responses. Meat Science, 84, 455–465.
Jackman, P., Sun, D.-W., & Allen, P. (2011). Recent advances in the use of computer vision technology in the quality assessment of fresh meats. Trends in Food Science & Technology, 22, 185–197.
Kays, S. J. (1991). Postharvest physiology of perishable plant products. New York: Van Nostrand Reinhold.
Kays, S. J. (1999). Preharvest factors affecting appearance. Postharvest Biology and Technology, 15, 233–247.
Kazlauciunas, A. (2001). Digital imaging – theory and application. Part I: theory. Surface Coatings International Part B: Coatings Transactions, 84, 1–9.
Kılıç, K., Onal-Ulusoy, B., Yildirim, M., & Boyaci, I. H. (2007). Scanner-based color measurement in L*a*b* format with artificial neural networks (ANN). European Food Research and Technology, 226, 121–126.
Konica, Minolta (1998). Precise color communication: Color control
from perception to instrumentation. Osaka: Konica Minolta
Sensing, Inc.
Krutz, G. W., Gibson, H. G., Cassens, D. L., & Zhang, M. (2000). Colour vision in forest and wood engineering. Landwards, 55, 2–9.
Lana, M. M., Tijskens, L. M. M., & van Kooten, O. (2005). Effects of storage temperature and fruit ripening on firmness of fresh cut tomatoes. Postharvest Biology and Technology, 35, 87–95.
Larrain, R. E., Schaefer, D. M., & Reed, J. D. (2008). Use of digital images to estimate CIE color coordinates of beef. Food Research International, 41, 380–385.
Leon, K., Mery, D., Pedreschi, F., & Leon, J. (2006). Color measurement in L*a*b* units from RGB digital images. Food Research International, 39, 1084–1091.
Ling, Y. Z., & Hurlbert, A. (2008). Role of color memory in successive color constancy. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 25, 1215–1226.
Litwiller, D. (2005). CMOS vs. CCD: maturing technologies, maturing markets. Photonics Spectra, 39, 54–58.
Li, B., Xu, D., & Feng, S. H. (2009). Illumination estimation based on color invariant. Chinese Journal of Electronics, 18, 431–434.
Lu, J., Tan, J., Shatadal, P., & Gerrard, D. E. (2000). Evaluation of pork color by using computer vision. Meat Science, 56, 57–60.
Maguire, K. (1994). Perceptions of meat and food: some implications for health promotion strategies. British Food Journal, 96, 11–17.
Mancini, R. A., & Hunt, M. C. (2005). Current research in meat color. Meat Science, 71, 100–121.
Martin, M. L. G. M., Ji, W., Luo, R., Hutchings, J., & Heredia, F. J. (2007). Measuring colour appearance of red wines. Food Quality and Preference, 18, 862–871.
Mathworks (2012). Matlab user's guide. Natick, MA: The MathWorks, Inc.
McCaig, T. N. (2002). Extending the use of visible/near-infrared reflectance spectrophotometers to measure colour of food and agricultural products. Food Research International, 35, 731–736.
Melendez-Martinez, A. J., Vicario, I. M., & Heredia, F. J. (2005). Instrumental measurement of orange juice colour: a review. Journal of the Science of Food and Agriculture, 85, 894–901.
Mendoza, F., & Aguilera, J. M. (2004). Application of image analysis for classification of ripening bananas. Journal of Food Science, 69, E471–E477.
Mendoza, F., Dejmek, P., & Aguilera, J. M. (2006). Calibrated color measurements of agricultural foods using image analysis. Postharvest Biology and Technology, 41, 285–295.
Menesatti, P., Angelini, C., Pallottino, F., Antonucci, F., Aguzzi, J., & Costa, C. (2012). RGB color calibration for quantitative image analysis: the 3D thin-plate spline warping approach. Sensors, 12, 7063–7079.
Misimi, E., Mathiassen, J. R., & Erikson, U. (2007). Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level. Journal of Food Science, 72, S30–S35.
O'Sullivan, M. G., Byrne, D. V., Martens, H., Gidskehaug, L. H., Andersen, H. J., & Martens, M. (2003). Evaluation of pork colour: prediction of visual sensory quality of meat from instrumental and computer vision methods of colour analysis. Meat Science, 65, 909–918.
Oliveira, A. C. M., & Balaban, M. O. (2006). Comparison of a colorimeter with a machine vision system in measuring color of Gulf of Mexico sturgeon fillets. Applied Engineering in Agriculture, 22, 583–587.
Pallottino, F., Menesatti, P., Costa, C., Paglia, G., De Salvador, F. R., &
Lolletti, D. (2010). Image analysis techniques for automated
hazelnut peeling determination. Food and Bioprocess Technology, 3, 155–159.
Pascale, D. (2003). A review of RGB color spaces. Montreal: The Babel Color Company.
Paschos, G. (2001). Perceptually uniform color spaces for color texture analysis: an empirical evaluation. IEEE Transactions on Image Processing, 10, 932–937.
Pedreschi, F., Bunger, A., Skurtys, O., Allen, P., & Rojas, X. (2012). Grading of potato chips according to their sensory quality determined by color. Food and Bioprocess Technology, 5, 2401–2408.
Pedreschi, F., Leon, J., Mery, D., & Moyano, P. (2006). Development of a computer vision system to measure the color of potato chips. Food Research International, 39, 1092–1098.
Pedreschi, F., Mery, D., Bunger, A., & Yanez, V. (2011). Computer vision classification of potato chips by color. Journal of Food Process Engineering, 34, 1714–1728.
Prasad, S., & Roy, B. (2003). Digital photography in medicine. Journal of Postgraduate Medicine, 49, 332–336.
Qin, J. W. (2010). Hyperspectral imaging instruments. In D.-W. Sun (Ed.), Hyperspectral imaging for food quality analysis and control (1st ed.). (pp. 159–172). San Diego, California, USA: Academic Press/Elsevier.
Quevedo, R. A., Aguilera, J. M., & Pedreschi, F. (2010). Color of salmon fillets by computer vision and sensory panel. Food and Bioprocess Technology, 3, 637–643.
Rocha, A. M. C. N., & Morais, A. M. M. B. (2003). Shelf life of minimally processed apple (cv. Jonagored) determined by colour changes. Food Control, 14, 13–20.
Rossel, R. A. V., Minasny, B., Roudier, P., & McBratney, A. B. (2006). Colour space models for soil science. Geoderma, 133, 320–337.
Russ, J. C. (1999). Image processing handbook. Boca Raton: CRC Press.
Scanlon, M. G., Roller, R., Mazza, G., & Pritchard, M. K. (1994). Computerized video image-analysis to quantify color of potato chips. American Potato Journal, 71, 717–733.
Segnini, S., Dejmek, P., & Oste, R. (1999a). A low cost video technique for colour measurement of potato chips. Food Science and Technology – Lebensmittel-Wissenschaft & Technologie, 32, 216–222.
Segnini, S., Dejmek, P., & Oste, R. (1999b). Relationship between instrumental and sensory analysis of texture and color of potato chips. Journal of Texture Studies, 30, 677–690.
Shi, L. L., Xiong, W. H., & Funt, B. (2011). Illumination estimation via thin-plate spline interpolation. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 28, 940–948.
Sun, D.-W. (2000). Inspecting pizza topping percentage and distribution by a computer vision method. Journal of Food Engineering, 44(4), 245–249.
Sun, D.-W., & Brosnan, T. (2003). Pizza quality evaluation using computer vision – part 1: pizza base and sauce spread. Journal of Food Engineering, 57(1), 81–89.
Sun, F. X., Chang, Y. W., Zhou, Z. M., & Yu, Y. F. (2004). Determination of beer color using image analysis. Journal of the American Society of Brewing Chemists, 62, 163–167.
Sun, X., Chen, K., Berg, E. P., & Magolski, J. D. (2011). Predicting fresh beef color grade using machine vision imaging and support vector machine (SVM) analysis. Journal of Animal and Veterinary Advances, 10, 1504–1511.
Tan, J. L. (2004). Meat quality evaluation by computer vision. Journal of Food Engineering, 61, 27–35.
Tan, J., Gao, X., & Gerrard, D. E. (1999). Application of fuzzy sets and neural networks in sensory analysis. Journal of Sensory Studies, 14, 119–138.
Tan, F. J., Morgan, M. T., Ludas, L. I., Forrest, J. C., & Gerrard, D. E. (2000). Assessment of fresh pork color with color machine vision. Journal of Animal Science, 78, 3078–3085.
The science of color. (1973). Washington: Committee on Colorimetry,
Optical Society of America.
Unklesbay, K., Unklesbay, N., & Keller, J. (1986). Determination of internal color of beef ribeye steaks using digital image-analysis. Food Microstructure, 5, 227–231.
Van Poucke, S., Haeghen, Y. V., Vissers, K., Meert, T., & Jorens, P.
(2010). Automatic colorimetric calibration of human wounds.
BMC Medical Imaging, 10, 7.
Wu, D., Chen, X. J., Shi, P. Y., Wang, S. H., Feng, F. Q., & He, Y. (2009). Determination of alpha-linolenic acid and linoleic acid in edible oils using near-infrared spectroscopy improved by wavelet transform and uninformative variable elimination. Analytica Chimica Acta, 634, 166–171.
Wu, D., He, Y., & Feng, S. (2008). Short-wave near-infrared spectroscopy analysis of major compounds in milk powder and wavelength assignment. Analytica Chimica Acta, 610, 232–242.
Wu, D., He, Y., Nie, P. C., Cao, F., & Bao, Y. D. (2010). Hybrid variable selection in visible and near-infrared spectral analysis for non-invasive quality determination of grape juice. Analytica Chimica Acta, 659, 229–237.
Yagiz, Y., Balaban, M. O., Kristinsson, H. G., Welt, B. A., & Marshall, M. R. (2009). Comparison of Minolta colorimeter and machine vision system in measuring colour of irradiated Atlantic salmon. Journal of the Science of Food and Agriculture, 89, 728–730.
Zapotoczny, P., & Majewska, K. (2010). A comparative analysis of colour measurements of the seed coat and endosperm of wheat kernels performed by various techniques. International Journal of Food Properties, 13, 75–89.
Zheng, C. X., Sun, D.-W., & Zheng, L. Y. (2006a). Recent applications of image texture for evaluation of food qualities – a review. Trends in Food Science & Technology, 17(3), 113–128.
Zheng, C. X., Sun, D.-W., & Zheng, L. Y. (2006b). Recent developments and applications of image features for food quality evaluation and inspection – a review. Trends in Food Science & Technology, 17(12), 642–655.