
RAW, TIFF 'n' Stuff

Understanding Photo Formats


7th March 2012
Janine Scott

Creating an Image
It's easy to create an image

Strong light source
+ pinhole
+ target surface
= upside-down picture
The challenge is in recording the image

How it used to be…


Starting with Basics

Prior to the digital age, most images were


recorded on photographic film, using the
response of silver halide crystals to light as the
recording medium.

Sensor Arrays
A digital camera uses a sensor array of
millions of tiny pixels in order to produce
the final image. When you press your
camera's shutter button and the exposure
begins, each of these pixels has a
"photosite" which is uncovered to collect
and store photons in a cavity. Once the
exposure finishes, the camera closes each
of these photosites, and then tries to
assess how many photons fell into each.

Sensor Arrays
The relative quantity of photons in each
cavity is then sorted into various intensity
levels, whose precision is determined
by bit depth (0–255, i.e. 256 levels, for an 8-bit image)

Which means exactly nothing to most of us!
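To make the numbers concrete, here is the arithmetic behind bit depth as a tiny Python sketch (just illustration, not camera firmware): the number of distinct intensity levels is 2 raised to the bit depth.

```python
def intensity_levels(bit_depth):
    """Number of distinct intensity levels a given bit depth can record."""
    return 2 ** bit_depth

# An 8-bit image stores values 0-255 (256 levels);
# a 12-bit RAW file stores values 0-4095 (4096 levels).
print(intensity_levels(1))   # 2  (just black or white)
print(intensity_levels(8))   # 256
print(intensity_levels(12))  # 4096
```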

Sensor Arrays
A CCD (charge-coupled
device)

A CMOS (complementary
metal-oxide-semiconductor)
/ APS (active pixel sensor)
device

More Pixels = Better images?


NO
If there is room for them, more pixels might create a
better image, but resolution and image quality are not
solely dependent on the number of pixels, and too many
can give poorer images.
Another important factor is the bit depth of your
processor.

Imagine setting out hundreds and thousands of identical


yogurt pots, and watching the rain fill them up.
Some round the edges of the area won't have any water in
them, some won't have much, others might have lots,
because rain doesn't fall evenly.
Now you have to tell your friend how much is in each pot.
If the only distinction you can record is "some water" (1)
or "empty" (0), you have a bit depth of 1: two levels.
This will give you a black-and-white image.
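The yogurt-pot analogy can be sketched in a few lines of Python (the water amounts here are invented for illustration): with only 1 bit per pot, all you can report is "some water" or "empty".

```python
# Hypothetical water levels collected in each "pot" (photosite).
water = [0.0, 0.2, 1.7, 0.0, 3.4, 0.9]

# With 1 bit per pot, each reading collapses to "some" (1) or "empty" (0):
one_bit = [1 if w > 0 else 0 for w in water]
print(one_bit)  # [0, 1, 1, 0, 1, 1]
```

More bits per pot would let you record *how much* water fell, which is exactly what higher bit depth buys a sensor.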

Each cavity is unable to distinguish how much of each


color has fallen in, so on its own the sensor could only
create grayscale images. To capture color
images, each cavity has to have a filter placed over it
which only allows penetration of a particular color of
light. Virtually all current digital cameras can only capture
one of the three primary colors in each cavity, and so
they discard roughly 2/3 of the incoming light. As a
result, the camera has to approximate the other two
primary colors in order to have information about all
three colors at every pixel. The most common type of
color filter array is called a "Bayer array."

Now we are digital


Most digital cameras use a digital sensor
called a BAYER PATTERN sensor instead
of film.
A typical sensor might
look like this - with an
arrangement of red,
blue and green light
sensitive areas

BAYER PATTERN sensor


Each pixel in the sensor
responds to either red, green
or blue light and there are 2
green sensitive pixels for
each red and blue pixel.
There are more green pixels
because the eye is more
sensitive to green, so the
green channel is the most
important.
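The 2:1:1 ratio of green to red and blue sites can be shown with a small sketch (a simplified RGGB tiling, not any camera's actual layout code):

```python
def bayer_pattern(rows, cols):
    """Build an RGGB Bayer mosaic: each 2x2 tile is [R G / G B]."""
    tile = [['R', 'G'], ['G', 'B']]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_pattern(4, 4)
flat = [p for row in mosaic for p in row]
# Half the sites are green, a quarter red, a quarter blue.
print(flat.count('G'), flat.count('R'), flat.count('B'))  # 8 4 4
```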

BAYER PATTERN sensor


The sensor measures the intensity of light falling
on it. The filters give sensitivity to color, but each
color isn't equally represented, because the
human eye is more sensitive to green light than
to both red and blue light. Having more green
pixels produces an image which appears less
noisy and has finer detail than could be achieved if
each color were treated equally. This also explains
why there is much less noise in green-colored
parts of an image than in the other colors.

Brightness / Hue & Sat


The high proportion of green takes advantage of
properties of the human visual system, which determines
brightness mostly from green and is far more sensitive to
brightness than to hue or saturation. Sometimes a 4-color
filter pattern is used, often involving two different
hues of green. This provides potentially more accurate
color, but requires a slightly more complicated
interpolation process.
The color intensity values not captured for each pixel can
be interpolated (or guessed) from the values of adjacent
pixels which represent the color being calculated.
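A toy version of that interpolation step (real demosaicing algorithms are far more sophisticated): a missing colour value is estimated as the average of the neighbouring pixels that actually measured that colour.

```python
def interpolate_missing(neighbour_values):
    """Estimate a missing colour value as the mean of the
    neighbouring pixels that did measure that colour."""
    return sum(neighbour_values) / len(neighbour_values)

# A red site has four diagonal blue neighbours; guess its blue value
# (neighbour readings here are made-up numbers).
blue_neighbours = [100, 110, 90, 104]
print(interpolate_missing(blue_neighbours))  # 101.0
```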

Real world / camera's world

The original scene

How the camera sees it – yuk!

So
The green pixels measure the green light,
the red the red, and the blue the blue. The
readout from the sensor is of the form
color / intensity for each individual pixel,
where color can be red, green or blue and
intensity runs from 0 to 255 for an 8-bit
sensor, or 0 to 4095 for a 12-bit sensor.

Creating images from colour intensity

A conventional digital image has pixels which can be red,
green, blue or any one of millions of other colors, so to
generate such an image from the data output by the
sensor, a significant amount of signal processing is
required. This processing is called Bayer interpolation
because it must interpolate (i.e. calculate) what the color
of each pixel should be. The color and intensity of each
pixel is calculated based on the relative strengths of the
red, green and blue channel data from all the
neighboring pixels. Each pixel in the converted image
now has three parameters: red:intensity, blue:intensity
and green:intensity.

Calculating an image
In the end the calculated image looks
something like this:

ISO (speed)
RAW data is… raw data!

It is the output from each of the original red, green


and blue sensitive pixels of the image sensor, after
being read out of the array by the array electronics
and passing through an analog to digital converter.
The readout electronics collect and amplify the sensor
data and it's at this point that "ISO" (relative sensor
speed) is set. If readout is done with little
amplification, that corresponds to a low ISO (say ISO
100), while if the data is read out with a lot of
amplification, that corresponds to a high ISO setting
(say ISO 3200).
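The ISO-as-amplification idea can be modelled as a simple gain applied to the photon counts before quantisation, with values clipped to the sensor's 12-bit maximum (a deliberately crude model with made-up numbers; real readout also amplifies noise):

```python
def read_out(photon_counts, iso, base_iso=100, max_value=4095):
    """Apply readout gain (ISO / base ISO) and clip to the 12-bit
    maximum, as a crude model of sensor readout amplification."""
    gain = iso / base_iso
    return [min(int(p * gain), max_value) for p in photon_counts]

counts = [50, 200, 3000]
print(read_out(counts, iso=100))   # [50, 200, 3000]   little amplification
print(read_out(counts, iso=3200))  # [1600, 4095, 4095] heavy amplification, bright values clip
```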

Storage

Now one of three things can be done with


the RAW data. It can be stored as it
is, it can be compressed slightly to
become a TIFF, or it can be further
processed to yield a JPEG image. The
diagram below shows the processes
involved:

JPEG
If the data is stored as a JPEG file, it goes through the
Bayer interpolation, is modified by in-camera
settings such as white balance, saturation,
sharpness, contrast, etc., is subject to JPEG compression
and then stored. The advantage of saving JPEG data is
that the file size is smaller and the file can be directly
read by many programs or even sent directly to a printer.
The disadvantage is that there is a quality loss, the
amount of loss depending on how much compression is
used. The more compression, the smaller the file but the
lower the image quality. Lightly compressed JPEG files
can save a significant amount of space and lose very
little quality.

RAW to JPEG or TIFF conversion


If you save the RAW data, you can then
convert it to a viewable JPEG or TIFF file
at a later time on a PC. The process is
shown in the diagram below:

You'll see this is pretty similar to the first


diagram, except now you're doing all the
processing on a PC rather than in the camera.
Since it's on a PC you can now pick whatever
white balance, contrast, saturation, sharpness
etc. you want. So here's the first advantage of
saving RAW data. You can change many of the
shooting parameters AFTER exposure. You can't
change the exposure (obviously) and you can't
change the ISO, but you can change many other
parameters.

RAW to TIFF
A second advantage of shooting a RAW file is
that you can also perform the conversion to an
8-bit or 16-bit TIFF file. TIFF files are larger than
JPEG files, but they retain the full quality of the
image. They can be compressed or
uncompressed, but the compression scheme is
lossless, meaning that although the file gets a
little smaller, no information is lost. This is a
tricky concept for some people, but here's a
simple example of lossless compression. Take
this string of digits:
14745296533333659762888888356789

TIFF lossless compression


Is there a way to store this that doesn't lose any
digits, but takes less space? The answer is yes.
One way would be as follows:
1474529653[5]6597628[6]356789
Here the string 33333 has been replaced by 3[5]
- meaning a string of 5 3s, and the string 888888
has been replaced by 8[6] - meaning a string of 6
8s. You've stored the same exact data, but the
"compressed" version takes up less space. This
is similar (but not identical) to the way lossless
TIFF compression is done.
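The digit-string trick above is run-length encoding. A minimal encoder/decoder pair in that spirit can be written with Python's `re` module (compressing only runs of four or more identical digits, so short runs stay as-is; this is an illustration, not the actual TIFF scheme):

```python
import re

def rle_encode(digits):
    """Replace any run of 4+ identical digits with d[n]."""
    return re.sub(r'(\d)\1{3,}',
                  lambda m: f"{m.group(1)}[{len(m.group(0))}]",
                  digits)

def rle_decode(encoded):
    """Expand d[n] back into a run of n copies of digit d."""
    return re.sub(r'(\d)\[(\d+)\]',
                  lambda m: m.group(1) * int(m.group(2)),
                  encoded)

original = "14745296533333659762888888356789"
packed = rle_encode(original)
print(packed)  # 1474529653[5]6597628[6]356789

# Lossless: decoding recovers every digit exactly.
assert rle_decode(packed) == original
```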

I said above that the data could be stored as an


8- or 16-bit TIFF file. RAW data from most high-end
digital cameras contains 12-bit data, which
means that there can be 4096 different intensity
levels for each pixel. In an 8-bit file (such as a
JPEG), each pixel can have one of 256 different
intensity levels. In practice 256 levels is enough,
and all printing is done at the 8-bit level, so you
might ask what the point is of having 12-bit data.

The answer is that it allows you to perform a


greater range of manipulation to the image without
degrading the quality. You can adjust curves and
levels to a greater extent, then convert back to
8-bit data for printing.

If you want to access all 12 bits of the original


RAW file, you can convert to a 16-bit TIFF file.
Why not a 12-bit TIFF file? Because there's no
such thing! Actually what you do is put the 12-bit
data in a 16-bit container. It's a bit like putting a
quart of liquid in a gallon jug: you get to keep all
the liquid but you have some free space. Putting
the 12-bit data in an 8-bit file is like pouring that
quart of liquid into a pint container. It won't all fit,
so you have to throw some away.
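The container analogy in code (a sketch with an arbitrary sample value, not an actual TIFF writer): a 12-bit value fits in a 16-bit container unchanged, but squeezing it into 8 bits means dropping the low 4 bits, which cannot be recovered.

```python
def to_16bit(value_12bit):
    """12-bit data fits in a 16-bit container unchanged (lossless)."""
    return value_12bit  # the range 0..4095 is a subset of 0..65535

def to_8bit(value_12bit):
    """Squeezing into 8 bits throws away the low 4 bits (lossy)."""
    return value_12bit >> 4  # maps 0..4095 onto 0..255

sample = 3217                 # an arbitrary 12-bit pixel value
print(to_16bit(sample))       # 3217 -- all the "liquid" kept
print(to_8bit(sample))        # 201  -- detail thrown away
print(to_8bit(sample) << 4)   # 3216 -- the original can't be recovered
```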
