
Contents

Optical process in semiconductors


Emission types
Luminescence
Light emitting diodes (LEDs)
LASERs
Types of optical sources
Three main types of optical light source are available:
 Wide band sources (incandescent lamps)
 Incoherent sources (LEDs)
 Coherent sources (LASERs)
Light emission in semiconductors
[Figure: energy-band diagrams of a pn-junction at zero bias and under forward bias. At zero bias the barrier potential separates the conduction-band (CB) electrons on the n-side from the holes on the p-side; under forward bias the barrier is lowered, electrons and holes recombine across the gap Eg, and radiation is emitted at the junction between the p-type and n-type regions.]

Wavelength of emitted light:

λ0 = hc/Eg ≈ 1240/Eg(eV) nm
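As a quick numeric check of the wavelength formula above (a sketch using standard physical constants, not taken from the slides):

```python
# Emission wavelength from band-gap energy: lambda0 = h*c / Eg.
# With Eg in eV, h*c/e ~ 1240 eV*nm, so lambda0 ~ 1240/Eg nanometres.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
E = 1.602e-19      # joules per eV

def emission_wavelength_nm(eg_ev):
    """Peak emission wavelength (nm) for a band gap of eg_ev electron-volts."""
    return H * C / (eg_ev * E) * 1e9

print(round(emission_wavelength_nm(1.42)))   # GaAs, ~873 nm (near infrared)
```

For GaAs (Eg = 1.42 eV, as in the heterostructure slides) this gives an emission wavelength of about 870 nm, in the near infrared.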
Light emission in semiconductors

[Figure: E–k diagrams of indirect band gap and direct band gap semiconductors, showing the conduction band (CB), valence band (VB), and energy gap Eg.]
Emission Characteristics of
various semiconductors

[Table: emission wavelengths of various semiconductors; only the values 1110 nm and 1850 nm survive in this copy.]
Heterojunction semiconductor
light sources

[Figure: single heterostructure (SH). A P-type GaAlAs layer (Eg ≈ 1.7 eV) is grown on n-type GaAs (Eg = 1.42 eV); under forward bias, electrons injected into the narrower-gap GaAs recombine with holes there.]

Heterojunction semiconductor
light sources

[Figure: double heterostructure (DH). A thin n-GaAs active layer (Eg = 1.42 eV) lies between P-type and N-type AlGaAs layers (Eg = 1.72 eV). The band-gap steps confine the injected electrons and holes to the active layer, while the refractive-index step (n = 3.45 / 3.59 / 3.45) confines the light.]

Double heterostructure (DH)
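The index step in the figure also makes the active layer a waveguide. As an illustrative calculation (the critical angle itself is not given in the slides, only the index values), the angle for total internal reflection at the GaAs/AlGaAs interface follows from Snell's law:

```python
import math

# Waveguiding in the double heterostructure: the GaAs active layer (n ~ 3.59)
# sits between AlGaAs layers of lower index (n ~ 3.45), so rays hitting the
# interface beyond the critical angle are totally internally reflected.
n_active = 3.59    # refractive index of the GaAs active layer (from the figure)
n_clad = 3.45      # refractive index of the AlGaAs confining layers

critical_angle_deg = math.degrees(math.asin(n_clad / n_active))
print(f"critical angle: {critical_angle_deg:.1f} degrees")
```

Even this small index step gives a critical angle near 74°, enough to guide light emitted close to the plane of the active layer.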


The light emitting diodes
Drawbacks of LEDs:
 Lower optical power (microwatts)

 Lower modulation bandwidth

 Harmonic distortion
The light emitting diodes
Merits of LEDs:
 Simpler fabrication

 Low cost

 Reliability

 Generally less temperature dependence

 Simpler drive circuitry

 Linearity
LED structures

[Figure: planar LED and dome LED. Each has a p-type epitaxial layer on an n-type substrate, with ohmic contacts; light is taken out through the top surface.]
Surface emitting LED

[Figure: surface-emitting LED. The fiber is held with epoxy resin in a well etched into the N-type substrate; the double-heterojunction layers (N/p/P) sit over an SiO2-defined contact on a gold heatsink.]
Edge emitting LED

[Figure: edge-emitting LED. The N/n/p layers are grown on a P-type substrate; an SiO2 stripe defines the contact, the device is mounted on a gold heatsink, and light is emitted from the edge of the active layer.]
LASER light sources

 What is LASER?
 Emission processes
 How laser oscillates
The LASERs
Light Amplification by Stimulated
Emission of Radiation (LASER)
Types of lasers

 Solid state lasers


 Semiconductor lasers
 Gas lasers
 Dye lasers
Absorption and Emission of
Radiation

[Figure: two-level energy diagrams (initial state and final state) for the three interactions:]
 Absorption
 Spontaneous emission
 Stimulated emission
Basic construction of Laser

[Figure: a gain medium placed between a 100% reflective mirror and a partially reflective mirror. A laser pump supplies energy to create a non-equilibrium state (population inversion); photon multiplication builds up along the axis, and amplified light emerges through the partially reflective mirror.]
Photodiodes
 What is photodiode (PD)?
 Photodiode types
 Optical detection principles
 Absorption coefficient
 Quantum efficiency
 PD structures
What is a photodiode?
A photodiode is a semiconductor device that converts light
into current. The current is generated when photons are
absorbed in the photodiode.

A small amount of current (the dark current) is also produced when no light
is present. Photodiodes may contain optical filters and built-in lenses, and
may have large or small surface areas.
Photodiode types

 Photomultiplier tubes
 Vacuum photodiodes
 pn-PD
 P-i-N PD
 APD

The semiconductor types (pn-PD, P-i-N PD, APD) are the PDs used in OFC.
V-I characteristics of PD

[Figure: I–V characteristics of a photodiode, showing the photovoltaic and photoconductive modes of operation (regions 1–3) and the family of curves for increasing optical power.]
Photodetection principles

[Figure: reverse-biased p-n junction. A photon of energy hf > Eg is absorbed, creating an electron-hole pair that is swept out by the field to give a photocurrent.]
Absorption coefficient and
Quantum efficiency
The absorption coefficient is a measure of how good the material is
at absorbing light of a certain wavelength.
The quantum efficiency η is defined as the fraction of incident
photons which are absorbed by the photodiode and generate electrons
that are collected at the diode terminals:

η = (number of electrons collected) / (number of incident photons) = re / rp

where rp is the incident photon rate and re the corresponding electron rate.
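A small sketch of the definition above. The responsivity figure of merit is not in the slide; it is the standard companion quantity (photocurrent per watt of incident light), added here as an assumption-labelled extension:

```python
# Quantum efficiency eta = r_e / r_p (collected electrons per incident photon).
# Responsivity R = eta * e / (h*f) is the standard related figure of merit
# (A of photocurrent per W of light); it is NOT defined in the slide itself.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
E = 1.602e-19   # electron charge, C

def quantum_efficiency(re, rp):
    """Fraction of incident photons that yield collected electrons."""
    return re / rp

def responsivity(eta, wavelength_nm):
    """Photocurrent per unit optical power, in A/W."""
    photon_energy = H * C / (wavelength_nm * 1e-9)   # joules per photon
    return eta * E / photon_energy

print(quantum_efficiency(1.2e12, 1.5e12))    # 0.8 for the example rates
print(round(responsivity(0.8, 850), 3))      # ~0.548 A/W at 850 nm
```

Note how responsivity rises with wavelength for fixed η: longer-wavelength photons carry less energy, so more of them (and hence more electrons) arrive per watt.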
p-n photodiodes

[Figure: p-n photodiode under illumination hf. Photons absorbed in the depletion region (where the E-field exists) are collected by drift; those absorbed in the neutral diffusion regions must first diffuse to the junction. The photocurrent flows through an external load.]
Output characteristics of a typical p-n
photodiode

[Figure: reverse current (µA) versus reverse bias (10–40 V) for high and low light levels, down to the dark current (no light). The current is nearly independent of bias and is set by the light level; recoverable tick values: 200, 400, 600, 800 µA.]
Television

Reference Books:
 Monochrome and Color Television, by R. R. Gulati
 Basic Television: Principles and Servicing, by Bernard Grob
Television
Contents
 Introduction
 Application
 Elements of a Television System
 Television Broadcasting channels

Introduction
 Development of Television
 Application of Television
 Equipment
 Coverage
 Recent Trends

Development of Television
Television means “to see from a distance”.
 The first demonstration of actual television was given by J. L.
Baird in the UK and C. F. Jenkins in the USA around 1927, using
mechanical scanning with rotating discs.
 The real breakthrough came with the invention of the Cathode Ray
Tube (CRT) and the first camera tube based on the storage principle
(V. K. Zworykin, USA).
 By 1930, electromagnetic scanning of both the CRT and the camera had been
developed, along with the ancillary circuits: beam deflection, video
amplification, etc.
 Television broadcasting started in 1935, but its progress was
slowed by the Second World War.

Television System
 Initially, in the absence of any international standard, three
monochrome systems grew independently: the 525-line American, the 625-line
European, and the 819-line French.
 Initiatives were later taken to establish a common 625-line system, but
they failed because of the huge cost of changing equipment and the millions
of receivers already in use under all three systems.
 The three monochrome standards in turn led to the development of three
different color television systems.
 In 1953 the USA adopted a color system on the recommendation of its
National Television Systems Committee, hence called the NTSC system.
 The other two color systems, PAL and SECAM, are later modifications of
the NTSC system made to conform to the other two monochrome standards.
 Regular color transmission started in the USA in 1954.
 In 1960 Japan adopted the NTSC system, followed by Canada and several
other countries.
Television System
 The PAL color system, compatible with the 625-line monochrome system,
was developed by the Telefunken Laboratory of the Federal Republic of
Germany (FRG).
 The PAL system reduces the color display errors that occurred in the
NTSC system.
 The PAL system was adopted by the FRG and the UK in 1967, and
subsequently by Australia, Spain, Iran, India, Bangladesh, and several
other west and south Asian countries.
 The third color system in use is the SECAM system. This was
initially developed and adopted in France in 1967. Later versions,
known as SECAM IV and SECAM V, were developed at the Russian
National Institute of Research (NIR) and are sometimes referred to as the
NIR-SECAM system.
 This system was adopted by the USSR, the German Democratic Republic,
Hungary, some other East European countries, and Algeria.
 The adoption of a particular color system depends on the
monochrome system of the respective country.

Application of Television
Impact of television is far and wide, and has opened new avenues in
diverse fields like
 public entertainment, Newscasts and weather reports,
 political organization and campaigns,
 announcements and guidance at public places like airport
terminals, sales promotion and many others.
Closed Circuit Television (CCTV) is a special application
 Group demonstrations of surgical operations or scientific
experiments, inspection of noxious or dangerous industrial or
scientific processes (e.g. nuclear fuel processing) or of underwater
operations and surveillance of areas for security purposes are some
typical examples.
 A special type of CCTV is what might be called wired community
TV.
 Another potential use of CCTV that can become popular and is
already technically feasible is a video-telephone or ‘visiphone’.
Equipment
Television broadcasting requires
 Extensive lighting facilities, cameras, microphones, and
control equipment for television studios.
 Transmitting equipment for modulation, amplification and
radiation of the signals at the high frequencies.
 A wide variety of support equipment essential in broadcast
studios and control rooms.
 Besides the above video tape recorders, telecine machines,
special effects equipment plus all the apparatus for high
quality sound broadcast.

Coverage
 Microwave-based relay stations. A matrix of such relay stations can be
used to provide complete national coverage.
 Terrestrial broadcasting uses the VHF bands of 41 to 68 MHz and 174 to
230 MHz and the UHF band between 470 and 890 MHz. The coverage of one
transmitter usually varies between 75 and 140 km depending on the
topography and radiated power.
 Global coverage by linking national TV systems through satellites.
 The area of TV broadcast coverage can be extended further by coaxial
cables.
 Another method is to employ somewhat higher transmitter power on the
satellite and receive the down transmissions directly, through a larger
dish antenna, on conventional television receivers fitted with an extra
front-end converter.
Recent Trends
 Conventional TV broadcasting technologies have been
replaced by solid state technology

 Digital TV broadcasting

 Development of advanced display technology

What Is Television Broadcasting?

 Television means “to see at a distance”.


 In our practical television system, the visual information in the scene
is converted to an electric video signal for transmission to the
receiver.
 Then the image is reassembled on the fluorescent screen of the
picture tube.
 In monochrome television, the picture is reproduced as shades of
white, gray and black.
 In color television, the main parts of the picture are reproduced in all
their natural colors as combination of red, green, and blue.
Monochrome Television Transmitter
Cross-sectional View of a Vidicon TV Camera Tube

 Two simultaneous motions of the beam, one from left to right and the other from
top to bottom, make it encounter a different resistance at each point of the target plate.
 Depending on the resistance of the photoconductive coating, a current passes
through a load resistance RL, which is connected to the conductive coating on
one side and to a dc supply source on the other.
 Depending on the magnitude of the current, a varying voltage appears across the
resistance RL, and this corresponds to the optical information of the picture.
 The electrical information obtained from the TV camera tube is the video signal.
Monochrome Television Receiver

The receiver is of the heterodyne type and employs two or three
stages of intermediate frequency (IF) amplification. The output from
the last IF stage is demodulated to recover the video signal.
Picture Reception

The signal that carries the picture information is amplified and coupled to
the picture tube which converts the electrical signal back into picture
elements of the same degree of black and white.
Picture Reception

 The picture tube is very similar to the CRT used in an oscilloscope.


 The glass envelope contains an electron gun structure that
produces a beam of electrons aimed at the fluorescent screen.
 When the electron beam strikes the screen, light is emitted.
 The beam is deflected by a pair of deflecting coils mounted on the
neck of the picture tube in the same way and rate as the beam
scans the target in the camera tube.
 The amplitudes of the currents in the horizontal and vertical
deflecting coils are so adjusted that the entire screen, called raster,
gets illuminated because of the fast rate of scanning.
Picture Reception (contd.)
 The video signal is fed to the grid or cathode of the picture tube.
When the varying signal voltage makes the control grid less negative,
the beam current is increased, making the spot of light on the screen
brighter.
 More negative grid voltage reduces the brightness. If the grid
voltage is negative enough to cut off the electron beam current at
the picture tube, there will be no light. This state corresponds to black.
 Thus the video signal illuminates the fluorescent screen from white to
black through various shades of grey depending on its amplitude at
any instant.
 This corresponds to the brightness changes encountered by the
electron beam of the camera tube while scanning the picture details
element by element.
 The rate at which the spot of light moves is so fast that the eye is
unable to follow it and so a complete picture is seen because of the
storage capability of the human eye.
Sound Reception

 The path of the sound signal is common with the picture signal from
antenna to the video detector section of the receiver.
 Here the two signals are separated and fed to their respective
channels.
 The frequency modulated audio signal is demodulated after at least
one stage of amplification.
 The audio output from the FM detector is given due amplification
before feeding it to the loudspeaker.
Synchronization

 To ensure perfect synchronization between the scene being televised and
the picture produced on the raster, synchronizing pulses are transmitted
during the retrace (i.e., the fly-back intervals of the horizontal and
vertical motions of the camera scanning beam).
 In addition to carrying picture detail, the radiated signal at the
transmitter also contains synchronizing pulses.
 These pulses which are distinct for horizontal and vertical motion
control, are processed at the receiver
 And fed to the picture tube sweep circuitry thus ensuring that the
receiver picture tube beam is in step with the transmitter camera
tube beam.
Television Broadcast Channels

The band of frequencies assigned for transmission of the picture and sound
signals is a television channel. The FCC assigned 6 MHz for a channel.

Frequency Range      Channel No.        Frequency Band (MHz)
---------------      -----------        --------------------
-                    1                  Not used
Low band VHF         2-4                54-60, 60-66, 66-72
72-76 MHz            Air Navigation     72-76
Low band VHF         5, 6               76-82, 82-88
88-108 MHz           FM band            88-108
High band VHF        7-13               174-180, 180-186, 186-192,
                                        192-198, 198-204, 204-210,
                                        210-216
UHF                  14-83              470-890
The Television Picture
Contents
 Picture Elements
 Horizontal and Vertical Scanning
 Motion Pictures
 Frame and field frequencies
 Horizontal and vertical frequencies
 Horizontal and vertical synchronization
 Horizontal and vertical blanking
 The 3.58 MHz color signal
 The 6 MHz television broadcast channel

Picture Elements
A still picture is fundamentally an arrangement of many small dark
and light areas. Each small area of light or shade is a picture element,
or picture detail. Together the elements contain all the visual information
in the scene. If they are transmitted and reproduced in their proper
positions, with the same degree of light or shade as the original, the
picture will be reproduced.
[Figure: reproducing a picture by duplicating its picture elements; a still picture shown beside a magnified view revealing the individual picture elements.]
Horizontal and vertical scanning
The sequence for scanning all the picture elements is as follows:
 The electron beam sweeps across one horizontal line, covering all the picture
elements in that line
 At the end of each line, the beam is returned very quickly to the left side to
begin scanning the next horizontal line.

The return time is called retrace, and it is a very short span of time. No
picture information is scanned during retrace because both the camera tube and
picture tube are blanked out for this period.

 When the beam is returned to the left side, its vertical position is lowered so
that the beam will scan the next lower line and not repeat over the same line.
This is accomplished by the vertical scanning motion of the beam.

[Figure: typical horizontal (H) and vertical (V) scanning patterns.]
Number of scanning lines per frame

The maximum number of alternate light and dark elements (lines) which can be
resolved by the eye is given by

Nv = 1 / (θ β)

where
θ = minimum resolving angle of the eye expressed in radians
β = viewing distance / picture height = D/H
Experimentally it is found that D/H = 4, and
θ = one minute = (1/60) degree = π/(180 × 60) radian, so

Nv = 1 / [ (π/(180 × 60)) × 4 ] ≈ 860
6
Number of scanning lines per frame
The effective number of lines: Nr = Nv × k = 860 × 0.7 ≈ 602

The value of k lies between 0.65 and 0.75.
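The derivation above can be checked numerically (a sketch; θ, β and k take the values the slides quote):

```python
import math

# Maximum lines the eye can resolve: Nv = 1 / (theta * beta), with
# theta = one arc-minute in radians and beta = D/H = 4 (experimental value).
theta = math.radians(1 / 60)   # one minute of arc, in radians
beta = 4                       # viewing distance / picture height

Nv = 1 / (theta * beta)
print(round(Nv))               # ~860

# Effective (resolved) lines after the resolution factor k (0.65 to 0.75):
k = 0.7
Nr = Nv * k
print(round(Nr))               # ~602
```

The 625-line standard sits close to this Nr once blanking lines are accounted for, which is why the line count was chosen in this range.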

Frames per second

The vertical scanning is at the rate of 25 Hz for a frame frequency of
25 frames per second. A frame rate of 25 per second means that the 625
lines of one complete frame are scanned in 1/25 second.

Standard commercial motion picture practice uses 24 frames per second.

Persistence of vision
The impression made by light seen by the eye persists for a small fraction of a
second after the light source is removed.

Flicker and elimination of flicker
 The blank interval between one displayed frame and the next is perceived
as FLICKER: during this time no picture information is scanned, so the
screen is dark.

 If each frame is projected twice on the screen, the flicker can be
eliminated.
Interlaced Scanning (IS)
To achieve IS, the horizontal sweep oscillator is made to work at a
frequency of 15625 Hz to scan the 625 lines per frame.

In all then, the beam scans 625 lines per frame at the rate of 15625
lines per second. Therefore, with interlaced scanning the flicker effect
is eliminated without increasing the speed of scanning.
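The numbers above fit together as follows (a sketch of the arithmetic only):

```python
# 625-line system with interlaced scanning: each frame is split into two
# fields of 312.5 lines each, so the field (flicker) rate is twice the
# frame rate while the line rate is unchanged.
lines_per_frame = 625
frame_rate = 25                                  # frames per second

line_frequency = lines_per_frame * frame_rate    # horizontal sweep frequency, Hz
field_rate = 2 * frame_rate                      # fields per second

print(line_frequency)   # 15625
print(field_rate)       # 50
```

The eye judges flicker at the 50 Hz field rate rather than the 25 Hz frame rate, which is why interlacing removes flicker without doubling the scanning speed.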


8/10/2005 PhD Defense, Anisur Rahman 17


Signal Transmission and Channel Bandwidth

Contents
 Modulation
 Channel bandwidth
 Vestigial side band transmission
 Transmission efficiency
 Complete channel bandwidth

Vestigial Side band

In the video signal very low frequency modulating components exist along with
the rest of the signal. These components give rise to sidebands very close to the
carrier frequency which are difficult to remove by physically realizable filters. Thus
it is not possible to fully suppress one complete sideband. Any effort to
completely suppress the lower sideband would result in objectionable phase
distortion at these frequencies. This distortion will be seen by the eye as ‘smear’ in
the reproduced picture. Therefore, as a compromise, only a part of the lower
sideband is suppressed, and the radiated signal then consists of the full
upper sideband together with the carrier and the vestige (remaining part) of
the partially suppressed lower sideband. This pattern of transmission of the
modulated signal is known as vestigial sideband transmission.
Transmission efficiency
The total power Pt in the modulated wave is the sum of the carrier power Pc and
the power in the two sidebands. This can be expressed as

Pt = Pc (1 + m²/2),  where Pc = (Ec/√2)²/R

Here Ec/√2 is the r.m.s. value of the sinusoidal carrier wave, and R is the
resistance in which the power is dissipated.

At 100% modulation (m = 1) the transmitted power attains its maximum
possible value, Pt(max) = 1.5 Pc, where the power contained in the two
sidebands has a maximum value of 50% of the carrier power.
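A minimal numeric sketch of the power relation (carrier power chosen arbitrarily for illustration):

```python
# Total AM power: Pt = Pc * (1 + m**2 / 2). At m = 1, Pt = 1.5 * Pc, so the
# two sidebands together carry at most 50% of the carrier power.
def am_total_power(pc, m):
    """Transmitted power for carrier power pc and modulation index m."""
    return pc * (1 + m**2 / 2)

print(am_total_power(100, 1.0))   # 150.0  (maximum, at 100% modulation)
print(am_total_power(100, 0.5))   # 112.5
```

This is why sideband suppression matters: even at full modulation, two-thirds of the transmitted power sits in the carrier, which carries no information.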

Complete Channel Bandwidth

[Figure: TV channel sideband spectrum; C marks the color subcarrier. Also shown: the UK TV channel standard with vestigial sideband.]

Complete Channel Bandwidth

[Figure: the American TV channel standard with vestigial sideband, and the sideband spectrum of two adjacent channels of the lower VHF band of television station allocations.]
Ideal characteristics of a TV Tx and Rx

[Figure: transmitter output characteristics for vestigial sideband signals, and the desired receiver characteristics for correct reproduction of video signals.]
Demerits of Vestigial Sideband Transmission

 A small portion of the transmitter power is wasted in the vestigial sideband
filters which remove the remaining part of the lower sideband.

 The signal-to-noise voltage ratio decreases by about 6 dB relative to what
would be available if double sideband transmission were used.

 Some phase and amplitude distortion of the picture signal occurs.

 More critical tuning at the receiver is necessary, because degradation of
picture quality is less with a wider lower sideband than with a narrow lower
sideband.

Despite these demerits, vestigial sideband transmission is used in all
television systems because of the large saving it effects in the bandwidth
required for each channel.
Frequency modulation
The sound signal is frequency
modulated because of its inherent
merits of interference-free reception.
Here the amplitude of the modulated
carrier remains constant, whereas its
frequency is varied in accordance
with variations in the modulating
signal. The variation in carrier
frequency is made proportional to
the instantaneous value of the
modulating voltage. The rate
at which this frequency variation
takes place is equal to the
modulating frequency.

Analysis of FM Wave
In order to understand clearly the meaning of the instantaneous frequency
fi and the associated instantaneous angular velocity ωi = 2πfi, the
equation of an ac wave in the generalized form may first be written as

e = A sin θ(t)

The instantaneous angular velocity is ωi = dθ(t)/dt.

A frequency modulated wave with sinusoidal modulation can now be expressed
as

e = A sin(ωc t + (∆f/fm) sin ωm t)
Analysis of FM wave

The above equation is commonly written in the form

e = A sin(ωc t + mf sin ωm t),  where mf = ∆f/fm is the modulation index.
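The meaning of instantaneous frequency can be verified numerically: differentiating the phase of the FM wave gives fi = fc + ∆f cos(ωm t), which peaks at fc + ∆f. A sketch with illustrative values (not from the slides):

```python
import math

# FM wave e(t) = A*sin(wc*t + mf*sin(wm*t)); the instantaneous frequency is
# fi = (1/2pi)*d(phase)/dt = fc + delta_f*cos(wm*t), peaking at fc + delta_f.
fc, fm, delta_f = 1000.0, 10.0, 50.0     # illustrative values, in Hz
mf = delta_f / fm                        # modulation index

def phase(t):
    """Total phase of the FM wave at time t."""
    return 2*math.pi*fc*t + mf*math.sin(2*math.pi*fm*t)

# Estimate d(phase)/dt by central finite difference at t = 0 (cos(0) = 1,
# i.e. the instant of maximum deviation):
dt = 1e-7
fi = (phase(dt) - phase(-dt)) / (2*dt) / (2*math.pi)
print(round(fi, 1))   # ~1050.0, i.e. fc + delta_f
```

The amplitude A never enters the result: all the information rides in the phase, which is the basis of FM's noise immunity discussed later.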
FM channel bandwidth

Bandwidth = 2 n fm

where fm is the frequency of the modulating wave and n is the number of
significant side-frequency components. The value of n is determined
from the modulation index.
The maximum frequency deviation of commercial FM is limited to 75 kHz,
and the modulating frequencies typically cover 25 Hz to 15 kHz.
The 625-line television standard specifies that the maximum deviation (∆f)
should not exceed ±50 kHz for the highest modulating frequency of 15 kHz.
Thus the modulation index

mf = 50/15 ≈ 3.33

for which n ≈ 5 significant sideband pairs, giving a bandwidth of
2 × 5 × 15 kHz = 150 kHz. The resultant spread of ±75 kHz around the sound
carrier is well within the guard-band edge and reasonably away from any
significant video sideband components.
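The bandwidth arithmetic above, sketched numerically (n = 5 is taken from Bessel-function tables for mf ≈ 3.33, as the slide assumes; Carson's rule is added for comparison and is not in the slide):

```python
# FM sound channel of the 625-line standard: BW = 2*n*fm, with n significant
# sideband pairs read from Bessel tables for the modulation index mf.
delta_f = 50e3      # maximum deviation, 50 kHz
fm = 15e3           # highest modulating frequency, 15 kHz

mf = delta_f / fm   # ~3.33
n = 5               # significant sideband pairs for mf ~ 3.33 (Bessel tables)

bw = 2 * n * fm
print(bw)           # 150000.0 Hz, i.e. +/-75 kHz about the sound carrier

# Carson's rule gives a comparable, slightly smaller estimate:
carson = 2 * (delta_f + fm)
print(carson)       # 130000.0 Hz
```

Both estimates land well inside the guard band, consistent with the slide's conclusion.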
Channel bandwidth for color transmission

The colour video signal does not extend beyond about 1.5 MHz. This feature
allows the narrow-band chrominance (colour) signal to be multiplexed with the
wideband luminance (brightness) signal in the standard 7 MHz television
channel. This is achieved by modulating the colour signal with a carrier
frequency which lies within the normal channel bandwidth. This is called the
colour subcarrier frequency and is located towards the upper edge of the video
frequencies to avoid interference with the monochrome signal. In the PAL colour
system the colour subcarrier is located at 4.433 MHz. The bandwidth of the
colour signal is restricted to about ±1.2 MHz around the subcarrier.
Merits of FM modulation
Frequency modulation has the following advantages over amplitude modulation.
(a) Noise Reduction:
The greatest advantage of FM is its ability to eliminate noise interference and
thus increase the signal to noise ratio. In FM, amplitude variations of the
modulating signal cause frequency deviations and not a change in the
amplitude of the carrier. Noise interference results in amplitude variations of the
carrier and thus can be easily removed by the use of amplitude limiters.
(b) Transmitter Efficiency:
The amplitude of the FM wave is independent of the depth of modulation, whereas
in AM it is dependent on this parameter. This means that low level modulation can
be used in FM and all succeeding amplifiers can be class ‘C’ which are more
efficient.
(c) Adjacent Channel Interference:
Because of the provision of a guard band in between any two TV channels, there
is less interference than in conventional AM broadcasts.
(d) Co-channel Interference:
The amplitude limiter in the FM section of the receiver works on the principle of
passing the stronger signal and eliminating the weaker. In this manner, a relatively
weak interfering signal or any pick-up from a co-channel station (a station
operating at the same carrier frequency) gets eliminated in a FM system.
TV Camera Tubes
A TV camera tube may be called the eye of a TV system. A camera tube must
have the following performance characteristics:
(i) sensitivity to visible light,
(ii) wide dynamic range with respect to light intensity, and
(iii) ability to resolve details while viewing a multielement scene.

The optical-electrical conversion has the following limiting factors:
(i) poor sensitivity,
(ii) poor resolution,
(iii) high noise level,
(iv) undesirable spectral response,
(v) instability,
(vi) poor contrast range and
(vii) difficulties of processing.

However, during the past fifty years or so it has become possible to develop
camera tubes which deliver output even where our eyes see complete darkness.
Spectral response has been so perfected that pick-up outside the visible range
(in the infra-red and ultraviolet regions) has become possible. In fact, there
is now a tube available for any special application.
TV Camera Tubes
Photoelectric Effects
The two photoelectric effects used for converting variations of light intensity
into electrical variations are:
(i) photoemission and (ii) photoconductivity.

Photoemission:
Certain metals emit electrons when light falls on their surface. Emitted electrons
are called photoelectrons and the emitting surface a photocathode. The number
of electrons which can overcome the potential barrier and get emitted, depends
on the light intensity. Alkali metals are used as photocathode because they have
very low work-function. Cesium-silver or bismuth-silver-cesium oxides are
preferred as photoemissive surfaces because they are sensitive to incandescent
light and have spectral response very close to the human eye.
Photoconduction:
The conductivity of the photosensitive surface varies in proportion to the
intensity of light focused on it. In general, semiconductor materials including
selenium, tellurium and lead, with their oxides, have this property, known as
photoconductivity. The variation in resistance at each point across the surface
of the material is utilized to develop a varying signal by scanning it uniformly
with an electron beam.
[Figure: the photoemission process and the photoconduction process.]
Typical Internal view of Image Orthicon TV Camera

This tube makes use of the high photoemissive sensitivity obtainable from
photocathodes, image multiplication at the target caused by secondary emission
and an electron multiplier.
Cross-section of Vidicon Camera Tube
The Vidicon came into general use in the early 1950s and gained immediate
popularity because of its small size and ease of operation. It functions on
the principle of photoconductivity.

[Figure: cross-section of the Vidicon and the video-signal capturing system.]
Cross-section of Plumbicon Camera Tube
This camera tube has overcome many of the less favourable features of the
standard vidicon. It has a fast response and produces high quality pictures at
low light levels. Its smaller size and light weight, together with its low-power
operating characteristics, make it an ideal tube for transistorized television
cameras.
Silicon diode array Vidicon
This is another variation of vidicon where the
target is prepared from a thin n-type silicon
wafer instead of deposited layers on the glass
faceplate. The final result is an array of silicon
photodiodes for the target plate.

The resulting p-n photodiodes are about 8 µm in diameter. The silicon
target plate thus formed is typically 0.003 cm thick and 1.5 cm square,
having an array of 540 × 540 photodiodes. This target plate is mounted in
a vidicon type of camera tube.

The vidicon employing such a multidiode silicon target is less susceptible
to damage or burns due to excessive highlights. It also has low lag time
and high sensitivity to visible light, which can be extended to the
infrared region. The trade name of this vidicon is ‘Epicon’. Such camera
tubes have wide applications in industrial, educational and CCTV services.
Solid state image scanner
CCD working principles

The operation of solid state image scanners is based on the functioning of charge
coupled devices (CCDs) which is a new concept in metal-oxide-semiconductor (MOS)
circuitry. The CCD may be thought of to be a shift register formed by a string of very
closely spaced MOS capacitors. It can store and transfer analog charge signals—
either electrons or holes—that may be introduced electrically or optically.
TV Camera Tubes
Content:
 Types of TV camera tubes
 Principle of video signal capturing
 Internal structure of TV camera
 Principle of solid state image scanner (CCD devices)
 CCD readout techniques
TV Camera Tubes
A TV camera tube may be called the eye of a TV system. A camera tube must
have the following performance characteristics:
(i) sensitivity to visible light,
(ii) wide dynamic range with respect to light intensity, and
(iii) ability to resolve details while viewing a multielement scene.

opti cal-electrical conversion has the following limiting factors:


(i) poor sensitivity,
(ii) poor resolution,
(iii) high noise level,
(iv) undesirable spectral response,
(v) instability,
(vi) poor contrast range and
(vii) difficulties of processing.

However, during the past fifty years or so it has become possible to develop
camera tubes which deliver output even where our eyes see complete darkness.
Spectral response has been so perfected that pick-up outside the visible range (in
the infra-red and ultraviolet regions) has become possible. In fact, there is now a
tube available for almost any special application.
TV Camera Tubes
Photoelectric Effects
The two photoelectric effects used for converting variations of light intensity
into electrical variations are:
(i) photoemission and (ii) photoconduction

Photoemission:
Certain metals emit electrons when light falls on their surface. Emitted electrons
are called photoelectrons and the emitting surface a photocathode.

When light falls on the metal surface, electrons are emitted if the photon energy
is greater than the metal's work function; the number of electrons which can
overcome the potential barrier and get emitted depends on the light intensity.

Alkali metals are used as photocathodes because they have a very low work
function. Cesium-silver or bismuth-silver-cesium oxides are preferred as
photoemissive surfaces.
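The photoemission threshold above can be checked numerically. A minimal sketch follows; the value of hc (about 1240 eV·nm) and the roughly 2.1 eV work function of cesium are standard physical constants, not figures from these slides:

```python
# Photoemission threshold: electrons are emitted only if the photon
# energy h*f exceeds the work function W of the photocathode.
# Cutoff wavelength (nm) = h*c / W, i.e. about 1240 / W(eV).

H_C_EV_NM = 1239.84  # h*c in eV*nm

def cutoff_wavelength_nm(work_function_ev):
    """Longest wavelength that can still cause photoemission."""
    return H_C_EV_NM / work_function_ev

def can_emit(wavelength_nm, work_function_ev):
    """True if light of this wavelength can eject photoelectrons."""
    photon_energy_ev = H_C_EV_NM / wavelength_nm
    return photon_energy_ev > work_function_ev

# Cesium (work function ~2.1 eV) has its cutoff in the visible region:
print(round(cutoff_wavelength_nm(2.1)))   # ~590 nm
print(can_emit(450, 2.1))                 # blue light: True
print(can_emit(700, 2.1))                 # deep red: False
```

This is why low-work-function alkali surfaces are chosen: they respond across most of the visible spectrum.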
Photoconduction:
The conductivity of the photosensitive surface depends on the intensity of the
light focused on it. In general, semiconductors such as selenium, tellurium and
lead, together with their oxides, exhibit this property, which is known as
photoconductivity.

Picture Reception by photoemission process

In tubes employing photoemissive target plates, the electron beam deposits some
charge on the target plate, which is proportional to the light intensity variations
in the scene being televised.

The beam motion is controlled by electric and magnetic fields; the beam reaches the
target and lands on it with almost zero velocity to avoid any secondary emission.
As the beam scans the target plate, a current flows whose magnitude depends on the
resistance of the material at each point and therefore represents the brightness
variations of the picture. This current is finally made to flow through a load
resistance, and the instantaneous voltage developed across this resistance
constitutes the video signal.
Picture Reception by photoconduction process

In camera tubes employing photoconductive cathodes, the scanning electron beam
causes a flow of current through the photoconductive material. The amplitude of
this current varies in accordance with the resistance offered by the surface at
different points. Since the conductivity of the material varies in accordance with
the light falling on it, the magnitude of the current represents the brightness
variations of the scene. This varying current completes its path, under the
influence of an applied dc voltage, through a load resistance connected in series
with the path of the current. The instantaneous voltage developed across the load
resistance is the video signal which, after due amplification and processing, is
amplitude modulated and transmitted.
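The photoconductive signal chain described above (light lowers the surface resistance, a dc supply drives a current, and the load-resistor voltage is the video signal) can be sketched as follows. All component values here are hypothetical, chosen only to illustrate the proportionality, not taken from any real tube:

```python
# Minimal sketch of photoconductive video-signal formation.
# Brightness (0..1) -> target resistance -> series current -> load voltage.

V_SUPPLY = 40.0      # applied dc voltage (V), illustrative
R_LOAD = 50e3        # load resistance (ohms), illustrative
R_DARK = 20e6        # target resistance in darkness (ohms), illustrative

def target_resistance(brightness):
    """Conductivity rises with light: assume a 100x resistance swing."""
    return R_DARK / (1.0 + 99.0 * brightness)

def video_voltage(brightness):
    """Instantaneous voltage across the load resistance."""
    i = V_SUPPLY / (target_resistance(brightness) + R_LOAD)
    return i * R_LOAD

# One scanned line: dark -> grey -> bright
line = [0.0, 0.5, 1.0]
print([round(video_voltage(b), 3) for b in line])
```

Brighter points yield lower resistance, larger current, and hence a larger voltage across the load: exactly the brightness-to-voltage mapping the slide describes.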
Typical Internal view of Image Orthicon TV Camera

This tube makes use of the high photoemissive sensitivity obtainable from
photocathodes, image multiplication at the target caused by secondary emission
and an electron multiplier.
Cross-section of Vidicon Camera Tube
The Vidicon came into general use in the early 50's and gained immediate
popularity because of its small size and ease of operation. It functions on
the principle of photoconductivity.

Cross-section of the video signal capturing system
Cross-section of Plumbicon Camera Tube
This picture tube has overcome many of the less favourable features of standard
vidicon. It has fast response and produces high quality pictures at low light levels.
Its smaller size and light weight, together with low-power operating
characteristics, makes it an ideal tube for transistorized television cameras.
Silicon diode array Vidicon
This is another variation of vidicon where the
target is prepared from a thin n-type silicon
wafer instead of deposited layers on the glass
faceplate. The final result is an array of silicon
photodiodes for the target plate.

The resulting p-n photodiodes are about 8 µm in diameter. The silicon target
plate thus formed is typically 0.003 cm thick and 1.5 cm square, having an
array of 540 × 540 photodiodes. This target plate is mounted in a vidicon type
of camera tube.

The vidicon employing such a multidiode silicon target is less susceptible to
damage or burns due to excessive highlights. It also has low lag time and high
sensitivity to visible light, which can be extended to the infrared region. The
trade name of this Vidicon is 'Epicon'. Such camera tubes have wide applications
in industrial, educational and CCTV services.
Solid state image scanner
History of the Charge-Coupled Device (CCD)
The operation of solid state image scanners is based on the functioning of charge
coupled devices (CCDs), a concept in metal-oxide-semiconductor (MOS) circuitry.
The CCD may be thought of as a shift register formed by a string of very closely
spaced MOS capacitors. It can store and transfer analog charge signals, either
electrons or holes, that may be introduced electrically or optically.
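The shift-register picture of a CCD can be illustrated with a toy bucket-brigade model: each MOS capacitor hands its charge packet to its neighbour on every clock. The 0.1% of charge left behind per transfer is an assumed, illustrative transfer inefficiency, not a figure from the slides:

```python
# Toy model of one CCD line as a bucket-brigade shift register.

def clock_shift(buckets, eps=0.001):
    """One transfer clock: every packet moves one cell toward the
    output (index 0); a fraction eps of each packet is left behind.
    Returns (new_buckets, charge_delivered_to_output)."""
    out = buckets[0] * (1 - eps)
    new = [0.0] * len(buckets)
    new[0] = buckets[0] * eps
    for i in range(1, len(buckets)):
        new[i - 1] += buckets[i] * (1 - eps)
        new[i] += buckets[i] * eps
    return new, out

# A single optically generated packet of 1000 electrons in cell 3:
cells = [0.0, 0.0, 0.0, 1000.0]
readout = []
for _ in range(len(cells)):
    cells, out = clock_shift(cells)
    readout.append(round(out, 1))
print(readout)  # the packet emerges after four shifts, slightly attenuated
```

The packet arrives at the output node after as many clocks as it was cells away, which is exactly the analog shift-register behaviour the text describes.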
Merits of CCD image sensor

1. Small in size and light in weight.

2. Low power consumption, low working voltage.

3. Stable performance and long operational life; resistant to impact and vibration.

4. High sensitivity, low noise and large dynamic range.

5. Quick response, with self-scanning function, small image distortion, no residual image.

6. Applicable to ultra-large-scale integrated circuits, with high integration of pixels, accurate size, and low cost.
CCD working principles
CCD readout techniques
CCD readout architectures fall into three types: Full Frame, Frame Transfer (a
separate imaging area and storage area), and Interline Transfer; each can be read
out in progressive or interlaced fashion.

 Full frame and frame transfer devices tend to be used for scientific applications.
 Interline transfer devices are used in consumer camcorders and TV systems.
 A frame transfer imager consists of two almost identical arrays, one devoted to image pixels and one for storage.
 An interline transfer array consists of photodiodes separated by vertical transfer registers that are covered by an opaque metal shield.
CCD readout technique
Full Frame Transfer: In area CCDs, pixels accumulating light are organized into
columns. Applying an appropriate voltage to the vertical electrodes shifts the
whole image (all pixels) along the columns one row down. All image rows move to
the next row, and only the bottom-most row moves into the so-called horizontal
register. The horizontal register can then be shifted by the horizontal electrodes
to the output node pixel by pixel and digitized.

Reading an array CCD thus means vertical shifts interlaced with horizontal
register shifts and pixel digitization.

Full frame devices expose their whole area to light, so a mechanical shutter must
cover the chip during readout, else the incoming light smears the image. FF
devices are best suited for astronomy tasks, because they use the maximum area to
collect light; devices with really high QE are always FF devices.
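The full-frame readout order described above (vertical shifts interleaved with horizontal-register shifts) can be sketched as:

```python
# Sketch of full-frame CCD readout order: repeatedly shift the whole
# image one row down (the bottom row enters the horizontal register),
# then clock the horizontal register out pixel by pixel to the output
# node, where each pixel would be digitized.

def full_frame_readout(image):
    """image: list of rows (top row first). Returns pixels in the order
    a full-frame CCD delivers them: bottom row first, then upward."""
    rows = [row[:] for row in image]
    output = []
    while rows:
        horizontal_register = rows.pop()          # vertical shift
        for _ in range(len(horizontal_register)):
            output.append(horizontal_register.pop(0))  # horizontal shift
    return output

image = [[1, 2],
         [3, 4]]
print(full_frame_readout(image))  # [3, 4, 1, 2]
```

Note the bottom row reaches the output first, which is why the whole array must stay shuttered until readout completes.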
CCD readout technique
Frame Transfer (FT): FT devices comprise two areas, one exposed to light (the
Imaging Area, IA) and a second covered by an opaque coating (the Storage Area, SA).

When the exposure finishes, the image is very quickly transferred from the IA to
the SA. The SA can then be relatively slowly digitized without the image being
smeared by incoming light. This feature is sometimes called electronic shuttering.

Limitations:
1) Such shuttering does not allow dark frames to be exposed.
2) Although the SA is shielded from the incoming light, charge can leak into the SA
from the IA during slow digitization when imaging bright objects.
3) The price is high.
CCD readout technique
Interline Transfer (IT): IT devices work similarly to FT devices (they are also
equipped with an electronic shutter), but their storage area is interlaced with the
image area. Only odd columns accumulate light; even columns are covered by opaque
shields. The odd columns are quickly transferred to the even columns at the end of
the exposure; the even columns are then shifted down to the horizontal register and
digitized.

Interlacing of the image and storage columns limits the light-collecting area of
the chip. This negative effect can be partially eliminated by advanced
manufacturing technologies such as microlensing.
CCD readout technique
Interlaced Readout: The television signal consists of interlacing images containing
only half the rows, so-called half-frames. The odd half-frame contains rows 1, 3, 5,
etc.; the even half-frame contains rows 2, 4, 6, etc. Companies producing CCD
sensors followed this convention and created CCD chips for use in TV cameras which
also read only half-frames.

But if only half of the rows were read and the second half dumped, the CCD
sensitivity would decrease by 50%. This is why the classical “TV” CCD sensors
electronically sum (bin) neighboring rows: the odd half-frame begins with the single
1st row, followed by the sum of the 2nd and 3rd rows, then the sum of the 4th and
5th rows, etc. The even half-frame contains the sum of the 1st and 2nd rows,
followed by the sum of the 3rd and 4th rows, etc.

CCDs using this architecture are called interlaced read sensors, as opposed to
sensors capable of reading all pixels at once, called progressive read sensors.
Despite the implementation of micro-lenses, the opaque columns reduce the quantum
efficiency of IT CCDs compared to FF ones.

Interlaced Interline Transfer sensor (even half-frame read)
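The row-summing scheme above can be expressed directly. The helper below is a sketch; the function name and sample values are made up for illustration:

```python
# Interlaced "TV" CCD readout with row binning: the odd half-frame is
# row 1 alone, then rows 2+3, rows 4+5, ...; the even half-frame is
# rows 1+2, 3+4, ... so no accumulated light is discarded.

def half_frames(rows):
    """rows: per-row signal values (row 1 first, even count assumed)."""
    odd = [rows[0]]
    odd += [rows[i] + rows[i + 1] for i in range(1, len(rows) - 1, 2)]
    even = [rows[i] + rows[i + 1] for i in range(0, len(rows) - 1, 2)]
    return odd, even

rows = [10, 20, 30, 40, 50, 60]
odd, even = half_frames(rows)
print(odd)   # [10, 50, 90]   -> row 1, rows 2+3, rows 4+5
print(even)  # [30, 70, 110]  -> rows 1+2, rows 3+4, rows 5+6
```

Every row contributes to one binned line per half-frame, so sensitivity is preserved even though only half the lines are delivered at a time.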
How to obtain a color image?
The colors red (R), green (G), and blue (B) are used to create all the colors. This
is accomplished by grouping repeating patterns of alternating cells, each carrying
one of three different color filters: either red, green, or blue. A cross-sectional
view of a typical CCD cell (pixel) is shown in figure 1, and a typical RGB pixel
layout in figure 2.

Figure 1: Cross-sectional view of a typical CCD cell (pixel). Figure 2: Diagram of
a typical RGB pixel layout.
How to obtain a color image?
As figure 2 (reproduced as Table 1 below) shows, the cells are situated in columns
of alternating colors such that red, green, red, green is in one column and green,
blue, green, blue is in the one next to it, before the column pattern repeats.
Furthermore, the colors can be manipulated as much as is desired: once the CCD
array is read by the hardware in the camera, software in the camera runs it
through a set of algorithms in order to merge the intensity data from the CCD's
pixels into color information, which is then saved in a typical digital format
such as JPG or TIFF. Typically, one pixel in a JPG or TIFF file is composed of
four cells (one red, one blue, and two green) from the CCD array.

Table 1:
R G R G
G B G B
R G R G
G B G B
R G R G
G B G B
How to obtain a color image?
A simplified example of how these colors are combined through their intensities,
and how the cells might charge up for one pixel in a JPG or TIFF file, is as
follows. Let each cell have an intensity value of 0-255 (8 bits), and let one
pixel consist of one red, one blue, and two green cells.

Now take a 1-second exposure of a blue river. At the beginning of the exposure,
each cell and the sensor within it starts out with zero charge. As time increases,
the cells charge up toward a maximum value (maximum intensity = 255; if all cells
have an intensity of 255 the color output is white, if all are zero the output is
black). They charge at different rates because of the filters; in this case, blue
charges faster than green or red.

So after one second there is more blue than red or green: say the red sensor
detected an intensity of 50, the green of 80, and the blue of 150. Once the
intensities of the charges are read off from the sensor, they are registered
inside the software of the camera and merged together to form a single pixel.
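The final merging step can be sketched as follows. Averaging the two green cells is an assumed merging rule (a common choice), since the slides only say the intensities are "merged":

```python
# Sketch of merging one 2x2 CCD cell group (one red, two green, one
# blue cell, each 0-255) into a single output pixel: the two green
# cells are averaged, red and blue are used directly.

def merge_cells(red, green1, green2, blue):
    """Return the (R, G, B) triple stored for one JPG/TIFF pixel."""
    return (red, (green1 + green2) // 2, blue)

# The blue-river example: after 1 s the red cell reads 50, the green
# cells about 80, and the blue cell 150.
print(merge_cells(50, 80, 80, 150))  # (50, 80, 150) -> a bluish pixel
```

Real cameras apply more elaborate demosaicing across neighbouring groups, but the four-cells-to-one-pixel idea is the same.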
Composite video signal

Composite means that the video signal includes several parts.


These parts are:
1) Camera signal corresponding to the desired picture information
2) Synchronizing pulses to synchronize the transmitter and receiver scanning
3) Blanking pulses to make the retrace invisible
These three components are added to produce the composite video signal.
Composite video signal

Composite video signal for three consecutive horizontal lines
Horizontal and vertical blanking pulses in video signal

64 µs

160 µs

The composite video signal contains blanking pulses to make the retrace lines
invisible by raising the signal amplitude to the black level during the time the
scanning circuits produce retraces. All picture information is cut off during
blanking time because of the black level; the retraces are normally produced
within the blanking time.

The horizontal blanking pulses blank out the retrace from right to left in each
horizontal scanning line. The vertical blanking pulses blank out the scanning
lines produced when the electron beam retraces vertically from bottom to top in
each field.
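The levels and timings above can be combined into a toy composite line. The 75%/100% amplitude convention and the 12 µs blanking / 4.7 µs sync timings are common 625-line figures used here only as illustrative assumptions, not an exact standard:

```python
# Illustrative composite video line: 64 us per line, ~12 us horizontal
# blanking, ~4.7 us sync pulse at its start. Amplitudes follow the
# common convention blanking/black = 75% and sync tip = 100% of peak,
# with the picture between ~12.5% (white) and 75% (black).

SYNC, BLACK, WHITE = 100.0, 75.0, 12.5

def composite_line(camera_samples, blank_us=12.0, sync_us=4.7,
                   line_us=64.0, step_us=1.0):
    """Build one line as a list of 1-us amplitude samples."""
    line = []
    t = 0.0
    n = len(camera_samples)
    active_us = line_us - blank_us
    while t < line_us - 1e-9:
        if t < sync_us:
            line.append(SYNC)        # sync pulse, "blacker than black"
        elif t < blank_us:
            line.append(BLACK)       # blanking held at black level
        else:
            # map 0..1 camera brightness into the black..white range
            idx = min(int((t - blank_us) / active_us * n), n - 1)
            line.append(BLACK - camera_samples[idx] * (BLACK - WHITE))
        t += step_us
    return line

line = composite_line([0.0, 1.0])    # left half black, right half white
print(len(line), line[0], line[5], line[-1])
```

Raising blanking to the black level is exactly what hides the retrace: any beam motion during those samples paints nothing visible.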
Large-screen Television
Vision Technology
Content:
 Types of TV display system
 Liquid crystals
 Polarization of light
 Light modulation through polarization control
 Modulation and display devices (Digital Light Processing)
Type of display
Monochrome picture tube
Color signal generation

Liquid Crystals
Liquid crystals are one of the most fascinating material systems in nature,
having properties of liquids (such as low viscosity and the ability to conform to
the shape of a container) as well as of solid crystals. Their ability to modulate
light under an applied electrical signal has made them invaluable in flat panel
display technology.
Liquid crystals are anisotropic, similar to solid crystals, because of the ordered
way in which some of the constituent molecules are arranged. However, liquid
crystals have low viscosity and can flow. Liquid crystals are essentially a stable
phase of matter, called the mesophase, existing between the solid and the liquid.
An essentially unlimited number of liquid crystals can be formed. The crystal is
made up of organic molecules which are rod-like in shape, with a length of about
20-100 Å.
Type of liquid crystals
The orientation of the rod-like molecules defines the "director" of the liquid
crystal. The different arrangements of these rod-like molecules lead to three main
categories of liquid crystals:

 Smectic: molecules within a layer are ordered; long-range orientational order is present, with well-defined order within layers.
 Nematic: no well-defined layer order, but long-range orientational order is present.
 Cholesteric: well-defined order within layers, with a long-range "twist" between the molecules of successive layers.
Basic information of LCD

TFT LCD
Large-screen Television
Vision Technology
Content:
 Polarization of light
 Light modulation through polarization control
 Modulation and display devices
 Opto-electric effect
 Issues of pixel addressing
Unpolarized light
Light is a transverse wave; it can vibrate in a variety of directions compared to
its direction of motion. In unpolarized light, the fluctuations in the electric
field occur randomly in all directions perpendicular to the direction of wave
motion. Most of the light we see is unpolarized.
Polarized light
Because light is a transverse wave, it can be polarized; longitudinal waves cannot
be polarized. In polarized light, the electric field oscillates in only one
direction, perpendicular to the direction of wave motion.
Polarizers
Unpolarized, random light can be made polarized with the aid of a polarizing
filter:

Unpolarized light → polarizer → polarized light

The polarizing filter acts like a gate that allows through only one direction of
oscillation.
Pairs of polarizers

Unpolarized light → 1st polarizing filter → polarized light → 2nd polarizing
filter → polarized light with reduced intensity

The end result is polarized light of a particular reduced intensity.

Intensity adjusting polarizer


 With the pair of polarizing filters at a 0 degree angle with each other, a maximum
amount of light emerges.
 With the pair of polarizing filters at a 90 degree angle with each other, a
minimum amount of light emerges, virtually 0 intensity.
 By adjusting the angle between the direction of the two filters, the intensity of
the light can be controlled.
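The angle dependence described in the bullets above is Malus's law, I = I0·cos²θ, for an ideal polarizer pair; a minimal numeric check:

```python
import math

# Malus's law: an ideal second polarizer at angle theta to the light's
# polarization direction transmits I = I0 * cos^2(theta).

def transmitted_intensity(i0, theta_deg):
    """Intensity after an ideal polarizer at theta_deg to the light."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_intensity(1.0, 0))              # 1.0 -> maximum
print(round(transmitted_intensity(1.0, 45), 3))   # 0.5
print(round(transmitted_intensity(1.0, 90), 12))  # 0.0 -> extinction
```

Sweeping θ from 0° to 90° gives continuous control of the output intensity, which is the basis of the modulation schemes that follow.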
Light modulation through polarization control
One of the most useful techniques for modulating an optical signal is through the
use of polarizers and an active device that can change the polarization of light.
The general approach is illustrated in the figure below. In this particular
geometry (other geometries are also possible), two polarizers aligned in the
cross-polarized configuration are placed on each side of the device. The device
consists of a crystal (or liquid crystal) in which the two refractive indices nre
and nro are different. It is also possible to alter the difference between nre and
nro by an external perturbation; this can be done by applying an electric field
and utilizing an effect called the electro-optic effect.
Light modulation through polarization control

Let us assume that linearly polarized light is incident on the crystal and that
the x-axis and the y-axis represent the two polarization axes of the crystal. In
general, the two directions have different refractive indices and, as the wave
propagates, a phase difference develops between the two polarizations. Consider
an input signal that is linearly polarized and given by

E_in = (E0/√2)(x̂ + ŷ) e^(iωt)

After transmission through the modulator, the wave emerges with a general
polarization given by

E_out = (E0/√2)(x̂ e^(iφ1) + ŷ e^(iφ2)) e^(iωt)

with the phase difference Δφ = φ1 - φ2. If Δφ is π/2, the output beam is
circularly polarized, and if it is π, it is linearly polarized at 90° with respect
to the input beam. If the output beam passes through a polarizer at 90° with
respect to the input beam's polarizer, the modulation ratio is

I_out / I_in = sin²(Δφ/2)

Thus, if Δφ can be controlled by an electric field, the intensity can be
modulated.
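Assuming the crossed-polarizer transmission sin²(Δφ/2) with Δφ = 2π·Δn·d/λ for a cell of thickness d, the modulation can be computed numerically; the cell thickness and birefringence values below are illustrative assumptions:

```python
import math

# Sketch of the electro-optic intensity modulator: a phase difference
# dphi = 2*pi*(n_e - n_o)*d/lambda accumulates between the two
# polarization components; behind a crossed output polarizer the
# transmitted fraction is sin^2(dphi/2).

def phase_difference(delta_n, thickness_m, wavelength_m):
    return 2 * math.pi * delta_n * thickness_m / wavelength_m

def crossed_polarizer_transmission(dphi):
    return math.sin(dphi / 2) ** 2

lam = 550e-9                              # green light
for delta_n in (0.0, 2.75e-4, 5.5e-4):    # birefringence set by the field
    dphi = phase_difference(delta_n, 0.5e-3, lam)  # 0.5 mm cell, assumed
    print(round(crossed_polarizer_transmission(dphi), 3))
```

With these numbers the transmission sweeps from 0 (Δφ = 0) through 0.5 (Δφ = π/2) to 1 (Δφ = π), showing how a field-controlled Δn modulates the intensity.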
Polarization and modulation properties of a twisted nematic

An important and accurate approximation used to describe how light propagates
(i.e., how the polarization changes) through a twisted nematic crystal is called
the adiabatic approximation. It depends upon the fact that the twist in the
crystal is "slowly varying." This is a good approximation for liquid crystals,
since a twist of π/2 is produced over several microns (say ~10-20 µm). As a
result, the light responds according to the local refractive indices and the local
polarization axes. Thus, if light enters the crystal along the "slow polarization"
direction, it remains along this polarization as it travels down the liquid
crystal.

The polarization in a twisted nematic liquid crystal changes in two ways: i) a
phase difference develops between the two rays, and ii) the polarization is
rotated due to the twist in the crystal.
Twisted liquid crystals

In a uniaxial crystal, light propagates along the optic axis (the c-axis) with the
same speed regardless of its polarization. Liquid crystal display devices depend
upon the ability to change the c-axis (also known as the director of the liquid
crystal) by an external perturbation such as an applied field. Now consider the
following situations: a) the c-axis is parallel to the input polarizer (the
refractive index is nre for light polarized parallel to the c-axis); in this case
the value of Δφ is maximum, and the transmittance for the case where the output
polarizer is parallel to the input polarizer is minimum; b) an applied external
perturbation forces the c-axis to be oriented along the propagation direction, so
that there is no propagation delay for light polarized in different orientations;
when this happens, the liquid crystal becomes transparent, since light simply
propagates at its original polarization. This can also be seen by putting Δφ = 0.

Here Δφ is the phase difference produced by the difference in the values of nre
and nro for a device of thickness d. If Δφ is much larger than the twist angle,
the transmittance T approaches zero, which is the case where the adiabatic
approximation is valid.
Electro-optic effect in liquid crystals
In solid crystals, the effect of the electric field is to alter the anisotropy between
nro and nre so that the phase difference and hence polarization of the
optical signal can be altered. The electric field causes this change by slightly
altering the electron distribution at each atom on the crystal. There is no physical
distortion or reorientation of the atoms since the force created by the electric field
is too small to cause movement of atoms.
Unlike solid crystals, liquid crystals are not very rigid. They are characterized
by force constants that are quite small and, as a result, a relatively low
electric field can cause realignment of the molecules. This allows the optic axis
of the liquid crystal to be altered, and is the basis of all modern LCDs.
Distortion in liquid crystal
There are three main types of distortions that can be produced in a nematic
liquid crystal:
i) Splay, where a force causes the ends of the rod-like molecules to fan apart
ii) Twist, which is produced by causing a rotation in the alignment of the molecules
iii) Bend, where the crystal is distorted so that a bend is produced in the rod-like
molecules.
The elastic constants defining the energy per unit length needed to create these
distortions are denoted by K1, K2, and K3, respectively. Typical values of these
elastic constants are in the range of 10⁻⁵ to 10⁻⁷ dyne.
Orientation of liquid crystal cell
To exploit the ability of the field to alter the optic axis, several possible
configurations of the liquid crystal cells can be used.
i) when light is travelling along the optic axis, there is no change in the polarization
due to the changes in nro and nre, since for this propagation, the two are equal.
ii) when light is propagating in a direction perpendicular to the optic axis, the
difference in nre and nro can alter the polarization of light as it travels. In
particular, the polarization changes by 90° if the cell thickness is chosen
appropriately.
iii) when light is propagating in a crystal whose optic axis is slowly twisting, the
polarization follows the twist in the crystal.
Threshold voltages needed to change the optic axis
A threshold can be defined above which the torque due to the electric field is
large enough to overcome the restoring elastic torque. The threshold voltage is
given by

Parallel orientation: Vth = π [K1/(ε0 Δε)]^(1/2)
Perpendicular orientation: Vth = π [K3/(ε0 Δε)]^(1/2)
Twisted orientation: Vth = π [(K1 + (K3 − 2K2)/4)/(ε0 Δε)]^(1/2)
The threshold voltages discussed above do not produce an abrupt change in the
optic axis from one state to another. The change is non-linear, but not entirely
abrupt. Also, it must be kept in mind that even in the transparent state there is
considerable absorption in the liquid crystal.
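The parallel-orientation threshold can be evaluated numerically; the elastic constant and dielectric anisotropy used below are assumed typical values, not figures from the notes:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def freedericksz_threshold(K, delta_eps):
    """Threshold voltage for field-induced reorientation (parallel orientation)."""
    return math.pi * math.sqrt(K / (EPS0 * delta_eps))

# Assumed values: K1 ~ 1e-11 N (i.e. 1e-6 dyne), dielectric anisotropy ~ 10
vth = freedericksz_threshold(1e-11, 10.0)
print(f"Vth = {vth:.2f} V")  # on the order of 1 V
```

A threshold on the order of one volt is why liquid crystal cells can be driven directly by low-voltage CMOS circuitry.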
Examples 9.1 and 9.2, Jasprit Singh
Transmittance of liquid crystal
In an ideal liquid crystal, the transmittance of the liquid crystal cell should
change abruptly, as shown by the broken curve. However, in real crystals this is
not the case, since the crystal twist is relieved gradually and the transmittance
change is therefore soft, as shown.
Transmittance of STN liquid crystal
A most important effect of the STN display cell is that when a potential is
applied, the changeover from the 270° twist to no twist is very abrupt. As a
result, the transmittance-voltage curves are also extremely sharp. This allows
one to go from the low-transmittance state to the high-transmittance state and
vice versa with a very small change in applied voltage. At present, a wide
variety of STN crystals are in use, with twists ranging from 180° to 270°. The
key attraction of all these structures is the extremely sharp transmittance-voltage
curve.
Challenges in scaling to a display screen
 LCDs have potential application in large displays, where millions of liquid
crystal cells are used.
 The important challenges are physics-related and processing-related.
 For a large matrix array, the key challenge is addressing the individual
pixels.

The pixel addressing challenge:

To be competitive with CRT technology, LCD technology has to offer
comparable resolution and picture quality. It should offer the
capability of color display as well as gray-scale display.

This requires one to have millions of pixels on the display. Addressing this
large number of pixels is the key challenge in presenting a flicker-free image
to the human eye. All the pixel elements must be addressed and refreshed, say,
30 times a second, to present a continuous image to the eye. In the case of a
liquid crystal, the voltage level must be maintained between the two plates
enclosing the liquid crystal.
Addressing of pixel in LCD system
 Brute force approach: Individual pixel addressing
When array size increases beyond a few hundred pixels, individual pixel
addressing is not possible.
 Multiplexed/Matrix addressing approach: Place the elements on a matrix
grid and address each pixel one by one by applying appropriate voltage
sequences to the rows and columns.
Matrix addressing approach
 Pixel transmittance responds to the difference between row and column
voltages.
 A strobe signal Vs is applied to the rows.
 Information signal +VD (for OFF) or −VD (for ON) is applied to the
columns.
 Unselected pixels are maintained at a voltage VD.
Static transmission voltage curve of a LC cell
The parameter ΔV describes the non-linear nature of the T-V curve; ΔV/Vth has to
be very small if a large-sized matrix is to be addressed.
Matrix addressing approach
The device response is determined by the rms value of the voltage pulses. Over a
time period T spanning N row times, the rms values of the voltage pulses for the
OFF and ON states are

Voff(rms) = {[(Vs − VD)² + (N − 1)VD²]/N}^(1/2)
Von(rms) = {[(Vs + VD)² + (N − 1)VD²]/N}^(1/2)
Matrix addressing approach
VD is very close to the threshold voltage Vth, but as N increases, the value of Vs
increases rapidly. The value of Nmax depends critically on the ratio ΔV/Vth.
Obviously, for a large value of Nmax, one needs a small ΔV/Vth value.

It is quite difficult to increase the value of Nmax if ΔV/Vth has a value of,
say, 0.1, which is typical of twisted nematic crystals with a 90° twist.
However, supertwisted nematics have a very small value of ΔV/Vth, and it is
possible to increase N to approach several hundred.
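The rms analysis above is the standard result for passively multiplexed displays (often called the Alt-Pleshko analysis): with the optimum strobe Vs = √N·VD, the achievable ON/OFF rms ratio shrinks toward 1 as the number of rows N grows, which is what limits Nmax. A short sketch:

```python
import math

def rms_on_off(N, Vs, Vd):
    """rms voltages seen by ON and OFF pixels over one frame of N row times."""
    von = math.sqrt(((Vs + Vd) ** 2 + (N - 1) * Vd ** 2) / N)
    voff = math.sqrt(((Vs - Vd) ** 2 + (N - 1) * Vd ** 2) / N)
    return von, voff

def best_ratio(N):
    """Maximum ON/OFF rms ratio, obtained at Vs = sqrt(N) * Vd."""
    von, voff = rms_on_off(N, math.sqrt(N), 1.0)
    return von / voff

# The ratio decays toward 1 as the row count N grows
for N in (10, 100, 400):
    print(N, round(best_ratio(N), 4))
```

For N = 100 the best ratio is only about 1.11, so a very sharp (small ΔV/Vth) transmittance curve is essential for large arrays.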
CCD readout techniques
Full Frame Transfer: Pixels accumulating light are organized into columns in area
CCDs. Applying an appropriate voltage to the vertical electrodes shifts the whole
image (all pixels) one row down along the columns. This means all image rows move
to the next row; only the bottom-most row moves into the so-called horizontal
register. The horizontal register can then be shifted by the horizontal electrodes
to the output node, pixel by pixel. Reading an array CCD thus means vertical
shifts interlaced with horizontal-register shifts and pixel digitization.
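The shifting sequence just described can be mimicked with a toy array (an illustration only, not real CCD driving code): each vertical shift moves every row down by one, the bottom row drops into the horizontal register, and the register is then clocked out pixel by pixel:

```python
def read_full_frame(image):
    """Read a 2D pixel array the way a full-frame CCD is clocked out.

    image: list of rows, row 0 at the top; returns pixels in readout order.
    """
    rows = [row[:] for row in image]  # work on a copy
    out = []
    while rows:
        horizontal_register = rows.pop()            # bottom row shifts into the register
        while horizontal_register:
            out.append(horizontal_register.pop(0))  # shift register to the output node
    return out

pixels = read_full_frame([[1, 2], [3, 4]])
print(pixels)  # bottom row first: [3, 4, 1, 2]
```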
Full frame devices expose all of their area to light. It is necessary to use a
mechanical shutter to cover the chip from incoming light during the readout
process, else the incoming light can smear the image. FF devices are best suited
for astronomy tasks, because they use the maximum area to collect light. Devices
with really high QE are always FF devices.

Kodak full frame CCDs
CCD readout technique
Frame Transfer (FT): FT devices comprise two areas, one exposed to light (Imaging
Area, IA) and a second covered by an opaque coating (Storage Area, SA). When the
exposure finishes, the image is very quickly transferred from the IA to the SA. The
SA can then be relatively slowly digitized without smearing the image by incoming
light. This feature is sometimes called electronic shuttering.
Limitations:
1) This kind of shuttering does not allow one to expose dark frames.
2) Although the SA is shielded from the incoming light, charge can leak into the SA
from the IA during the slow digitization when imaging bright objects.
3) An important negative side of FT devices is their price.
CCD readout technique
Interline Transfer (IT): IT devices work similarly to FT devices (they are also
equipped with an electronic shutter), but their storage area is interlaced with the
image area. Only odd columns accumulate light; even columns are covered by opaque
shields. The odd columns are quickly transferred to the even columns at the end of
the exposure; the even columns are then shifted down to the horizontal register and
digitized.

Progressive interline transfer

Interlacing of the image and storage columns limits the light-collecting area of
the chip. This negative effect can be partially eliminated by advanced
manufacturing technologies (like microlensing).
CCD readout technique
Interlaced Readout: The television signal consists of interlacing images containing
only half of the rows, so-called half-frames. The odd half-frame contains rows 1, 3,
5, etc.; the even half-frame contains rows 2, 4, 6, etc. Companies producing CCD
sensors followed this convention and created CCD chips for use in TV cameras, which
also read only half-frames.
But if only half of the rows were read and the second half dumped, the CCD
sensitivity would decrease by 50%. This is why the classical “TV” CCD sensors
electronically sum (see Pixel binning) neighboring rows, so that the odd half-frame
begins with the single 1st row, followed by the sum of the 2nd and 3rd rows, then by
the sum of the 4th and 5th rows, etc. The even half-frame contains the sum of the
1st and 2nd rows, followed by the sum of the 3rd and 4th rows, etc.
CCDs using this architecture are called interlaced read sensors, as opposed to
sensors capable of reading all pixels at once, called progressive read sensors.
Despite the implementation of micro-lenses, the opaque columns reduce the quantum
efficiency of IT CCDs compared to FF ones.
Frame read CCDs: each two pixels in adjacent rows share one pixel in the opaque
column. Individual rows are not summed during frame read, but odd and even
half-frames are read sequentially. (Figure: Interlaced Interline Transfer sensor,
even half-frame read)
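The half-frame row summing described above can be sketched directly (illustrative values only):

```python
def odd_half_frame(rows):
    """Odd half-frame: row 1 alone, then sums of rows (2,3), (4,5), ..."""
    half = [rows[0]]
    for i in range(1, len(rows) - 1, 2):
        half.append(rows[i] + rows[i + 1])
    return half

def even_half_frame(rows):
    """Even half-frame: sums of rows (1,2), (3,4), ..."""
    return [rows[i] + rows[i + 1] for i in range(0, len(rows) - 1, 2)]

rows = [10, 20, 30, 40, 50, 60]  # per-row signal of a 6-row sensor
print(odd_half_frame(rows))   # [10, 50, 90]  (row 1, rows 2+3, rows 4+5)
print(even_half_frame(rows))  # [30, 70, 110] (rows 1+2, rows 3+4, rows 5+6)
```

Summing adjacent rows preserves the full light sensitivity while still delivering only half the rows per field, exactly as the TV convention requires.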
How to obtain a color image?
The colors red, green, and blue are used to create all the colors. This can be
accomplished by grouping repeating patterns of alternating cells. Each one of
these cells has one of three different color filters on it: either red, green, or
blue. A diagram of a typical CCD pixel can be seen in figure 1, and a typical RGB
CCD layout can be seen in figure 2.

Figure 1: Cross-sectional view of a typical CCD cell (pixel). Figure 2: Diagram
of a typical RGB pixel layout.
How to obtain a color image?
As can be seen from figure 2, the cells are situated in columns of alternating
colors, such that red, green, red, green is in one column and blue, green, blue,
green is in the one next to it, before the column patterns are repeated.
Furthermore, the colors can be manipulated as much as is desired to make the
colors appear correct: once the CCD array is read by the hardware in the camera,
software in the camera runs it through a set of algorithms in order to merge the
intensity data from the CCD's pixels into color information that is then saved in
a typical digital format, such as JPG or TIFF. Typically, one pixel in a JPG or
TIFF file is composed of four cells (one red, one blue, and two green) from a CCD
array.
How to obtain a color image?
A simplified example of how these colors are combined through their intensities
and how the cells might charge up for one pixel in a JPG or TIFF file is as
follows: First, let's say each cell can have an intensity value of 0 - 255 (8 bits).
Also, one pixel, as previously stated, has one red, one blue, and two green cells.
Now, let's take a 1 second exposure of a blue river. At the beginning of the
exposure, each cell and sensor within it will start out with zero charge in its
bucket. As time increases, they will begin to charge up toward a maximum value
(maximum intensity = 255; if all cells have an intensity of 255, the color output
is white, and if all are zero, the output is black). However, they will charge up
at different rates because of the filters (in this case, blue will charge faster
than green or red). The charge-versus-time graphs for each color would look
something like figure 3 below. So after one second, there is more blue than red
or green. For instance, after one second, the red sensor detected an intensity of
50, the green 80, and the blue 150. Once the intensities of the charges are read
off from the sensor, the intensity is registered inside the software of the
camera. These intensities are then merged together to form a single pixel.
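Using the illustrative intensities above (red 50, green 80, blue 150), one simple way to merge one red, two green, and one blue cell into a single RGB pixel is to average the two green readings; this scheme is assumed here for illustration and is not necessarily the algorithm a real camera uses:

```python
def merge_cells(red, green1, green2, blue):
    """Combine a 2x2 cell group (R, G, G, B) into one (R, G, B) pixel value."""
    return (red, (green1 + green2) // 2, blue)

pixel = merge_cells(50, 80, 80, 150)
print(pixel)  # (50, 80, 150): a predominantly blue pixel, as expected for a river
```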
Composite video signal
Composite means that the video signal includes several parts. These parts are:
1) Camera signal corresponding to the desired picture information
2) Synchronizing pulses to synchronize the transmitter and receiver scanning
3) Blanking pulses to make the retrace invisible
These three components are added to produce the composite video signal.
Composite video signal
Composite video signal for three consecutive horizontal lines
Horizontal and vertical blanking pulses in video signal
64 µs
1600 µs
The composite video signal contains blanking pulses to make the retrace lines
invisible by raising the signal amplitude to the black level during the time the
scanning circuits produce retraces. All picture information is cut off during the
blanking time because of the black level. The retraces are normally produced
within the blanking time.
The horizontal blanking pulses are included to blank out the retrace from right to
left in each horizontal scanning line. The vertical blanking pulses have the
function of blanking out the scanning lines produced when the electron beam
retraces vertically from bottom to top in each field.
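The 64 µs line period shown in the diagram follows from the scanning rates of a 625-line, 25 frame-per-second system (assumed here): 625 × 25 = 15,625 lines per second:

```python
lines_per_frame = 625    # assumed 625-line system
frames_per_second = 25   # assumed 25 fps

line_frequency = lines_per_frame * frames_per_second  # lines scanned per second
line_period_us = 1e6 / line_frequency                 # duration of one line, in us

print(line_frequency, "Hz")  # 15625 Hz
print(line_period_us, "us")  # 64.0 us
```

The horizontal blanking interval occupies a fraction of each 64 µs line; the vertical blanking interval spans many whole lines per field.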
Large-screen Television
Vision Technology
Content:
 Field Emission display
 HDTV
 3D TV
Type of LCD
 Twisted nematic (TN)
 Supertwisted nematic (STN)
 Film-compensated supertwisted nematic (FSTN)

TN displays are suitable for calculators, simple electronic organizers, and
any other numerical displays. STN displays are suitable for mono-colour word
processors. Lastly, FSTN displays can produce black/white or full colour, are
thin and lightweight, can handle large capacity, have high contrast, and can
respond quickly to changes. FSTN displays are suitable for word processors and
low-end colour displays.
Passive matrix display
In passive matrix liquid crystal displays, row and column signals are used to input
information into the display matrix. There are no non-linear elements (switches) at
the individual cells, so the signal is applied to each pixel for only a small
fraction of the refresh cycle. A typical passive matrix display is shown in the
figure below.

Passive Matrix: a simple grid of conductors is used to charge a particular pixel on
the display. Slow response time and imprecise voltage control.
Active matrix display
In an active matrix display, a switching device is used at each pixel. This switch
allows the signal voltage to be applied to the liquid crystal cell for the entire
cycle time between refreshes. This leads to better overall performance and, most
importantly, allows one to use a 90° twist in the liquid crystal.
Pixel addressing
Segmented displays are driven by individual wire connections. Each segment has
its own connection and can be turned on or off by applying a voltage.

Multiplexed passive screens were the solution to creating larger LCDs. In a
ten-by-ten array of pixels, one hundred separate connections would be needed to
address all of them. If the lines are multiplexed, then only 20 connections are
needed (one for each row and column).

If the number of multiplexed lines is increased, the contrast ratio decreases.
This is because the ratio of the voltage at a selected point (for example, a
pixel) to that at an unselected point is a decreasing function of the number of
rows being multiplexed.
 Crosstalk occurs
 Contrast reduces
Different elements of LCD
Main parts of LCD:
 Backlight
 Polarizer
 Glass Substrate
 Pixel electrodes (ITO)
 Thin film transistors (TFTs)
 Liquid crystal layer
 Top electrode
 Black matrix
 RGB color filter array
 Glass
 Polarizer
How does a TV screen make its picture?
Each one of the pixels is effectively a separate red, blue, or green light that can be
switched on or off very rapidly to make the moving color picture. The pixels are
controlled in completely different ways in plasma and LCD screens. In
a plasma screen, each pixel is a tiny fluorescent lamp switched on or off
electronically. In an LCD television, the pixels are switched on or off electronically
using liquid crystals to rotate polarized light.
How pixels are switched off:
 Light travels from the back of the TV toward the front from a large bright light.
 A horizontal polarizing filter in front of the light blocks out all light waves except
those vibrating horizontally.
 Only light waves vibrating horizontally can get through.
 A transistor switches off this pixel by switching on the electricity flowing through
its liquid crystal. That makes the crystal straighten out (so it's completely
untwisted), and the light travels straight through it unchanged.
 Light waves emerge from the liquid crystal still vibrating horizontally.
 A vertical polarizing filter in front of the liquid crystal blocks out all light waves
except those vibrating vertically. The horizontally vibrating light that travelled
through the liquid crystal cannot get through the vertical filter.
 No light reaches the screen at this point. In other words, this pixel is dark.
How does a TV screen make its picture?
How pixels are switched ON:
 The bright light at the back of the screen shines as before.
 The horizontal polarizing filter in front of the light blocks out all light waves
except those vibrating horizontally.
 Only light waves vibrating horizontally can get through.
 A transistor switches on this pixel by switching off the electricity flowing through
its liquid crystal. That makes the crystal twist. The twisted crystal rotates light
waves by 90° as they travel through it.
 Light waves that entered the liquid crystal vibrating horizontally emerge from it
vibrating vertically.
 The vertical polarizing filter in front of the liquid crystal blocks out all light waves
except those vibrating vertically. The vertically vibrating light that emerged from
the liquid crystal can now get through the vertical filter.
 The pixel is lit up. A red, blue, or green filter gives the pixel its color.
Alternative Displays
Display technology must evolve to keep pace with advances in other areas of
technology. This evolution in display technology will produce displays that are faster,
brighter, lighter, and more power-efficient. Technologies that have emerged to meet
this challenge are OLEDs, DLP technology, Plasma, FEDs, and Electronic Paper
Organic Light Emitting Diodes (OLEDs):
One of the next trends in display technology is Organic Light Emitting Diodes
(OLEDs). Polymer Light Emitting Diodes (PLEDs), Small Molecule Light Emitting
Diodes (SMOLEDs), and dendrimer technology are all variations of OLEDs, with all
variations being made from electroluminescent substances (substances that emit
light when excited by an electric current).
OLED displays are brighter, offer more contrast, consume less power, and offer
larger viewing angles: all areas where LCDs fall short.
OLED Structure
OLEDs work in a similar way to conventional diodes and
LEDs, but instead of using layers of n-type and p-type
semiconductors, they use organic molecules to produce
their electrons and holes. A simple OLED is made up of six
different layers. On the top and bottom there are layers of
protective glass or plastic. The top layer is called
the seal and the bottom layer the substrate. In between
those layers, there's a negative terminal (sometimes
called the cathode) and a positive terminal (called the
anode). Finally, in between the anode and cathode are two
layers made from organic molecules called the emissive
layer (where the light is produced, which is next to the
cathode) and the conductive layer (next to the anode).
Here's what it all looks like:
How an OLED emits light
How does this sandwich of layers make light?
 To make an OLED light up, we simply attach a voltage (potential difference)
across the anode and cathode.
 As the electricity starts to flow, the cathode receives electrons from the power
source and the anode loses them (or it "receives holes," if you prefer to look at
it that way).
 Now we have a situation where the added electrons are making the emissive layer
negatively charged (similar to the n-type layer in a junction diode), while the
conductive layer is becoming positively charged (similar to p-type material).
 Positive holes are much more mobile than negative electrons, so they jump across
the boundary from the conductive layer to the emissive layer. When a hole (a lack
of an electron) meets an electron, the two cancel out and release a brief burst of
energy in the form of a particle of light: a photon, in other words. This process
is called recombination, and because it happens many times a second, the OLED
produces continuous light for as long as the current keeps flowing.
We can make an OLED produce colored light by adding a colored filter into our
plastic sandwich just beneath the glass or plastic top or bottom layer. If we put
thousands of red, green, and blue OLEDs next to one another and switch them on and
off independently, they work like the pixels in a conventional LCD screen, so we
can produce complex, high-resolution colored pictures.
LED display
 An LED pixel module is made up of 4+ LEDs of RGB.
 LED displays are made up of many such modules.
 Several wires run to each LED module, so there are a lot of wires running
behind the screen.
 Turning on a jumbo screen can use a lot of power.
Plasma display
A plasma screen is similar to an LCD, but each pixel is
effectively a microscopic fluorescent lamp glowing with
plasma. A plasma is a very hot form of gas in which
the atoms have blown apart to make negatively
charged electrons and positively charged ions (atoms
minus their electrons). These move about freely,
producing a fuzzy glow of light whenever they collide.
Plasma screens can be made much bigger than
ordinary cathode-ray tube televisions, but they are also
much more expensive.
Much like the picture in an LCD screen, the picture made by a plasma TV is made
from an array (grid) of red, green, and blue pixels (microscopic dots or squares).
Each pixel can be switched on or off individually by a grid of horizontally and
vertically mounted electrodes (shown as yellow lines).
Suppose we want to activate one of the red pixels (shown hugely magnified in the
light gray pullout circle on the right).
The two electrodes leading to the pixel cell put a high voltage across it, causing
the gas to ionize and emit ultraviolet light (shown here as a turquoise cross,
though it would be invisible in the TV itself).
The ultraviolet light shines through the red phosphor coating on the inside of the
pixel cell. The phosphor coating converts the invisible ultraviolet into visible
red light, making the pixel light up as a single red square.
Advantages & Disadvantages of plasma display
Advantages:
 Every single pixel generates its own light, and as a result viewing angles are
large, approximately 160°;
 image quality is superior, and it is not affected as the display area becomes
larger;
 plasma displays can be built in dimensions nearing 2 m;
 plasma displays are able to provide image quality and display size without the
disadvantage of being bulky and blurry around the edges;
 they can generally be built with a depth of 15-20 cm and as a result can be
mounted or used in space-limited areas.
Disadvantages:
 Due to the fragile nature of plasma screens (they utilize glass panels as a
substrate), professional installation is required.
 PDPs are susceptible to burn-in from static images, and as a result they are not
suitable for billboard-type displays or channels that broadcast the same image
constantly, e.g. news station logos.
 Ionizing the plasma requires a substantial amount of power; consequently, a
38-inch color plasma display can consume up to 700 W (power levels generally used
by appliances such as vacuum cleaners), where the same-sized CRT would only
require 70 W.
 Many other high-quality display technologies can replace plasma displays and
hence may render them obsolete in the future.
Field Emission Displays (FEDs)
Field emission displays (FEDs) function much like CRT technology. Instead of
using one electron gun to emit electrons at the screen, FEDs use millions of
smaller ones. The result is a display that can be as thin as an LCD, reproduce
CRT-quality images, and be as large as a plasma display. Initial attempts at
making emissive flat-panel displays using metal-tipped cathodes occurred nearly
20 years ago; however, owing to reliability, longevity, and manufacturing issues,
these types of FEDs have not proved commercially viable.
Electron emission in FED
The emitted current, or flow of moving electrons, depends on the electric field
strength, the emitting surface, and the work function. In order for field emission
to occur, the electric field has to be extremely high: up to 3 × 10⁷ V/cm. This
value, though large, is accessible because field amplification increases with
decreasing radius of curvature, meaning that the pointier the object, the more
charge concentrates at its tip, and hence the larger the electric field. As a
result, if such a material can be found, a moderate voltage will cause the
tunneling effect and allow electrons to escape into free space without heating the
cathode, unlike traditional Cathode Ray Tube (CRT) technology.

Electric field concentration around a pointy object

The basic structure of the first FED comprised millions of vacuum tubes, called
micro-tips. Each tube was red, green, or blue, and together they formed one pixel.
These micro-tips were sharp cathode points made from molybdenum, from which
electrons, under a voltage difference, would be emitted towards a positively
charged anode where red, blue, and green phosphors were struck, emitting light
through the glass display. Unlike CRTs, color was displayed sequentially, meaning
the display processed all the green information first, then refreshed the screen
with the red information, and finally the blue.
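A back-of-the-envelope estimate (geometry and numbers assumed, not taken from the notes) shows why a sharp tip reaches such fields: near a tip of curvature radius r, the field is roughly E ≈ V/(k·r), with k a geometry factor of order 5, so a ~10 nm tip needs only about 150 V:

```python
def tip_field_v_per_cm(voltage, tip_radius_m, k=5.0):
    """Approximate field at a sharp tip: E ~ V / (k * r), with k a geometry factor."""
    e_v_per_m = voltage / (k * tip_radius_m)
    return e_v_per_m / 100.0  # convert V/m to V/cm

# Assumed numbers: 150 V applied, 10 nm molybdenum micro-tip
print(f"E ~ {tip_field_v_per_cm(150, 10e-9):.1e} V/cm")  # ~3e7 V/cm
```

This is how a moderate drive voltage reaches the ~3 × 10⁷ V/cm needed for field emission.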
Merits and demerits of FED
Merits:
 FEDs only produce light when the pixels are “on”, and as a result power
consumption is dependent on the display content.
 A FED generates light from the front of the pixel, providing an excellent
viewing angle of 160 degrees both vertically and horizontally.
 A FED suffers no brightness loss even if 20% of the emitters fail.
Demerits:
 One problem is that the metal molybdenum, used to make the micro-tips,
would become so heated that local melting would result, deforming the sharp
tips needed to form the electric field used for electron emission.
 Another problem, caused by the electrical environment, is that the hot
cathodes would react with the residual gases in the vacuum, reducing the
field emission even more.
A carbon nanotube structure
CNTs are chemically stable; they react only under extreme conditions, such as
extremely high temperatures (2500 °C) with oxygen or hydrogen. Consequently,
the problems of reacting with residual gases, overheating, and tip deformation
are solved with CNTs.
Display Technology Comparison Chart
High-definition television (HDTV)
High-definition television (HDTV) provides a resolution that is substantially
higher than that of standard-definition television (SDTV). HDTV is a digital
TV broadcasting format where the broadcast transmits widescreen pictures with
more detail and quality than found in a standard analog television, or other digital
television formats.
HDTV may be transmitted in various formats:
• 1080p (1920×1080, progressive): 2,073,600 pixels (~2.07 megapixels) per frame
• 1080i (1920×1080, interlaced): 1,036,800 pixels (~1.04 MP) per field or
2,073,600 pixels (~2.07 MP) per frame
• A non-standard CEA resolution exists in some countries, such as 1440×1080i:
777,600 pixels (~0.78 MP) per field or 1,555,200 pixels (~1.56 MP) per frame
• 720p (1280×720, progressive): 921,600 pixels (~0.92 MP) per frame
The letter "p" here stands for progressive scan, while "i" indicates interlaced.
When transmitted at two megapixels per frame, HDTV provides about five times as
many pixels as SDTV.
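The pixel counts quoted above follow directly from width × height; the "about five times" comparison is against a 720×576 SDTV frame (the SDTV frame size is an assumption here):

```python
formats = {"1080p": (1920, 1080), "1440x1080i": (1440, 1080), "720p": (1280, 720)}

for name, (w, h) in formats.items():
    print(f"{name}: {w * h:,} pixels per frame")

sdtv = 720 * 576  # assumed SDTV frame (576-line system)
print(f"1080-line HDTV vs SDTV: {1920 * 1080 / sdtv:.1f}x")  # 5.0x
```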
HDTV essentially means the picture is much more detailed, a bit wider, and it
doesn't flicker, even when it's shown on really big screens.
How does a 3D TV work?
Our brains generate a 3D picture largely by having two eyes spaced a short
distance apart. Each eye captures a slightly different view of the world in front of it
and, by fusing these two images together, our brains generate a single image that
has real depth. This trick is called stereopsis (or stereoscopic vision).
The basic principle of 3D TV is that two images are shown on the TV screen
simultaneously, one for the right eye and the other for the left eye. When the
viewer sees the two separate images, the brain fuses them and perceives a 3D
image.
There are several different ways of making a 3D TV, but all of them use the same
basic principle: they have to produce two separate, moving images and send one
of them to the viewer's left eye and the other to the right. To give the proper illusion
of 3D, the left eye's image mustn't be seen by the right eye, while the right eye's
image mustn't be seen by the left.
3D technologies
Here's a quick summary of the four most common 3D TV technologies. In these
diagrams, we're looking down on a person's head from above and comparing how
two different images enter their two eyes in each case:
 Anaglyph: We have to wear eye glasses with colored lenses so our brain can
fuse together the partly overlapping red and cyan pictures on the screen.
 Polarizing: We wear lenses that filter light waves in different ways so each eye
sees a different picture.
 Active-shutter: The left and right lenses of our glasses open and close at high
speed, in rapid alternation, to view separate images (frames) shown on the
screen.
 Lenticular: We don't need glasses with this system. Instead, a row of plastic
lenses in front of the screen bends slightly different, side-by-side images so
they travel to your left and right eyes. We must sit in the right place to see a
3D image.
3D TV with active glasses
In a 3D TV with active glasses, the pictures for the right and left eye are shown
one after the other. The active 3D glasses for watching 3D TV have electronic
shutters that blind vision to one eye while the picture meant for the other eye is on
the TV. This process is repeated very fast, at a rate of about 60 TV images per
second. The left and right lenses of the active 3D glasses open and shut in
synchronization with signals emitted by the 3D TV to the glasses.
3D TV with passive glasses
In a 3D TV with passive glasses, there is a polarising screen on the TV which
polarizes the light coming from the TV image. The light from the images on the
passive 3D TV is polarized either horizontally or vertically, and the special
passive 3D glasses have lenses that pass horizontally polarized light on one
side, say the left lens, while the other, right lens can only pass vertically
polarized light. Thus each eye gets the correct 3D picture frame through the
different polarisation given to the passive 3D TV images. Passive 3D glasses are
very reliable, unlike the complicated active 3D glasses, which require battery
power to open and close each lens according to signals received from the 3D TV.
Luminescence

Contents
 What is Luminescence?
 Different types of Luminescence
What is Luminescence?
Initial state: an electron is excited by a photon or an electric field, leaving
the electron in an excited state and generating an electron-hole pair. Final
state: a luminescence mechanism emits a photon through an electron-hole
recombination process.

Emission of light by an electron-hole recombination process is generally
referred to as Luminescence.
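The photon emitted in such a recombination carries roughly the band-gap energy, so its wavelength is λ = hc/Eg. Evaluated for a typical GaAs band gap (Eg = 1.42 eV, the value used elsewhere in these notes), the emission lands in the near infrared:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # electron charge (J per eV)

def emission_wavelength_nm(eg_ev):
    """Wavelength of a photon emitted by band-to-band recombination."""
    return H * C / (eg_ev * Q) * 1e9

print(f"GaAs (1.42 eV): {emission_wavelength_nm(1.42):.0f} nm")  # ~873 nm
```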
Photoluminescence
In luminescence, emission of radiation requires the initial excitation of
electrons. If the electron excitation is due to photon absorption, then the
process is identified as photoluminescence.

Initial state: an electron is excited by photon absorption, generating an
electron-hole pair. Final state: a luminescence mechanism emits a photon through
an electron-hole recombination process.
Fluorescence
The direct electron-hole recombination mechanism generally occurs very quickly,
typically in the range of nanoseconds, so light emission from a semiconductor
stops within nanoseconds after the removal of the excitation. Such luminescence
processes are normally identified as fluorescence.

The emission of light from a fluorescent tube (via its fluorescent coating) is
actually a fluorescence process.
Phosphorescence
There are also materials, called phosphors, from which light emission may
continue for milliseconds to hours after the cessation of excitation. These slow
luminescence processes are normally referred to as phosphorescence.
Cathodoluminescence
It is also possible to excite electrons into the CB by bombarding the material
with a high-energy electron beam. If these electrons recombine with holes and
emit light, the process is called cathodoluminescence.
Electroluminescence
In electroluminescence, an electric current, either ac or dc, is used to excite
or inject electrons into the CB. They then recombine with holes and emit light.
The emission of light in an LED is an example of electroluminescence.
Incandescence
Light emitted from an ordinary light bulb is due to the heating of the metal
filament. The emission of radiation from a heated object is called incandescence.