
VISION FOR INDUSTRIAL AND SERVICE ROBOTS
AN INTRODUCTION

R. SENTHILNATHAN
RESEARCH SCHOLAR
DEPARTMENT OF PRODUCTION TECHNOLOGY
MIT CAMPUS, ANNA UNIVERSITY CHENNAI

Analogies

Profound, Widespread and Global

Mental to Physical Leverage

Mainframes to Industrial Robots

PCs to Service Robots

The Price and Volume curves

Software and Applications

Third Party Applications

Mobile, Personal and Household


Revolution

Agricultural Revolution

Industrial Revolution

Electrification

Transportation

Communication

Computers

Industrial Robots

Service Robots


Industrial Robot

From ISO 8373

An automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications.


Service Robot

Provisional from Working Group

A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.


Perception to Physical Access

How do humans do it?

Locate with eyes

Calculate target with brain

Guide with arm and fingers

How do robots do it?

Locate with camera

Calculate target with software

Guide with robot and grippers


Role of Perception in Robot Manipulation

Where am I relative to the world?
  sensors: vision, stereo, range sensors, acoustics
  problems: scene modeling/classification/recognition
  integration: localization/mapping algorithms (e.g. SLAM)

How can I safely interact with the environment?
  sensors: vision, range, haptics (force + tactile)
  problems: structure/range estimation, modeling, tracking, materials, size, weight, inference
  integration: navigation, manipulation, control, learning

What is around me?
  sensors: vision, stereo, range sensors, acoustics, sounds, smell
  problems: object recognition, qualitative modeling
  integration: collision avoidance/navigation, learning

How can I solve "new" problems (generalization)?
  sensors: vision, range, haptics
  problems: categorization by function/shape/context
  integration: inference, navigation, manipulation, control, learning

Vision for Robots


Vision for Industrial Robots: GIGI

Gauging

Inspection

Guidance

Identification

Vision for Service Robots

Preprocess Environment

Sensor Fusion


Building Blocks in Design of Vision Guided Robots

Lighting

Technique (Frontlight, Backlight)

Source (Fluorescent tubes, Halogen and xenon lamps, LED, Laser)

Optics

Vision Cameras

Type of sensor (CCD, CMOS etc)

Spec. of Camera (Resolution, Frame rate, etc)

Type of Camera (Line Scan, Area Scan, Structured Light, Time of Flight)

Interface (Standalone, Computer Interface)

Software

Robot Types

Camera Mounting (Eye in Hand, Eye to Hand)


Industry vs. Service Sector

Strengths of the Industry Sector

Many Years of Experience

Motors, Speed, Precision

Vision, Force, Torque, Encoders

Vision Frontiers in Service Robots

Uncontrolled Environment and Safety

Sensor Fusion and Reliability


Scene

Parts

Discrete parts or endless material (e.g., paper)

Minimum and maximum dimensions

Changes in shape

Description of the features that have to be extracted

Changes of these features concerning error parts and common product variation

Surface finish

Color

Corrosion, oil films, or adhesives

Changes due to part handling


Contd.

Part Presentation

indexed positioning

continuous movement

If there is more than one part in view, the following topics are important:

number of parts in view

overlapping parts

touching parts


Robot Configuration

Industrial

1/2/3 Axis Cartesian

4-Axis SCARA

6-Axis Articulated

Gantry Type


Industrial Robots - Applications


Camera Mounting

Eye to Hand

Eye in Hand

Applications

2D: Indexed Conveyor, Flexible Feeding, Packaging, Palletizing
2.5D: Stacked Objects, Discrete Bag Handling, Geometry for depth perception
3D: Auto racking, Bin Picking
Building Blocks in Service Robots

3D Vision

3D Vision with Real Time Motion

3D Vision with GPS navigation

3D Vision with SLAM

Simultaneous Localization and Mapping

Sensing the Environment

Modeling the Environment

3D Vision with Sonar Navigation

3D Visualization with Haptic Controls


Advantages of Vision for Industrial Robots

Labor Savings: often alone justifies the investment

Throughput Gains in Production

Quality Improvements

Safety and Medical Cost Savings

Flexible Change to Multiple Products

Floor Footprint Reduction

Reutilize Conveyors, Racks, Bins


Classes of Service Robots

Aerospace

Spacecraft, Satellites, Aircraft

Land

Defense and security, Farming, Wildlife, Food, Transportation, Outdoor Logistics, Office and Warehouse, Health (Care, Rehabilitation, Surgical), Entertainment

Water

Defense and security, Research and Exploration, Preventive Maintenance, Rescue and Recovery.


Software

2D Object Recognition

Edge Detection

Boundary analysis

Geometric Pattern Matching

3D Object Recognition

Mono camera if geometry is consistent

Stereo matching: Redundant reliability

Laser, Range or time of flight methods

Additional Techniques

Projected points/Lines of Light

3D volume Scans

Scene-specific Heuristics

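A minimal sketch of the 2D techniques above, in Python with OpenCV (the library choice, threshold values, and file names are assumptions for illustration, not part of the original slides; intensity-based template matching stands in here for the geometric pattern matching named on the slide):

    import cv2

    # Load the camera image and a template of the part (hypothetical file names).
    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    tmpl = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

    # Edge detection: extract object boundaries for boundary analysis.
    edges = cv2.Canny(img, 50, 150)

    # Template matching: slide the template over the image and score similarity.
    scores = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    if best > 0.8:  # empirical acceptance threshold
        print("part found at", loc, "with score", best)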

IMAGING FUNDAMENTALS

Mobile Robots for Defence
Rehabilitation
Surgical
Innovation
Human Like
Swimming
Flying

Any Digital Image, irrespective of its type, is a 2D array of numbers.
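As a minimal illustration in Python with NumPy (the language and library are assumptions of this note, not named in the slides):

    import numpy as np

    # An 8-bit grayscale "image" is literally a 2D array of numbers.
    img = np.array([[  0,  64, 128],
                    [ 64, 128, 192],
                    [128, 192, 255]], dtype=np.uint8)

    print(img.shape)   # (3, 3): rows x columns
    print(img[0, 2])   # 128: brightness of the pixel at row 0, column 2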


Types

Intensity images

Range images


Basic Optics

The Thin Lens model. Fundamental Equation:

    1/f = 1/z + 1/z'

where f is the focal length, z is the distance from the object to the lens, and z' is the distance from the lens to the image plane.
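For example, a 50 mm lens focused on an object 200 mm away: 1/z' = 1/50 − 1/200 = 3/200, so a sharp image forms z' ≈ 66.7 mm behind the lens.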

Intensity Images

Optical parameters

Lens type

Focal length

FOV

Photometric parameters

Intensity

Direction of illumination

Reflectance properties

Sensor’s structure

Geometric parameters

Types of Projections

Pose of the camera


Perspective Camera Model


Pin Hole Model

Mapping Point to Camera

Camera Parameters and Calibration

Extrinsic parameters
Intrinsic parameters

World Coordinate

Perspective Projection

A point p = [x, y] in the image plane is given by

    x = f (X / Z)
    y = f (Y / Z)

where p is the image of the point P = [X, Y, Z] in world space.
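A short numerical sketch of this projection in Python (the values are chosen only for illustration):

    # Perspective projection of a world point P = [X, Y, Z] (camera coordinates)
    # onto the image plane of a camera with focal length f.
    f = 0.05                 # 50 mm focal length, in metres
    X, Y, Z = 0.2, 0.1, 2.0  # a point 2 m in front of the camera

    x = f * X / Z            # x = f(X/Z) = 0.005 m  -> 5 mm right of centre
    y = f * Y / Z            # y = f(Y/Z) = 0.0025 m -> 2.5 mm above centre
    print(x, y)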

Range Images – Display types


Range Images

Reconstructing a 3D shape from a single intensity image is DIFFICULT.

Range images are also called depth images, depth maps, xyz maps, surface profiles, and 2.5D images.

Each pixel of a range image expresses the distance between a known reference frame and a visible point in the scene.

Forms of Range Images

Cloud of points (xyz form)

Spatial form


Agenda

Physics of Light

Optics

Camera Sensors

Camera Interface

Camera Calibration

Software

Applications and Case Study


PHYSICS OF LIGHT


Vision Starts with Light

Light has a dual nature, obeying the laws of physics both as a transverse wave (electromagnetic radiation) and as a particle of energy (photon).

Why Physics of Light ?

The laws of physics govern the properties of vision systems.

Understanding the physics will allow you to predict the behavior

You will understand the limitations of the performance

Properties of Light

Electromagnetic Radiation
  Used to explain the propagation of light through various substances.

Particle
  Used to explain the interaction of light and matter that results in a change in energy, such as in a video sensor.

Light as an Electromagnetic Radiation

Light is a transverse wave: points oscillate in the same plane, on an axis perpendicular to the direction of motion.

The electrical wave oscillates perpendicular to the magnetic wave.

Energy vs Intensity

Energy is determined by the frequency of oscillation

Higher frequency

Shorter wavelength

Higher energy

Intensity is determined by the amount of radiation.


Electromagnetic Wave Characteristics

Frequency (f) is the no. of oscillations per second.

Wavelength (λ) is the distance between two points in the same position on the wave (nm)

f = c / λ where c is the speed of light

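For example, green light at λ = 550 nm has f = c / λ = (3 × 10^8 m/s) / (550 × 10^−9 m) ≈ 5.5 × 10^14 Hz.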

Electromagnetic Spectrum


Relation between Color of Light and its Wavelength

Visible light contains a continuum of frequencies

We perceive color as a result of predominance of certain wavelengths of light

The eye responds to visible light with varying efficiencies across the visible spectrum

Cameras have a very different response

We are concerned with a narrow region of the spectrum

Ultraviolet – Visible – Infrared

While the eye can see only in the visible spectrum, the energy above and below visible light is also important to machine vision.


In Vision We are Concerned with Reflected Light

Reflected Light is controlled by engineering the lighting.

The reflected light (and therefore the digital image) is impacted by:

Geometry of everything

Color of the light

Color of the part


What happens to Light when it hits an Object ?


How do Objects of Different Color Respond to Light ?


How and Why Objects Have Color?

Red light gets reflected from red objects.

Your eyes see the reflected light. The camera also sees the reflected light.

All other colors get absorbed by the material. This radiation gets turned into heat.


Subtractive Color

Used to describe why objects appear the color they do.

Pigments added to paint will absorb all colors of that wavelength.

CMYK used for printing ink (K is for carbon black, a less expensive pigment than the other colors).

Additive Color

Demonstrates what happens when colored lights are mixed together

Additive primaries are red, green and blue, which together make white

RGB used for color TV and Cameras


Maxwell’s Triangle – A demonstration of Additive Color Mixing

CIE Chromaticity Diagram

White Light is Actually Very Colorful

The rainbow exiting a prism, or seen in the sky, is the inverse of the additive color wheel.

Both demonstrate that "white light" is actually a very complex function which needs precise definition.

Optical Filter

An optical device which selectively transmits light of certain wavelengths and absorbs or reflects all other wavelengths.


OPTICS

Using Filters to Highlight Objects of Different Colors

Red light reflects off the red background but is absorbed by the blue circle.

Blue light reflects off the blue circle but is absorbed by the red background.

Placement of Filters: Incident or Reflected Light


Resulting Images would be the same:

No available red light to be reflected, so red appears dark.

Light is reflected from blue, appears light.


Spectral Response

How efficiently light is emitted or received as the wavelength (or color) of the light changes.

Filters can be described by a spectral response plot.


An example


Spectral Reflectivity for Al and Ag


Spectral Response of Filters

Ideal Filter

Real Filter

Interaction of Light with Surfaces

Reflection

Refraction

Spectral Response of CCD


Why Reflection is Important ?

Majority of the vision systems record the reflected light.

A well designed lighting system provides high contrast between the features of interest and background (noise).

Contrast comes from regions of high reflectivity set against regions of minimal reflected light.

Spectral properties of light sources, combined with spectral properties of surface can be used to provide high contrast.

Geometrical considerations are important for understanding reflected light.

Interaction of Light with transparent surfaces


Surface Finish


Why Refraction is Important ?

Refraction is the basic principle behind many optical elements

Lenses

Filters

Mirrors and Prisms

Optical elements are not perfect

They do not transmit 100% of the light.

Chromatic aberrations.


Complex Geometries


How is Light measured ?

Lumen and lux are photometric units for the amount of light: the lumen measures luminous flux, and lux (lumens per square metre) measures the light that falls upon a surface.

Bright sunlight: 100,000 lux
Cloudy day: 10,000 lux
Full moon night: 0.05 lux
Overcast night: 0.00005 lux

The human eye is sensitive to this full range (10 orders of magnitude!), but cameras are only sensitive to about 3 orders of magnitude.


Lens and the Camera sensor


Lens

The Lens uses refraction to bend light as it passes through, generating an image at the other side.

Specifications used to select Lens for a Machine Vision Application

Focal Length

Angular field of view or magnification

Working Distance or Field of view minimum at focus

Depth of Focus

Aperture

Resolution

Camera Sensor Size

Camera mounting configuration


Focal Length

The focal length is the distance between the optical centre and the image plane when the lens is focused at infinity.

Field Of View


The area imaged or the FOV is determined by the intersection of the stand off distance and the angle of viewing.


Field Of View


Moving closer


What to do if you need to change the image size?

To increase magnification (smaller FOV)

Use a lens with longer focal length

Move the camera closer to the part

(Be cautious about distortion and the ability to focus)

To decrease magnification (larger FOV)

Use a shorter focal length lens

Move the camera further from the part


Focal Length and Stand off Distance

A shorter focal length lens can image the same field of view as a longer focal length lens by decreasing the stand off distance.

Shorter focal length lens will have more parallax distortion (fish eye effect).

Stand off distance has a larger effect on magnification for short focal length (wide angle) lenses.


Focus

Lenses are designed for specific imaging characteristics

Using the lens outside of the design region impacts image quality.

For example, stand off distance for focusing can be reduced using spacer rings.

Depth of focus is also dependent upon aperture or f/stop setting

Wide open aperture: small depth of focus

Small aperture: large depth of focus


Extension Rings

Extension Rings are used to alter the focusing distance of a lens

The rings increase the image distance, and allow the lens to focus at shorter distances

Lens Adaptor


An Example

Captured with a 100-mm lens at F/4 | Captured with a 28-mm lens at F/4
Captured with a 100-mm lens at F/22 | Captured with a 28-mm lens at F/22

Aperture and F/stop (F#)

F/stop = Focal length / Aperture Diameter

The aperture is the clear opening of the lens.
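For example, a 50 mm lens with a 12.5 mm clear aperture is operating at F/4 (50 / 12.5 = 4). Stepping from F/4 to F/5.6 halves the light reaching the sensor, since the aperture area halves.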

LIGHTING

Lighting Concerns

Stability of the light source

Flicker rate

Change in spectral properties

Need to control diffusion of light (bright spots are bad)

Ambient lighting needs to be blocked off

Ambient temperature has very large effect on lighting

Depends on lighting and camera

Relations are non-linear


Illumination affects the color of the material

Sources have different spectral properties which cause objects to look differently under different sources.


One More Example

Characteristics of Light Sources

Thermal (Incandescent)
  1000 lumens for 75 W
  5% efficiency (12 – 15 lumens/Watt)
  1000 hours
  Rs 50/klumen

Gas Discharge (Fluorescent)
  10,000 lumens
  25% efficiency (50 lumens/Watt)
  10,000 hours (output degrades, then fails)
  Rs 25/klumen

LEDs (Solid state lighting)
  30-35 lumens/Watt
  100,000 hours (output degrades over time, not a hard failure)
  Rs 2500/klumen

50 Hz Noise Variation in Light Output

Use a high-frequency ballast for fluorescent lights (10 kHz)

Use DC sources for LEDs

Shroud your cell from ambient light if it is bright.

The ambient light source is most likely AC


Use Geometry to Meet the Objectives for Lighting Design

Optical Filtering

Highlight Features of Interest

Reduce Extraneous information

Use natural features of the part for contrast

Shadows

Specular Reflections

Design a system that is compatible with process constraints


Effect of Operating Temperature


Angle of Lighting depends on the part features


Lighting Techniques

Back Light
  Diffuse
  Collimated
Front Light
  Diffused
  Directed
  Structured

Back Lighting Provides the Highest Contrast. But….

It's not always practical to implement: parts on a conveyor or in a fixture are often difficult to back-light.

It provides information about the part's silhouette only; sometimes surface features are the ones we're interested in.

Back Light


Placing a light behind the part such that the part is between the light and the camera, providing a silhouette of the part


Types


Front Light

Placing the light in front of the part, on the same side as the camera. Provides an image with surface features and shading.

Bright Field Illumination

Dark Field Illumination

Dark Field Illumination

Dark field illumination is used to subdue the background and highlight pin-stamped characters.

Light positioned at an oblique angle to the part.

Angle of incidence set up such that angle of reflection is away from the camera lens.

Any perturbations can reflect light into the camera lens.


Lighting Component



CAMERA SENSORS

Digital Image

A digital image is a numerical representation of a real physical object. The objective is to obtain an accurate spatial (geometric) and spectral (light) representation with sufficient detail (resolution).

Image sensors generate images by measuring and recording the light that strikes the sensor surface.

Any Digital Image, irrespective of its type, is a 2D array of numbers.

Recording the Field of View

Area Scan

Line Scan


Types

Intensity images

Range images


Intensity Images – CV Terminologies

Optical parameters
  Lens type
  Focal length
  FOV
Photometric parameters
  Intensity
  Direction of illumination
  Reflectance properties
  Sensor's structure
Geometric parameters
  Types of Projections
  Pose of the camera

Getting a good image

What is a good image?

Features of interest are well defined

High contrast with enough detail

Images are repeatable

Features in the image exist in the physical world

No noise or artifacts

Changes in the environment should have minimal impact on the image.

How to achieve this?

Good lighting and optics

Understanding the requirements

Choosing the right camera for the application.


Sensor Types based on the Sensing Element Used

Vacuum Tube

Vidicon

Plumbicon

Photo Multiplier Tube (PMT)

Solid State (silicon)

Silicon Photo Diode

Position Sensitive Detector (PSD)

Solid State Camera Sensors

CCD

CMOS


Properties of Sensors

Some materials will generate electrical charge proportional to the number of photons striking them. These materials are used for image sensors.

Sensor analogy to film

In some ways, a sensor in a video camera is lot like photographic film.

An image is focused on the sensor for a preset exposure time.

The light pattern is captured and transformed into a new medium.

There is an integral relationship between the amount of light measured and exposure time.


Comparison of a sensor to film

Film has a continuous surface, down to the grain of the film. Video sensors have a discrete imaging surface.

Sizes of solid state sensors:
  2/3 inch: 6.6 x 8.8 mm
  1/2 inch: 6.4 x 4.8 mm
  1/3 inch: 3.6 x 4.8 mm
  1/4 inch: 2.4 x 3.2 mm

How Sensors Measure Light?

Each photo site can be modeled by a bucket to collect charges generated by photons

As photons strike the sensor, charge is developed and the bucket begins to fill

How full the bucket gets is determined by

How much light (intensity)

How long you collect charge (exposure time or shutter speed)

How efficiently the photons gets converted to charge (spectral response)

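A toy Python model of this bucket analogy (all numbers are hypothetical, chosen only to make the saturation behavior visible):

    # A photo site as a "bucket": charge = intensity x time x efficiency,
    # clipped at the full-well capacity (saturation).
    FULL_WELL = 20000  # electrons the site can hold before it saturates

    def collected_charge(photon_flux, exposure_s, quantum_efficiency=0.5):
        electrons = photon_flux * exposure_s * quantum_efficiency
        return min(electrons, FULL_WELL)

    print(collected_charge(1e6, 0.01))  # 5000.0 e-: linear range
    print(collected_charge(1e6, 0.10))  # 20000  e-: bucket full, saturated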

How sensors work ?

We can think of camera as imposing a grid over the object being imaged, and sampling the light

The individual square is called a photo site, and is similar to a light meter.

The camera sensor is made up of an array of these photo sites.

The individual photo sites in a video sensor are called picture elements – PIXELs.


The amount of light in each photo site is sampled and converted into a number. This number, or gray scale value, is an indicator of brightness.

Exposure Time Analogy

How full the bucket gets is dependent upon:
  How fast the faucet is running – light intensity
  How long you keep the bucket under the running water – exposure time

ANALOGY: Photons → Electrons

How Many Pixels Are Required To Find An Object ?

2 x 2 grid: 4 photo sites (4 pixels)

When the blue object fills more than 50% of the photo site, it will be turned black, otherwise the site is considered to be white.


Images Taken With Different Exposure Time

INCREASING EXPOSURE TIME →

As you increase the exposure time, you allow more time for photons to be converted into electrons in the sensor; more charge accumulates, giving a brighter image.

Double the Resolution…

4 x 4 grid: 16 photo sites (16 pixels)

When the blue object fills more than 50% of the photo site, it will be turned black, otherwise the site is considered to be white.


Double it again

8 x 8 grid: 64 photo sites (64 pixels)

When the blue object fills more than 50% of the photo site, it will be turned black, otherwise the site is considered to be white.


Other Attributes…

The digitized image contains much less information:

A three dimensional scene is reduced to a 2D representation.

No color information.

Size and location are now estimates whose precision and accuracy depend on the sampling resolution.

Attributes of Sampling

You might not even detect the object if the sampling resolution is too low.

If you sample at two times the resolution, the total number of sample sites is increased by a factor of 4.

A close up look at pixels !

INCREASING ZOOM LEVEL →

Sensor Array Configuration

The sensor consists of an array of individual photo cells.

Typical array size in pixels is

640 x 480 or 768 x 480,

1280 x 760

1600 x 1200, and larger

For reference human vision is >100 million pixels

The array size is called PIXEL RESOLUTION.

Spatial Resolution

Spatial Resolution = FOV / No. of Pixels
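For example, a 64 mm field of view imaged across 640 pixels gives a spatial resolution of 64 / 640 = 0.1 mm per pixel.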

How big is a pixel ? - Resolution

When it comes to resolution, the following distinction is necessary:

Number of pixels in the image – Camera Sensor Resolution
  Individual pixels are usually between 5 and 10 microns across
  Impacts sensor noise and dynamic range

Number of pixels covering a feature – Spatial Resolution
  Impacts robustness of the vision algorithm

Smallest detail captured in the image – Measurement Accuracy

How many pixels should cover the Features of Interest ?

Depends on the application, but in general, more is better

Trade off is that you image less of the scene.

Field of view should be large enough to accommodate variations in position.

Might require more than one camera


What if your camera does not have enough pixels ?

Use sub-pixels

Interpolating between pixel boundaries for sizing or identifying location

Sub pixels are only applicable to measurement, not detection

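A hedged sketch of one common sub-pixel technique, parabolic interpolation around a peak in a 1D intensity profile (the function name and sample data are illustrative assumptions, not from the slides):

    import numpy as np

    def subpixel_peak(profile):
        """Locate a peak to sub-pixel accuracy by fitting a parabola
        through the maximum sample and its two neighbours."""
        i = int(np.argmax(profile))
        if i == 0 or i == len(profile) - 1:
            return float(i)  # no neighbours to interpolate with
        a, b, c = profile[i - 1], profile[i], profile[i + 1]
        # Vertex of the parabola through (-1, a), (0, b), (1, c):
        return i + 0.5 * (a - c) / (a - 2 * b + c)

    print(subpixel_peak(np.array([1.0, 4.0, 9.0, 7.0, 2.0])))  # ~2.21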

Line Scan Image Example


Field of View is 30” x 200”

100 dpi

3,000 x 20,000 pixel image

60 Mbyte image data


What If Pixel Arrays Are Not Big Enough And Sub-pixels Won’t Work?

Use Line Scan cameras: a digital camera with pixels arranged in a single line.

Can generate extremely large contiguous images not possible with area scan cameras

1K, 2K, 4K, 8K, 10K are some available sizes

Cost of the line scan sensor is low relative to large format array cameras (2000 x 2000)

Motion of the camera or part is required for the 2nd axis

Similar to scanners, copiers and fax machines

Can obtain images of continuously moving line (web inspection)


Some More Camera Sensor Parameters

Saturation

Blooming

Dynamic Range

Grayscale Resolution

Dark Current Noise

Fill Factor


Saturation

At certain light levels and exposure times, the bucket (photo site) gets filled with charge and can hold no more.

The photo cell is now saturated

Any additional charge generated by the sensor has to go somewhere

Where does it go?


Dynamic Range

Ratio of the amount of light it takes to saturate the sensor to the least amount of light detectable above the background noise.

A good dynamic range allows very bright and very dim areas to be viewed simultaneously.
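One common way to quantify this (an addition, not from the slides): dynamic range in decibels is DR = 20·log10(saturation signal / noise floor). A sensor with a 20,000 e− full well and a 20 e− noise floor has DR = 20·log10(1000) = 60 dB.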

Blooming

When light saturates in a pixel area it spills over into adjacent pixels.

Spill over occurs

Into adjacent pixels

In CCD spillover also occurs in the pixel columns

Prevent blooming by

Avoiding saturation

Cameras with anti blooming circuitry

Blooming causes loss of image data


Examples

Low Dynamic Range


High Dynamic Range


Grayscale Resolution

The number of bits used to represent the amount of light in the pixel

Digitizing to 8 bits gives 2^8 = 256 gray shades.
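Likewise, 10 bits gives 2^10 = 1024 gray shades and 12 bits gives 2^12 = 4096.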

Photosensitive Area of the Sensor

Photons which fall out of the photo sensitive area do not get converted into electric charge and are not detected by the sensor. This will impact sensitivity to light and the ability to accurately measure between pixels for sub pixel tolerance.


Dark Current Noise

If no photons strike the sensor during the exposure time

No charge is created

The bucket should remain empty

However stray charge gets generated in the silicon from the thermal energy causing low level noise

This charge is called dark current

Result is that black is not 0.0 volts.

Dark current noise increases with temperature, doubling with every 6 degree rise above room temperature.
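As a rule-of-thumb formula for the statement above: I_dark(T) ≈ I_dark(T_room) · 2^((T − T_room)/6), so a sensor running 12 °C above room temperature has roughly 4× the dark current noise.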

Fill Factor

Percentage of the pixel area sensitive to light

Circuitry required to read out voltage obscures silicon beneath traces

Coverage can be as low as 30%

Fill factor is shown in some camera specifications

Impacts quality of image

Sensitivity to Light


Fill Factor Considerations with CCD vs. CMOS

CCD Sensor: reads out a single row of pixels at a time, after the charge is moved down the sensor "lock step" by rows.

CMOS Active Pixel Sensor: has amplifier circuitry on each and every pixel in the array. Pixel values may be read out somewhat randomly.

Color vs. Monochrome

You would have 3x the amount of data to process, or 1/3 the spatial resolution with color imaging

Need to evaluate the benefit of color information relative to increased complexity and reduced resolution

Most machine vision applications use monochrome cameras

Machine vision systems implemented with color cameras are suitable for sorting, not colorimetry

For robustness, colors being differentiated need to be widely spaced

Watch for uniform spectral output of your light source for color applications (remember that the camera measures the reflected light)


Advantages and Disadvantages of each Technology

CCD
  High quality, low noise images
  Good pixel-to-pixel uniformity
  Electronic shutter without artifacts
  100% fill factor
  Highest sensitivity
  High power consumption
  Multiple voltages required
  Increased system integration complexity and cost

CMOS
  Low power consumption
  Camera functions and additional control circuitry can be implemented in the CMOS sensor chip itself
  Random pixel read out capability (windowing)
  Fixed pattern noise
  Higher dark current noise
  Lower light sensitivity

CAMERA INTERFACE

How Vision Works

Take a picture

Process the image data

Make a decision or measurement

Do something useful with the results


Hardware Common to Most Vision Systems

Camera

Sensor, format, interfaces

Processor

Frame Grabber, I/O, interfaces, packaging

Optics

Lenses and Accessories

Lighting

Source and Technique

Other Accessories

Enclosures, cables, power supplies.


“Standard ” Vision Components

Not everything enclosed in the box is required:
  Computer
  Frame Grabber
  Custom Image Processor

Camera Types based on the Hardware Architecture

PC Based
Smart Camera
Embedded Vision Camera

In detail

PC-based vision is usually more effective for larger systems

Additional cameras can be very low incremental costs

PC is available for complex image processing or post processing tasks

PC can be used for storing images, collecting process data, programming system updates

Smart Cameras can be cost effective where

Small number of cameras are required

Operation of each smart cameras are independent of others in the cell

Minimal post-processing of data is required

No logic between cameras

Lower end vision algorithms sufficient

Embedded Vision Systems provide a complete hardware packaging and software integration solution.

Signal Flow of Image from Camera to Computer (Digital)

Vision Camera (Digital Image Sensor → Digital Serial or Parallel Interface) → Digital Signal → Frame Grabber (MAY BE, or becomes, a part of the camera) → Image Buffer

Signal Flow of Image from Camera to Computer (Analog)

Vision Camera (Digital Image Sensor → DAC) → Analog Signal (RS 170 / CCIR) → Frame Grabber (ADC → Image Buffer)

Bandwidth, Resolution and Frame Rate

The bandwidth of an interface protocol is shared by the resolution of the image and the frame rate.

The frame rate of a camera depends upon the camera interface and also the camera electronics. Frame rates can go up to a million frames per second.
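As a worked example of the bandwidth trade-off: a 640 × 480 camera delivering 8 bits per pixel at 60 frames/s needs 640 × 480 × 1 byte × 60 ≈ 18.4 MB/s, which must fit within the interface's bandwidth.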

Standards for Digital Interface

INTERFACE STANDARD | BANDWIDTH                  | COST EFFECTIVE | CABLE LENGTH | POWER OVER CABLE | CAMERA AVAILABILITY | CPU USAGE | INTERFACE
CAMERA LINK        | 250 MBps                   | 1              | 3            | No               | 4                   | Low       | 3
IEEE 1394          | 400 Mbps (a), 800 Mbps (b) | 4              | 1            | Yes              | 3                   | Moderate  | 5
USB                | 500 Mbps                   | 5              | 1            | Yes              | 1                   | Extensive | 2
GigE               | 1 Gbps                     | 4              | 5            | No               | 2                   | Moderate  | 5

(Ratings: 5 – Excellent; 1 – Poor)

CAMERA CALIBRATION

Camera Parameters and Calibration

Extrinsic parameters
Intrinsic parameters

The process of finding the intrinsic and the extrinsic parameters of a camera is called camera calibration, and it depends on the model chosen for the camera.
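A minimal calibration sketch in Python with OpenCV (an assumed library, using the common chessboard-target workflow; pattern size, square size, and file names are hypothetical):

    import cv2
    import numpy as np

    pattern = (9, 6)   # inner corners of the chessboard target
    square = 25.0      # square size in mm

    # 3D coordinates of the corners in the target's own frame (Z = 0 plane).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for name in ["calib_01.png", "calib_02.png"]:  # hypothetical images
        gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K holds the intrinsics; rvecs/tvecs hold the per-view extrinsics.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("reprojection error:", rms)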

World Coordinate

Ideal Model of a Camera – The Perspective Projection

A point p = [x, y] in the image plane is given by

    x = f (X / Z)
    y = f (Y / Z)

where p is the image of the point P = [X, Y, Z] in world space.

Camera Coordinates


An Approximate Model – Scaled Orthographic Model

An approximate linear model:

    x = s X
    y = s Y

where s = f / Z0 is a constant scale factor (Z0 an average scene depth).

Validity depends on the working distance and the relative depths of objects in the scene.

Failure of Orthographic Projection – An Example


Software is (just) a TOOL

Remember: if you think that the only tool you have in your hand is a hammer, "everything around you tends to look like a nail."

MACHINE VISION SOFTWARE

Let's First Look At How Humans Process Image Data

Shape

Color

Spatial Relationship

Context


Human Recognition By Shape and Color


Spatial Relationship & Context: Can You Read This?

Cna yuo raed this? It dseno't mtaetr in wtah oerdr the ltteres in the wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae.

Color aids in Recognition, But is not Necessary


Limitations of Vision Computers

Just as the camera is no match for Human Vision, we’ll see that the computer cannot even begin to duplicate how the human brain processes the image data.

A small subset of processing algorithms is generally used for industrial vision.

Almost all are based on “a priori information”

Vision not up to the “anything, anywhere” problem


Scene Constraints in Vision

Parts

Discrete parts or endless material (e.g., paper)

Minimum and maximum dimensions

Changes in shape

Description of the features that have to be extracted

Changes of these features concerning error parts and common product variation

Surface finish

Color

Corrosion, oil films, or adhesives

Changes due to part handling


What Vision Computers Do with Images?

Image Processing (Image Enhancement)
  Perform mathematical or logical calculations on an image and convert the image into another image where the pixels have different values.

Image Analysis
  Perform mathematical or logical calculations on an image to extract features which describe the image content in numerical terms.
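A small Python/OpenCV sketch of the distinction (the file name and threshold are illustrative assumptions): image processing returns another image, image analysis returns numbers:

    import cv2
    import numpy as np

    img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

    # Image processing: image in, image out (noise reduction by Gaussian blur).
    smoothed = cv2.GaussianBlur(img, (5, 5), 0)

    # Image analysis: image in, numbers out (features describing the content).
    mean_brightness = float(np.mean(smoothed))
    _, binary = cv2.threshold(smoothed, 128, 255, cv2.THRESH_BINARY)
    bright_area_px = int(cv2.countNonZero(binary))
    print(mean_brightness, bright_area_px)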

Contd.

Part Presentation (in conveyor)

indexed positioning

continuous movement

If there is more than one part in view, the following topics are important:

number of parts in view

overlapping parts

touching parts


When and Why We Process and Analyze Images

IMAGE ENHANCEMENT
  Reduce or eliminate noise
  Enhance information
  Subdue unnecessary or confusing background information
  Make decision analysis easier

IMAGE ANALYSIS