

CHAPTER I

INTRODUCTION

The aim of this project is to monitor and control liquid level, displacement and vibration of the control valve, using digital image processing (DIP) techniques implemented on an FPGA. A sample image is compared with a reference image; when a change occurs, a control signal is generated and the parameters are monitored.

This method replaces analog sensors such as those used for level measurement (floating gate, touch plate, ultrasonic wave), speed measurement (tachometer, laser-beam-based counter) and moisture measurement (cylindrical capacitance type).

Conventional instrumentation has several drawbacks. Transmission errors occur over long distances, since the signals from the transducers are in analog form. The transducers themselves often give erroneous signals, yet decisions are based on those signals. Different parameters require different types of sensors (e.g. a strain gauge for pressure, an RTD for temperature, an LVDT for displacement). Human fatigue and sleepiness can also lead to errors.

CHAPTER II

OVERVIEW OF VLSI, DIGITAL IMAGE PROCESSING AND MACHINE VISION

2.1 SURVEYS ON VLSI

2.1.1 VLSI TECHNIQUES

VLSI (Very Large Scale Integration) is the science of integrating millions of transistors on a silicon chip; the term describes semiconductor integrated circuits composed of hundreds of thousands of logic elements or memory cells. The integration of transistor-based circuits onto a single chip at this scale first occurred in the 1980s as part of the semiconductor and communication technologies that were being developed. VLSI is one of several IC integration techniques. The following list shows the integration techniques that led the path to VLSI.

SSI (Small Scale Integration), the first integration technique, put tens of transistors on a chip (early 1960s). MSI (Medium Scale Integration) meant a microchip containing hundreds of transistors (late 1960s). LSI (Large Scale Integration) meant microchips containing thousands of transistors (1970s). VLSI (Very Large Scale Integration) meant microchips containing millions of transistors (1980s). The two main drivers of this technology are Moore's Law and the design process. For the first time it became possible to fabricate a CPU, or even an entire microprocessor, on a single integrated circuit. In 1986 the first one-megabit RAM chips were introduced, containing more than one million transistors. Microprocessor chips produced in 1994 contained more than three million transistors. This step was largely made possible by the codification of "design rules" for the CMOS technology used in VLSI chips, which made production of working devices much more of a systematic endeavor. Two further techniques with more gates than VLSI are ULSI (Ultra Large Scale Integration), meaning microchips containing 1 million to 10 million transistors ("ULSI" is reserved for cases when it is necessary to emphasize the chip complexity, e.g. in marketing), and GSI (Giant Scale Integration), meaning microchips containing more than 1 billion transistors.
Advantages of VLSI:
1. High operating speed
2. Low cost
3. Low power consumption
4. Design flexibility
5. High productivity
6. More design security

VLSI Applications:

VLSI finds applications in all aspects of life: consumer electronics, defense, computers, communication, space, networking, etc. Some of the applications are wireless LAN, reconfigurable computing, wearable computers, home networking, Bluetooth, and system-bus interfaces such as PCI, FireWire and USB.
• VLSI is an implementation technology for electronic circuitry - analog or digital.
• It is concerned with forming a pattern of interconnected switches and gates on the surface of a crystal of semiconductor.
• Microprocessors
• Personal computers
• Microcontrollers
• Memory - DRAM / SRAM

2.1.2 Classification of VLSI Techniques:



We can classify the VLSI techniques based upon the customization of the IC. Two types of customization are available:
1. Full custom
2. Semi custom
Example: full custom – ASIC; semi custom – PLD, CPLD, FPGA.
1) ASIC (Application Specific Integrated Circuit)
2) CPLD (Complex Programmable Logic Device)
3) FPGA (Field Programmable Gate Array)

1 ASIC (Application Specific Integrated Circuit):

ASICs are silicon chips that have been designed for a specific application; they cannot be used in general-purpose systems. Such an integrated circuit is designed for a specific task. Sound cards, modems, and video cards often contain ASICs.
Examples: baseband processors in mobile phones, chipsets in PCs, and MPEG encoders/decoders.

2 PLD (Programmable Logic Device):

A Programmable Logic Device is a device whose logic characteristics can be changed and manipulated or stored through programming. The group of devices known as PLDs includes PROMs, Programmable Logic Arrays (PLA), Programmable Array Logic/Generic Array Logic (PAL/GAL), and CPLDs.

3 CPLD (Complex Programmable Logic Devices):

Complex programmable logic devices evolved from the Programmable Logic Device (PLD), an unconnected array of AND-OR gates placed in a single chip. The PLD contained an array of fuses that could be blown open or left closed to connect various inputs to each AND gate. You could program a PLD with a set of Boolean sum-of-products equations so it would perform the logic functions needed in your system. Since PLDs could be rewired internally, there was less need to change the printed circuit boards that held them. The CPLD is based on the PAL architecture, so it behaves as non-volatile memory; the FPGA is based on the PLA architecture.

2.2 SURVEYS ON DIGITAL IMAGE PROCESSING FIELD

We are in the midst of a visually enchanting world, which manifests itself with a variety of forms and shapes, colors and textures, motion and tranquility. Human perception has the capability to acquire, integrate, and interpret all this abundant visual information around us. It is challenging to impart such capabilities to a machine so that it can interpret the visual information embedded in still images, graphics, and video or moving images in our sensory world. It is thus important to understand the techniques of storage, processing, transmission, recognition, and finally interpretation of such visual scenes. In this chapter we attempt to provide glimpses of the diverse areas of visual information analysis techniques.

The first step towards designing an image analysis system is digital image acquisition using sensors in optical or thermal wavelengths. A two-dimensional image recorded by these sensors is the mapping of the three-dimensional visual world. The captured two-dimensional signals are sampled and quantized to yield digital images. Sometimes we receive noisy images that are degraded by some degrading mechanism. One common source of image degradation is the optical lens system in a digital camera that acquires the visual information. If the camera is not appropriately focused, then we get blurred images; here the blurring mechanism is the defocused camera. Very often one may come across images of outdoor scenes that were procured in a foggy environment; any outdoor scene captured on a foggy winter morning could invariably result in a blurred image. In this case the degradation is due to the fog and mist in the atmosphere, and this type of degradation is known as atmospheric degradation. In some other cases there may be a relative motion between the object and the camera.

Thus if the camera is given an impulsive displacement during the image capture interval while the object is static, the resulting image will invariably be blurred and noisy. In such cases we need appropriate techniques for refining the images so that the resultant images are of better visual quality, free from aberrations and noise. Image enhancement, filtering, and restoration have been some of the important applications of image processing since the early days of the field.

Segmentation is the process that subdivides an image into a number of uniformly homogeneous regions, each of which is a constituent part or object in the entire scene. In other words, segmentation of an image is defined by a set of regions that are connected and non-overlapping, so that each pixel in the image acquires a unique region label indicating the region it belongs to. Segmentation is one of the most important elements in automated image analysis, mainly because at this step the objects or other entities of interest are extracted from an image for subsequent processing, such as description and recognition. For example, in the case of an aerial image containing the ocean and land, the problem is to segment the image initially into two parts: the land segment and the water body or ocean segment. Thereafter the objects on the land part of the scene need to be appropriately segmented and subsequently classified.

After extracting each segment, the next task is to extract a set of meaningful features such as texture, color, and shape. These are important measurable entities which give measures of various properties of image segments. Some of the texture properties are coarseness, smoothness, regularity, etc., while the common shape descriptors are length, breadth, aspect ratio, area, location, perimeter, compactness, etc. Each segmented region in a scene may be characterized by such a set of features. Finally, based on the set of extracted features, each segmented object is classified into one of a set of meaningful classes. In a digital image of the ocean, these classes may be ships or small boats or even naval vessels, and a large class of water body. The problems of scene segmentation and object classification are two integrated areas of study in machine vision. Expert systems, semantic networks, and neural network-based systems have been found to perform such higher-level vision tasks quite efficiently.

Another aspect of image processing involves compression and coding of the visual information. With the growing demand of various imaging applications, the storage requirements of digital imagery are growing explosively. Compact representation of image data and their storage and transmission through communication bandwidth is a crucial and active area of development today. Interestingly enough, image data generally contain a significant amount of superfluous and redundant information in their canonical representation. An image compression technique helps to reduce the redundancies in raw image data in order to reduce the storage and communication bandwidth.

2.2.1 Image

• Picture, photograph
• Visual data
• Usually two or three dimensional

Digital image:

• An image which is “discretized”, i.e., defined on a discrete grid


• Two-dimensional collection of light values (or gray values)

2.2.2 Image types in MATLAB

Black and white images:

They are also called binary images, containing 1 for white and 0 for black.

Grey scale images:

They are also called Intensity Images, containing numbers in the range of 0 to 255 or 0
to 1.
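As a small illustration (not part of the original text, and shown here in Python rather than MATLAB), these two image types can be represented as arrays; NumPy is assumed and the pixel values below are invented:

```python
import numpy as np

# A small grey-scale (intensity) image: values in the range 0..255
grey = np.array([[ 12,  80, 200, 255],
                 [  0,  64, 128, 192]], dtype=np.uint8)

# The same intensity data scaled to the range 0..1
grey01 = grey.astype(np.float64) / 255.0

# A binary (black-and-white) image: 1 for white, 0 for black,
# obtained here by thresholding the intensity image at 128
binary = (grey >= 128).astype(np.uint8)
print(binary)   # [[0 0 1 1]
                #  [0 0 1 1]]
```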

2.3 SURVEYS ON MACHINE VISION FIELD

2.3.1 Machine Vision:

Machine Vision (MV) is the application of computer vision to industry and manufacturing. Whereas computer vision is the general discipline of making computers see (understand what is perceived visually), machine vision, being an engineering discipline, is interested in digital input/output devices and computer networks to control other manufacturing equipment such as robotic arms. Machine vision is a subfield of engineering that is related to computer science, optics, mechanical engineering, and industrial automation. One of the most common applications of machine vision is the inspection of manufactured goods such as semiconductor chips, automobiles, food and pharmaceuticals. Just as human inspectors working on assembly lines visually inspect parts to judge the quality of workmanship, so machine vision systems use digital cameras, smart cameras and image processing software to perform similar inspections. Machine vision systems are programmed to perform narrowly defined tasks such as counting objects on a conveyor, reading serial numbers, and searching for surface defects.

Manufacturers favour machine vision systems for visual inspections that require high speed, high magnification, 24-hour operation, and/or repeatability of measurements. Frequently these tasks extend roles traditionally occupied by human beings, whose degree of failure is classically high through distraction, illness and circumstance. However, humans may display finer perception over the short period and greater flexibility in classification and adaptation to new defects and quality assurance policies.

Computers do not 'see' in the same way that human beings are able to. Cameras are not equivalent to human optics, and while people can rely on inference systems and assumptions, computing devices must 'see' by examining individual pixels of images, processing them and attempting to develop conclusions with the assistance of knowledge bases and features such as pattern recognition engines. Although some machine vision algorithms have been developed to mimic human visual perception, a number of unique processing methods have been developed to process images and identify relevant image features in an effective and consistent manner. Machine vision and computer vision systems are capable of processing images consistently, but computer-based image processing systems are typically designed to perform single, repetitive tasks, and despite significant improvements in the field, no machine vision or computer vision system can yet match some capabilities of human vision in terms of image comprehension, tolerance to lighting variations and image degradation, parts' variability, etc.

2.3.2 Components of a Machine Vision System:

A typical machine vision system will consist of several of the following components:

1. One or more digital or analog cameras (black-and-white or color) with suitable optics for acquiring images
2. A camera interface for digitizing images (widely known as a "frame grabber")
3. A processor (often a PC or an embedded processor, such as a DSP)
4. (In some cases, all of the above are combined within a single device, called a smart camera.)
5. Input/output hardware (e.g. digital I/O) or communication links (e.g. a network connection or RS-232) to report results
6. Lenses to focus the desired field of view onto the image sensor
7. Suitable, often very specialized, light sources (LED illuminators, fluorescent or halogen lamps, etc.)
8. A program to process images and detect relevant features
9. A synchronizing sensor for part detection (often an optical or magnetic sensor) to trigger image acquisition and processing

The sync sensor determines when a part (often moving on a conveyor) is in position to be inspected. The sensor triggers the camera to take a picture of the part as it passes beneath the camera and often synchronizes a lighting pulse to freeze a sharp image. The lighting used to illuminate the part is designed to highlight features of interest and obscure or minimize the appearance of features that are not of interest (such as shadows or reflections). LED panels of suitable sizes and arrangement are often used for this purpose. The camera's image is captured by the frame grabber, or by computer memory in PC-based systems where no frame grabber is utilized.

A frame grabber is a digitizing device (within a smart camera or as a separate computer card) that converts the output of the camera to digital format (typically a two-dimensional array of numbers, each corresponding to the luminous intensity level of the corresponding point in the field of view, called a pixel) and places the image in computer memory so that it may be processed by the machine vision software. The software will typically take several steps to process an image. Often the image is first manipulated to reduce noise or to convert many shades of gray to a simple combination of black and white (binarization). Following the initial simplification, the software will count, measure, and/or identify objects, dimensions, defects or other features in the image. As a final step, the software passes or fails the part according to programmed criteria. If a part fails, the software may signal a mechanical device to reject the part; alternately, the system may stop the production line and warn a human worker to fix the problem that caused the failure.

Though most machine vision systems rely on black-and-white cameras, the use of colour cameras is becoming more common. It is also increasingly common for machine vision systems to include digital camera equipment for direct connection, rather than a camera and a separate frame grabber, thus reducing signal degradation. "Smart" cameras with built-in embedded processors are capturing an increasing share of the machine vision market. The use of an embedded (and often very optimized) processor eliminates the need for a frame grabber card and an external computer, thus reducing the cost and complexity of the system while providing dedicated processing power to each camera. Smart cameras are typically less expensive than systems comprising a camera and a board and/or external computer, while the increasing power of embedded processors and DSPs often provides comparable or higher performance and capabilities than conventional PC-based systems.
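As a sketch of the software steps just described (noise reduction, binarization, measurement, pass/fail), the following Python fragment models a toy inspection. The threshold, area limits and synthetic frame are assumptions made for this illustration, and SciPy is assumed for the smoothing filter:

```python
import numpy as np
from scipy import ndimage

def inspect_part(frame, threshold=128, area_limits=(900, 1100)):
    """Toy inspection: smooth, binarize, measure, then pass/fail."""
    # 1. Reduce noise with a small mean filter
    smoothed = ndimage.uniform_filter(frame.astype(np.float32), size=3)
    # 2. Binarization: many grey shades -> black and white
    binary = smoothed > threshold
    # 3. Measure: here simply the part's area in pixels
    area = int(binary.sum())
    # 4. Pass or fail the part against programmed criteria
    lo, hi = area_limits
    return (lo <= area <= hi), area

# A synthetic 100x100 frame with a bright square "part" on a dark background
frame = np.zeros((100, 100), dtype=np.uint8)
frame[30:62, 30:62] = 200                 # 32x32 part, about 1024 pixels
ok, area = inspect_part(frame)
print("PASS" if ok else "FAIL", area)
```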

2.3.3 Processing Methods:

Commercial and open source machine vision software packages typically include a number of
different image processing techniques such as the following:

• Pixel counting: counts the number of light or dark pixels


• Thresholding: converts an image with gray tones to simply black and white
• Segmentation: used to locate and/or count parts
• Blob discovery & manipulation: inspecting an image for discrete blobs of connected
pixels (e.g. a black hole in a grey object) as image landmarks. These blobs
frequently represent optical targets for machining, robotic capture, or manufacturing
failure.
• Recognition-by-components: extracting geons from visual input
• Robust pattern recognition: location of an object that may be rotated, partially hidden
by another object, or varying in size
• Barcode reading: decoding of 1D and 2D codes designed to be read or scanned by
machines
• Optical character recognition: automated reading of text such as serial numbers
• Gauging: measurement of object dimensions in inches or millimeters
• Edge detection: finding object edges
• Template matching: finding, matching, and/or counting specific patterns

In most cases, a machine vision system will use a sequential combination of these processing techniques to perform a complete inspection. For example, a system that reads a barcode may also check a surface for scratches or tampering and measure the length and width of a machined component.
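For instance, the blob-discovery step above can be sketched with connected-component labelling on a binary image; the tiny image below is invented for the example, and SciPy's ndimage.label is assumed:

```python
import numpy as np
from scipy import ndimage

# Binary image with three discrete blobs of connected pixels
img = np.array([[1, 1, 0, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 0, 1, 0],
                [0, 1, 0, 0, 0],
                [0, 1, 0, 0, 0]])

# Label 4-connected blobs; each blob gets a distinct integer label
labels, count = ndimage.label(img)
print(count)                       # 3 blobs found
# Blob sizes, usable e.g. to reject specks below a minimum area
sizes = ndimage.sum(img, labels, index=range(1, count + 1))
print(sizes)                       # [4. 2. 2.]
```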

2.3.4 Applications of machine vision

The applications of machine vision (MV) are diverse, covering areas of endeavour including,
but not limited to:

• Large-scale industrial manufacture


• Short-run unique object manufacture
• Safety systems in industrial environments
• Inspection of pre-manufactured objects (e.g. quality control, failure investigation)
• Visual stock control and management systems (counting, barcode reading, store
interfaces for digital systems)
• Control of automated guided vehicles (AGVs)
• Quality control and refinement of food products

CHAPTER III

LITERATURE SURVEY AND EXISTING SYSTEMS

FPGA Implementation For Humidity And Temperature Remote Sensing System [1]

This paper presents the hardware design and implementation of a real-time remote sensing system for humidity and temperature. The design is based on using an FPGA (Field Programmable Gate Array) for the hardware implementation of the controller circuit and GSM (Global System for Mobile communications) for remote monitoring. The controller circuit has been described using VHDL (VHSIC Hardware Description Language). The design has been simulated using ModelSim from Model Technology and implemented using Xilinx ISE 6.2i software tools. An FPGA Spartan-3E starter kit from Digilent has been used for the hardware implementation of the controller circuit. The system offers a low-cost and user-friendly way of 24-hour real-time remote monitoring of temperature and humidity using SMS (Short Message Service) messages.

FPGA Implementation of a High-Speed Stereo Matching Processor Based on Recursive Computation [2]

This paper presents a processor architecture for high-speed stereo matching based on SAD (Sum of Absolute Differences) computation. To reduce its computational complexity, a hardware-oriented algorithm exploiting common intermediate results between SADs is proposed. When designing the image processor, one of the most critical issues is to find a scheduling that reduces the amount of data transferred between the external image memory and the on-chip local memory modules while maintaining the degree of parallelism. For this purpose, the authors propose a scheduling that exploits disparity-level parallelism, whereas conventional scheduling approaches exploit window-level parallelism. The result of the FPGA implementation shows that a frame rate of 200 frames/sec is achieved for VGA images by using 64 PEs implemented on a single Altera Stratix EP1S40F1020C7 at 80 MHz.

A Convolver-Based Real-Time Stereo Machine (SAZAN) [3]

For 3-D reconstruction, polynocular stereo based on multiple-image fusion is a promising method. We developed a convolver-based nine-eye stereo machine called SAZAN. It performs real-time acquisition of a dense depth map at 20 MDPS (Million Depth-pixels Per Second). The reduction of matching ambiguities, which is the most crucial part of stereo matching, is effectively performed by the filtering operations of 2-D convolver LSIs. Several new ideas and capabilities, including a nonlinear data reduction of LoG outputs, an efficient geometric calibration and a subpixel disparity, are also implemented in hardware. Considering the hardware size and the various factors that influence the final processing quality, the computational performance is compared with existing stereo systems, including the CMU stereo machine.

VLSI Processor for Reliable Stereo Matching Based on Window-Parallel Logic-in-Memory Architecture [4]

This paper presents a VLSI processor for reliable stereo matching that establishes correspondence between images by selecting a desirable window size for the sum of absolute differences (SAD) computation. In SAD computation, the degree of parallelism between pixels in a window changes depending on the window size, while the degree of parallelism between windows is predetermined by the input-image size. Based on this consideration, a window-parallel and pixel-serial architecture is proposed to achieve 100% utilization of the processing elements. Not only the 100% utilization but also a simple interconnection network between memory modules and processing elements makes this VLSI processor much superior to pixel-parallel-architecture-based VLSI processors.

Improvements in Real-Time Correlation-Based Stereo Vision [5]

A stereovision system that is required to support high-level object-based tasks in a tele-operated environment is described. Stereovision is computationally expensive, due to having to find corresponding pixels. Correlation is a fast, standard way to solve the correspondence problem. This paper analyses the behavior of correlation-based stereo to find ways to improve its quality while maintaining its real-time suitability. Three methods are suggested: two of them aim to improve the disparity image, especially at depth discontinuities, while one targets the identification of possible errors in general. Results are given on real stereo images with ground truth. A comparison with five standard correlation methods shows that improvements to simple stereo correlation are possible in real time on current computer hardware.

Cost/Performance Tradeoff of n-Select Square Root Implementations [6]

Hardware square-root units require large numbers of gates even for iterative implementations. In this paper, we present four low-cost, high-performance, fully pipelined n-select implementations (nS-Root) based on a non-restoring-remainder square root algorithm. The nS-Root uses a parallel array of carry-save adders (CSAs). For each square root bit calculation, a CSA is used once, which means the calculations can be fully pipelined. It also uses the n-way root-select technique to speed up the square root calculation. The cost/performance evaluation shows that n = 2 or n = 2.5 is a suitable solution for designing a high-speed, fully pipelined square root unit while keeping the cost low.

3.1 EXISTING SYSTEMS

Acquisition of reliable three-dimensional (3-D) images of a real scene plays an essential role in real-world intelligent systems such as intelligent robots and highly safe vehicles. Stereovision is a well-known method to acquire three-dimensional information. An important problem in stereovision is to establish reliable correspondence between images. Another problem is that the correspondence search is time-consuming, even if state-of-the-art general-purpose processors are used to accelerate it. One commonly used method to establish correspondence between images is the SAD (Sum of Absolute Differences) method. Given a pixel L in one image (the reference image), an SAD is computed between a rectangular window centered at L and a candidate window at each possible location in another image (the candidate image). In usual cases, the window sizes are empirically predetermined, and the candidate pixel with the smallest SAD is determined to be the corresponding pixel of L. The major problem of SAD-based matching is that the window size for SAD computation must be large enough to avoid ambiguity but small enough to avoid the effects of projective distortions.

To solve this problem, this work proposes a VLSI-oriented stereo matching algorithm with variable window sizes. The method is mainly based on the idea that an SAD graph has a unique and clear minimum at the reliable matching pixel. The window size is iteratively enlarged, so as to select for each pixel the smallest window that can avoid ambiguity, based on the uniqueness of the minimum of the SAD graph. In designing its VLSI processor, the major consideration is to achieve high utilization of the processing elements (PEs) for SAD computation. In SAD computation, the degree of parallelism between pixels in a window changes depending on the window size. Pixel-parallel SAD computation results in low utilization, since many PEs may not be utilized for a small window size. To solve this problem, an SAD is computed in a pixel-serial manner where a single absolute difference (AD) is computed in each control step. The regular data flow of the pixel-serial computation makes it possible to fully utilize a PE for SAD computation. Moreover, in a correspondence search, the degree of window-level parallelism is predetermined by the image width. Therefore, an equal number of candidate windows is assigned to each PE in advance so that the PEs are fully utilized. The VLSI processor is implemented in a TSMC 0.18 µm CMOS process.
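A minimal software model of the SAD matching step described above may help fix the idea. It uses a fixed window size rather than the variable window sizes proposed here, and the window size and disparity range are illustrative assumptions:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized windows."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def match_pixel(ref, cand, x, y, w=7, max_disp=16):
    """Find the disparity whose candidate window minimises the SAD.

    ref, cand : grey-scale images (2-D uint8 arrays)
    (x, y)    : pixel L in the reference image (window centre),
                assumed far enough from the image border
    w         : window size (odd), fixed here for simplicity
    """
    h = w // 2
    ref_win = ref[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_sad = 0, None
    for d in range(max_disp + 1):        # search along the scanline
        if x - d - h < 0:
            break
        cand_win = cand[y - h:y + h + 1, x - d - h:x - d + h + 1]
        s = sad(ref_win, cand_win)
        if best_sad is None or s < best_sad:
            best_d, best_sad = d, s
    return best_d, best_sad
```

In the hardware, each PE evaluates such candidate windows in a pixel-serial manner, one absolute difference per control step, rather than computing the whole window at once.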

BLOCK DIAGRAM:

[Figure: overall architecture showing the image memory, slices of PEs (PE1, PE2, PE3), a SAD/minimum-select unit (SAD2MIN), and a memory for intermediate results]

Fig 3.1.1 Overall Architecture


PROCESSING ELEMENT BLOCK DIAGRAM:

[Figure: PE block diagram showing a memory module feeding AD registers, subtract and add registers, a slice of shift registers, and the SAD output]

Fig 3.1.2 Block Diagram of PE

DETAILED PE BLOCK DIAGRAM:

Fig 3.1.3 Detailed structure of PE

3.1.1 Processing Element:

The major issue in designing the SAD unit is to design a compact and high-speed PE, since the PEs occupy most of the chip area. The figure shows the detailed structure of the PE. Pipeline registers allow overlapping the computation of an AD and an addition. In order to compute SADs using a 256-level gray-scale image and a maximum window size of 15 × 15, an 8-bit AD circuit and a 16-bit adder are used in the PE. The AD circuit consists of an 8-bit subtractor and 8 exclusive-OR (XOR) gates. When the borrow of the subtractor is one, that is, when the subtraction result is negative, its one's complement is obtained by XOR-ing the resulting bits with the borrow, and the two's complement of the subtraction result is obtained by using the borrow as the input carry of the least significant bit of the 16-bit adder. For an area-efficient and high-speed design, the 16-bit adder consists of an 8-bit adder and an 8-bit incrementer; the 8-bit adder is used to add the lower-order 8 bits, since the output of the AD circuit is 8 bits wide. The adder and the incrementer operate simultaneously; when the output carry of the adder is determined, it is used to select the proper output. An incrementer is more compact and faster than an adder of the same word length.
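The one's/two's-complement trick used by the AD circuit can be checked with a small bit-level model; this is only a software sketch of the idea, not the actual hardware description:

```python
def abs_diff_8bit(a, b):
    """Bit-level sketch of the PE's AD circuit (8-bit operands)."""
    borrow = 1 if a < b else 0      # borrow = 1 means a - b is negative
    result = (a - b) & 0xFF         # raw 8-bit subtractor output
    if borrow:
        # One's complement via XOR with the borrow (replicated to 8 bits),
        # then +1, with the borrow fed in as the adder's input carry
        result = ((result ^ 0xFF) + borrow) & 0xFF
    return result

# Exhaustive sanity check over a sample of 8-bit operand pairs
assert all(abs_diff_8bit(a, b) == abs(a - b)
           for a in range(0, 256, 7) for b in range(0, 256, 11))
```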

CHAPTER IV

MODIFIED SYSTEM

GENERAL BLOCK DIAGRAM:

[Figure: CCD camera connected over RS-232 to the FPGA (image comparison), which generates the control signal]

Fig 4.1 General block diagram of level, displacement and vibration monitoring & control

4.1.1 Image Acquisition:

In order to acquire a digital image, a physical device sensitive to a band in the electromagnetic energy spectrum is required. This device converts the light (x-ray, ultraviolet, visible, or infrared) information into a corresponding electrical signal. To convert this electrical signal into a digital signal, another device called a digitizer is employed. Among the many devices available, those most frequently used to sense visible and infrared light are the microdensitometer, the vidicon camera, and solid-state arrays.

4.1.2 RS-232:

In telecommunications, RS-232 (Recommended Standard 232) is a standard for serial binary data signals connecting a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. A similar ITU-T standard is V.24.

4.1.3 Scope of the standard

The Electronics Industries Association (EIA) standard RS-232-C as of 1969 defines:

• Electrical signal characteristics such as voltage levels, signaling rate, timing and slew rate of signals, voltage withstand level, short-circuit behavior, and maximum load capacitance.
• Interface mechanical characteristics, pluggable connectors and pin identification.
• Functions of each circuit in the interface connector.
• Standard subsets of interface circuits for selected telecom applications.

The standard does not define such elements as

• character encoding (for example, ASCII, Baudot code or EBCDIC)


• the framing of characters in the data stream (bits per character, start/stop bits, parity)
• protocols for error detection or algorithms for data compression

Details of character format and transmission bit rate are controlled by the serial port
hardware, often a single integrated circuit called a UART that converts data from parallel
to asynchronous start-stop serial form. Details of voltage levels, slew rate, and short-circuit
behavior are typically controlled by a line-driver that converts from the UART's logic
levels to RS-232 compatible signal levels, and a receiver that converts from RS-232
compatible signal levels to the UART's logic levels.
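For illustration, the character format and bit rate just described map directly onto the configuration of a serial-port library. A sketch using the pySerial library (the port name, bit rate and command string are assumptions for this example, not details of this project's hardware):

```python
import serial  # pySerial: pip install pyserial

# Open a serial port; every setting below is illustrative only.
port = serial.Serial(
    port="/dev/ttyUSB0",           # hypothetical device (e.g. COM1 on Windows)
    baudrate=9600,                 # transmission bit rate, handled by the UART
    bytesize=serial.EIGHTBITS,     # character framing: 8 data bits...
    parity=serial.PARITY_NONE,     # ...no parity bit...
    stopbits=serial.STOPBITS_ONE,  # ...one stop bit ("8N1")
    timeout=1.0,                   # read timeout in seconds
)

port.write(b"STATUS?\r\n")         # hypothetical command to the device
reply = port.readline()            # read one line back from the device
port.close()
```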

4.1.4 Limitations of the Standard

Because the application of RS-232 has extended far beyond the original purpose of
interconnecting a terminal with a modem, successor standards have been developed to address
the limitations. Issues with the RS-232 standard include:

• The large voltage swings and the requirement for positive and negative supplies increase the power consumption of the interface and complicate power supply design. The voltage swing requirement also limits the upper speed of a compatible interface.
• Single-ended signaling referred to a common signal ground limits the noise immunity
and transmission distance.
• Multi-drop connection among more than two devices is not defined. While multi-drop
"work-arounds" have been devised, they have limitations in speed and compatibility.
• Asymmetrical definitions of the two ends of the link make the assignment of the role
of a newly developed device problematic; the designer must decide on either a DTE-
like or DCE-like interface and which connector pin assignments to use.
• The handshaking and control lines of the interface are intended for the setup and
takedown of a dial-up communication circuit; in particular, the use of handshake lines
for flow control is not reliably implemented in many devices.
• No method is specified for sending power to a device. While a small amount of current
can be extracted from the DTR and RTS lines, this is only suitable for low power
devices such as mice.
• While the standard recommends a 25-way connector and its pinout, the connector is
large by current standards.

Role in Modern Personal Computers



RS-232 is gradually being replaced in personal computers by USB for local communications. Compared with RS-232, USB is faster, uses lower voltages, and has connectors that are simpler to connect and use. Both standards have software support in popular operating systems. USB is designed to make it easy for device drivers to communicate with hardware. However, there is no direct analog to the terminal programs used to let users communicate directly with serial ports. USB is more complex than the RS-232 standard because it includes a protocol for transferring data to devices. This requires more software to support the protocol used. RS-232 only standardizes the voltage of signals and the functions of the physical interface pins. Serial ports of personal computers are also often used to directly control various hardware devices, such as relays or lamps, since the control lines of the interface can be easily manipulated by software.

This isn't feasible with USB, which requires some form of receiver to decode the serial data. As an alternative, USB docking ports are available which can provide connectors for a keyboard, mouse, one or more serial ports, and one or more parallel ports. Corresponding device drivers are required for each USB-connected device to allow programs to access these USB-connected devices as if they were the original directly connected peripherals. Devices that convert USB to RS-232 may not work with all software on all personal computers and may cause a reduction in bandwidth along with higher latency.

4.1.5 Standard details



In RS-232, user data is sent as a time-series of bits. Both synchronous and asynchronous transmissions are supported by the standard. In addition to the data circuits, the standard defines a number of control circuits used to manage the connection between the DTE and DCE. Each data or control circuit only operates in one direction, that is, signaling from a DTE to the attached DCE or the reverse. Since transmit data and receive data are separate circuits, the interface can operate in a full-duplex manner, supporting concurrent data flow in both directions. The standard does not define character framing within the data stream, or character encoding.

4.1.5.1 Connectors

RS-232 devices may be classified as Data Terminal Equipment (DTE) or Data Communications Equipment (DCE); this defines, at each device, which wires will be sending and receiving each signal. The standard recommended, but did not make mandatory, the D-subminiature 25-pin connector. In general, and according to the standard, terminals and computers have male connectors with DTE pin functions, and modems have female connectors with DCE pin functions. Other devices may have any combination of connector gender and pin definitions. Many terminals were manufactured with female connectors but were sold with a cable with male connectors at each end; the terminal with its cable satisfied the recommendations in the standard. The presence of a 25-pin D-sub connector does not necessarily indicate an RS-232-C compliant interface. For example, on the original IBM PC, a male D-sub was an RS-232-C DTE port (with a non-standard current loop interface on reserved pins), but the female D-sub connector was used for a parallel Centronics printer port. Some personal computers put non-standard voltages or signals on some pins of their serial ports.

4.2 Charge-coupled device:

[Figure: a specially developed CCD used for ultraviolet imaging in a wire-bonded package]

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. Technically, CCDs are implemented as shift registers that move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. Often the device is integrated with a sensor, such as a photoelectric device, to produce the charge that is being read, thus making the CCD a major technology wherever the conversion of images into a digital signal is required. Although CCDs are not the only technology to allow for light detection, CCDs are widely used in professional, medical, and scientific applications where high-quality image data is required.

4.2.1 Basics of operation

[Figure: charge packets (electrons) are collected in potential wells created by applying positive voltage at the gate electrodes (G); applying positive voltage to the gate electrodes in the correct sequence transfers the charge packets]

In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon) and a transmission region made out of a shift register (the CCD, properly speaking). An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, while a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages, which it samples, digitizes, and stores in memory.
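This readout can be pictured with a toy software model; the gain value and the 3×3 exposure below are invented for illustration, and the per-pixel serial register stage is omitted for brevity:

```python
import numpy as np

def ccd_readout(charge, gain=0.01):
    """Toy CCD readout: every clock, all rows shift down one step and
    the bottom row is dumped into a charge amplifier (charge -> volts)."""
    charge = charge.astype(float).copy()
    voltages = []
    for _ in range(charge.shape[0]):
        voltages.append(gain * charge[-1].copy())  # amplify the last row
        charge[1:] = charge[:-1]                   # shift each row down
        charge[0] = 0.0                            # top row is now empty
    return np.concatenate(voltages)

# A 3x3 "exposure": charge proportional to the light at each location
exposure = np.array([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]])
print(ccd_readout(exposure))   # the bottom row (7 8 9) is read out first
```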

4.2.2 Detailed physics of operation

The photoactive region of the CCD is, generally, an epitaxial layer of silicon. It has a
doping of p+ (Boron) and is grown upon a substrate material, often p++. In buried channel
devices, the type of design utilized in most modern CCDs, certain areas of the surface of the
silicon are ion implanted with phosphorus, giving them an n-doped designation. This region
defines the channel in which the photogenerated charge packets will travel. The gate oxide, i.e.
the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later on in the
process polysilicon gates are deposited by chemical vapor deposition, patterned with
photolithography, and etched in such a way that the separately phased gates lie perpendicular
to the channels.

One should note that the clocking of the gates, alternately high and low, will forward and reverse bias the diode that is formed by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete near the p-n junction and will collect and move the charge packets beneath the gates, and within the channels, of the device. CCD manufacturing and operation can be optimized for different uses; the above process describes a frame-transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer, it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method reportedly reduces smear, dark current, and infrared and red response, and is used in the construction of interline-transfer devices.

4.2.3 Architecture

CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is its approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor, or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminium). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.

4.2.4 Sensor sizes

Sensors (CCD/CMOS) are often referred to with an imperial fraction designation such as 1/1.8" or 2/3"; this measurement actually originates in the 1950s and the time of Vidicon tubes. Compact digital cameras and digicams typically have much smaller sensors than a digital SLR and are thus less sensitive to light and inherently more prone to noise. Examples of the CCDs found in modern cameras can be found in published sensor-size tables.

4.3 DISTANCE MATRIX ALGORITHM:

FORMULA:

δ = Σ |Yi − Fi|    (1)

where
δ = difference,
Yi = standard image,
Fi = field image.

Distance Measure:

The distance between any two pixels in a given image can be given by three different
type of measures and they are
1.Euclidian Distance
2.D4 Distance and
3.D8 Distance

1. Euclidean Distance

The Euclidean distance between p and q is defined as

De(p, q) = ((x1 − x2)^2 + (y1 − y2)^2)^(1/2)    (2)

where (x1, y1) and (x2, y2) are the coordinates of the pixels p and q, respectively.

2. D4 Distance

The D4 distance, also called the city-block distance, between p and q is defined as

D4(p, q) = |x1 − x2| + |y1 − y2|    (3)

3. D8 Distance

The D8 distance, also called the chessboard distance, between p and q is defined as

D8(p, q) = max(|x1 − x2|, |y1 − y2|)    (4)

These measures give the distance between the pixels p and q.
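The three measures translate directly into code; a short sketch (the pixel coordinates are chosen arbitrarily):

```python
def euclidean(p, q):
    """De(p, q) = ((x1 - x2)^2 + (y1 - y2)^2)^(1/2), Eq. (2)."""
    (x1, y1), (x2, y2) = p, q
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def d4(p, q):
    """City-block distance, Eq. (3)."""
    (x1, y1), (x2, y2) = p, q
    return abs(x1 - x2) + abs(y1 - y2)

def d8(p, q):
    """Chessboard distance, Eq. (4)."""
    (x1, y1), (x2, y2) = p, q
    return max(abs(x1 - x2), abs(y1 - y2))

p, q = (0, 0), (3, 4)
print(euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```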

4.4 ROOT IMPLEMENTATION:

4.4.1 1S-Root Implementation



The 1S means that there is no other choice; we can consider the 1S-Root as the basis of the nS-Root implementations. Except for the first iteration, the non-restoring-remainder algorithm can be stated as: if Qi = 1, ri+1 = 4ri − (4qi + 1); else ri+1 = 4ri + (4qi + 3). The first iteration always subtracts 1 from 4r0. Because qi = 2qi−1 + 1 if Qi = 1 and qi = 2qi−1 + 0 otherwise, the algorithm turns into: if Qi = 1, ri+1 = 4ri − (8qi−1 + 5); else ri+1 = 4ri + (8qi−1 + 3). For any binary numbers u and v, u − v = u + (−v) = u + v̄ + 1 (v̄ being the one's complement of v), so we can replace 4ri − (8qi−1 + 5) with 4ri + (8q̄i−1 + 3). We get a new presentation of the algorithm as below.
1. r0 = D × 2^−32, q0 = 0, r1 = 4r0 + (−1);
2. If r1 ≥ 0, q1 = Q1 = 1, else q1 = Q1 = 0;
3. For i = 1 to 15: if Qi = 1, ri+1 = 4ri + (8q̄i−1 + 3), else ri+1 = 4ri + (8qi−1 + 3); if ri+1 ≥ 0, qi+1 = 2qi + 1, else qi+1 = 2qi + 0;
4. If r16 < 0, r16 = r16 + (2q16 + 1).
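An equivalent software model of this non-restoring recurrence is sketched below. It is written in integer form using the 4qi + 1 / 4qi + 3 version of the recurrence (which equals the 8qi−1 + 5 / 8qi−1 + 3 form above), with 16 iterations for a 32-bit radicand; the hardware instead pipelines these iterations across CSA stages:

```python
def nonrestoring_isqrt(D):
    """Non-restoring square root of a 32-bit radicand D.
    Returns (q, r) with q = floor(sqrt(D)) and r = D - q*q."""
    q, r = 0, 0
    for i in range(15, -1, -1):
        bits = (D >> (2 * i)) & 3                  # bring down next bit pair
        if r >= 0:
            r = (r << 2) + bits - ((q << 2) + 1)   # subtract 4q + 1
        else:
            r = (r << 2) + bits + ((q << 2) + 3)   # negative: add 4q + 3
        q = (q << 1) | (1 if r >= 0 else 0)        # root bit Qi = sign of r
    if r < 0:                                      # final remainder correction
        r += (q << 1) | 1
    return q, r

# Quick check against floor(sqrt(d)) for small radicands
assert all(nonrestoring_isqrt(d)[0] == int(d ** 0.5) for d in range(4096))
```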
Fig. 4.4.1 illustrates the square root calculation using a parallel CSA array. The ith partial remainder ri is presented by two groups of data, carry bits (Bij) and sum bits (Aij). Qij means Qj ⊕ Qi, which implements qi or q̄i: if Qi = 1, Qij = Q̄j, else Qij = Qj. Notice that Qi0 = Qi. Because 011 is always added to the lowest three bits of the partial remainder, the Bij for j = i − 1, i, i + 1, i + 2 and the Aij for j = i, i + 1, i + 2 can be simplified as shown in the figure. The concept diagram of the figure is shown in Fig. 4.4.2. We will use this figure style in the following discussion.

Fig 4.4.1 1S-Root implementation calculation

Fig 4.4.2 1S-Root implementation



The input of the circuit is the radicand D and the output is the square root Q. The outputs of the CSA at stage i are named Aij (sum bit) and Bij−1 (carry bit) for j = 1, 2, ..., i − 1. The partially developed square root qi = Q1Q2...Qi has i bits; therefore the partial remainder ri should have i + 1 bits. In order to check the sign of ri, it is only necessary to calculate ri with i + 2 bits. Here we can use a carry-lookahead circuit to determine Qi:

Qi = Ai1 ⊕ Bi1 ⊕ (Gi2 + Pi2Gi3 + ... + Pi2Pi3···Pii−3Gii−2 + Pi2Pi3···Pii−2Aii−1D2i−2(D2i−2 + D2i))    (5)

where Gij = AijBij and Pij = Aij + Bij. The circuit for generating a bit of the resulting value is simpler than a carry-lookahead adder (CLA) because the CLA needs to generate all of the carry bits for fast addition, whereas here only a single carry bit is required. We can use a special technique to speed up the carry-lookahead circuit; it was developed by Rowen, Johnson, and Ries and used in the MIPS R3010 floating-point coprocessor for the divider's quotient logic, fraction zero-detector, and others. By using this technique, Qi can be obtained with four levels of gates, i.e., twice that of a CSA (the CSA is implemented with two levels of gates).

BLOCK DIAGRAM:

Fig 4.4.3 1.5S-Root implementation

1.5S Implementation:

In the ith iteration, the computation of ri depends on Qi−1. There are two cases: if Qi−1 = 1, ri = 4ri−1 + (8q̄i−2 + 3); else ri = 4ri−1 + (8qi−2 + 3). Qi−1 is derived from ri−1. Actually, the two-case computations of ri can be started in parallel immediately after ri−1 is known. The two cases (Qi−1 = 1 and Qi−1 = 0) are computed simultaneously; the results are labeled r1i and r0i respectively. After Qi−1 is ready, a multiplexer selects the correct partial remainder (ri in the figure). The time required by the CSAs is hidden, because the additions and the Q generation are performed in parallel (the Q generation needs more time than the addition); however, the multiplexers introduce new delays. For the CSAs, only the carry-out generation needs to be duplicated; the sum generation (s = a ⊕ b ⊕ c) does not need to be duplicated, because a ⊕ b ⊕ c̄ = s̄. This saves more than 50% of the CSA area. In this implementation there is still only one root to choose, but the number of CSAs is increased; we call it the 1.5S-Root implementation.

CHAPTER V

CONCLUSION

This project presents a new methodology for high-speed image comparison using an FPGA, reducing the comparison time. Comparing images based on a threshold value in MATLAB and other techniques takes more time, but the proposed methodology reduces the time taken to compare the images.

In the second phase of the project, the coding will be written and implemented through simulation.

RESULT

[Figure: binary value output of the image]

PLAN OF ACTION FOR PHASE II

S.NO | DURATION                     | WORK DONE
-----|------------------------------|------------------------------------------------------------
1    | Dec '09 to middle of Jan '10 | Study of simulation tools (MATLAB / Xilinx) and the FPGA kit
2    | Middle of Jan '10 to Feb '10 | Design of the architecture and programming
3    | March '10                    | FPGA implementation
4    | April '10                    | Report preparation

REFERENCES

1. Wael M. EL-Medany, "FPGA Implementation for Humidity and Temperature Remote Sensing System", IEEE, 2008.

2. Masanori Hariyama, "FPGA Implementation of a High-Speed Stereo Matching Processor Based on Recursive Computation", Int'l Conf. on Reconfigurable Systems and Algorithms, 2009.

3. M. Z. Brown, "Advances in Computational Stereo", IEEE Computer Society, 2003.

4. H. Hirschmüller, "Improvements in Real-Time Correlation-Based Stereo Vision", IEEE, 2001.

5. S. Kimura, "A Convolver-Based Real-Time Stereo Machine (SAZAN)", 1999.
