CHAPTER I
INTRODUCTION
To monitor and control parameters such as liquid level and the displacement and vibration of a control valve, this project applies digital image processing (DIP) techniques implemented on an FPGA. A sample image is compared with a reference image; when changes occur, a control signal is generated and the parameters are monitored.
This method replaces analog sensors such as those for level measurement (floating gate, touch plate, ultrasonic wave), speed measurement (tachometer, laser-beam-based counter), and moisture measurement (cylindrical capacitance type).
Conventional sensing has several drawbacks. Transmission errors occur over long distances, since the signals from transducers are in analog form. Transducers themselves often give erroneous signals, yet decisions are based on those signals. Different types of sensors are needed for different parameters (e.g., a strain gauge (SG) for pressure, an RTD for temperature, an LVDT for displacement). Human fatigue and sleepiness can also lead to errors.
CHAPTER II
VLSI (Very Large Scale Integration) is the science of integrating millions of transistors on a silicon chip. The term describes semiconductor integrated circuits composed of hundreds of thousands of logic elements or memory cells. Integration of transistor-based circuits onto a single chip at this scale first occurred in the 1980s as part of the semiconductor and communication technologies then being developed. VLSI is one of several IC integration techniques. The following list shows the integration techniques that led the path to VLSI.
SSI (Small Scale Integration), the first integration technique, put tens of transistors on a chip (1960). MSI (Medium Scale Integration) meant a microchip containing hundreds of transistors (1960s). LSI (Large Scale Integration) meant microchips containing thousands of transistors (1970s). VLSI (Very Large Scale Integration) meant microchips containing millions of transistors (1980s). The two main enablers of this technology were Moore's law and the structured design process. For the first time it became possible to fabricate a CPU, or even an entire microprocessor, on a single integrated circuit. In 1986 the first one-megabit RAM chips were introduced, containing more than one million transistors. Microprocessor chips produced in 1994 contained more than three million transistors. This step was largely made possible by the codification of "design rules" for the CMOS technology used in VLSI chips, which made production of working devices much more of a systematic endeavor. Techniques with even more gates than VLSI followed, such as ULSI (Ultra Large Scale Integration).
VLSI Applications:
VLSI finds applications in all aspects of life: consumer electronics, defense, computers, communication, space, networking, etc. Some of the applications are wireless LAN, reconfigurable computing, wearable computers, home networking, Bluetooth, and bus interfaces such as PCI, FireWire, and USB.
• VLSI is an implementation technology for electronic circuitry - analog or digital.
• It is concerned with forming a pattern of interconnected switches and gates on the surface of a crystal of semiconductor.
• Microprocessors
• Personal computers
• Microcontrollers
• Memory - DRAM / SRAM
We can classify VLSI techniques by the degree of customization of the IC. Two types of customization are available:
1. Full custom
2. Semi custom
Full custom example: ASIC.
Semi custom examples: PLD, CPLD, FPGA.
1) ASIC (Application Specific Integrated Circuit)
2) CPLD (Complex Programmable Logic Device)
3) FPGA (Field Programmable Gate Array)
ASICs are silicon chips that have been designed for a specific application. ASIC devices cannot be used in general-purpose systems; each such integrated circuit is designed for a specific task. Sound cards, modems, and video cards often contain ASICs.
Examples: baseband processing in mobile phones, chipsets in PCs, and MPEG encoders/decoders.
We are in the midst of a visually enchanting world, which manifests itself with a variety of forms and shapes, colors and textures, motion and tranquility. Human perception has the capability to acquire, integrate, and interpret all this abundant visual information around us. It is challenging to impart such capabilities to a machine so that it can interpret the visual information embedded in still images, graphics, and video or moving images in our sensory world. It is thus important to understand the techniques of storage, processing, transmission, recognition, and finally interpretation of such visual scenes. In this chapter we attempt to provide glimpses of the diverse areas of visual information analysis techniques. The first step towards designing an image analysis system is digital image acquisition using sensors in optical or thermal wavelengths. A two-dimensional image recorded by these sensors is a mapping of the three-dimensional visual world. The captured two-dimensional signals are sampled and quantized to yield digital images. Sometimes we receive noisy images that are degraded by some degrading mechanism. One common source of image degradation is the optical lens system in a digital camera that acquires the visual
information. If the camera is not appropriately focused then we get blurred images. Here the
blurring mechanism is the defocused camera. Very often one may come across images of
outdoor scenes that were procured in a foggy environment. Thus any outdoor scene captured on a foggy winter morning could invariably result in a blurred image. In this case the
degradation is due to the fog and mist in the atmosphere, and this type of degradation is known
as atmospheric degradation. In some other cases there may be a relative motion between the
object and the camera.
Thus if the camera is given an impulsive displacement during the image capturing
interval while the object is static, the resulting image will invariably be blurred and noisy. In
some of the above cases, we need appropriate techniques of refining the images so that the
resultant images are of better visual quality, free from aberrations and noises. Image
enhancement, filtering, and restoration have been some of the important applications of image
processing since the early days of the field. Segmentation is the process that subdivides an image into a number of uniformly homogeneous regions. Each homogeneous region is a constituent part or object in the entire scene. In other words, segmentation of an image is defined by a set of regions that are connected and non-overlapping, so that each pixel in a segment of the image acquires a unique region label indicating the region it belongs to. Segmentation is one of the most important elements in automated image analysis, mainly because at this step the objects or other entities of interest are extracted from an image for subsequent processing, such as description and recognition. For example, in the case of an aerial image containing the ocean and land, the problem is to segment the image initially into two parts: the land segment and the water body or ocean segment. Thereafter the objects on the land part of the scene need to be appropriately segmented and subsequently classified. After extracting each segment, the next task is to extract a set of meaningful features such as texture, color, and shape. These are important measurable entities which give measures of various properties of image segments. Some of the texture properties are coarseness, smoothness, and regularity, while the common shape descriptors are length, breadth, aspect ratio, area, location, perimeter, compactness, etc. Each segmented region in a scene may be characterized by a set of such
features. Finally, based on the set of these extracted features, each segmented object is classified into one of a set of meaningful classes. In a digital image of the ocean, these classes may
be ships or small boats or even naval vessels and a large class of water body. The problems of
scene segmentation and object classification are two integrated areas of studies in machine
vision. Expert systems, semantic networks, and neural network-based systems have been
found to perform such higher-level vision tasks quite efficiently. Another aspect of image processing involves compression and coding of the visual information. With the growing demand from various imaging applications, the storage requirements of digital imagery are growing explosively. Compact representation of image data, and their storage and transmission through limited communication bandwidth, is a crucial and active area of development today. Interestingly enough, image data generally contain a significant amount of superfluous and redundant information in their canonical representation. An image compression technique helps to reduce the redundancies in raw image data in order to reduce the storage and communication bandwidth required.
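The redundancy-reduction idea can be illustrated with a minimal run-length encoding sketch. This is an illustrative example of removing repeated-value redundancy, not a technique named in this report:

```python
def rle_encode(pixels):
    # Run-length encoding: collapse runs of identical pixel values
    # into (value, count) pairs, removing the repetition redundancy
    # described in the text above.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs
```

A row of mostly-black pixels such as [0, 0, 0, 255, 255, 0] encodes to [[0, 3], [255, 2], [0, 1]], which is shorter whenever long runs dominate.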
2.2.1 Image
• Picture, photograph
• Visual data
• Usually two or three dimensional
Digital image:
A binary image contains 1 for white and 0 for black. An intensity (grayscale) image contains numbers in the range 0 to 255, or 0 to 1.
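The two image types above can be sketched in plain Python. This is an illustrative example; the names `binary_img`, `intensity_img`, and `to_binary` are ours, not from the report:

```python
# A binary image: 1 = white, 0 = black.
binary_img = [
    [0, 1, 1],
    [1, 0, 1],
]

# An intensity (grayscale) image: values in the range 0..255.
intensity_img = [
    [0, 128, 255],
    [64, 32, 200],
]

def to_binary(img, threshold=128):
    # Threshold an intensity image into a binary image:
    # pixels at or above the threshold become white (1).
    return [[1 if p >= threshold else 0 for p in row] for row in img]
```

With the default threshold, `to_binary(intensity_img)` maps the middle-gray and bright pixels to 1 and the darker ones to 0.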
Manufacturers favour machine vision systems for visual inspections that require high-
speed, high-magnification, 24-hour operation, and/or repeatability of measurements.
Frequently these tasks extend roles traditionally occupied by human beings whose degree of
failure is classically high through distraction, illness and circumstance. However, humans may
display finer perception over the short period and greater flexibility in classification and
adaptation to new defects and quality assurance policies. Computers do not 'see' in the same way that human beings are able to. Cameras are not equivalent to human optics, and while
people can rely on inference systems and assumptions, computing devices must 'see' by
examining individual pixels of images, processing them and attempting to develop conclusions
with the assistance of knowledge bases and features such as pattern recognition engines.
Although some machine vision algorithms have been developed to mimic human visual
perception, a number of unique processing methods have been developed to process images
and identify relevant image features in an effective and consistent manner. Machine vision and
computer vision systems are capable of processing images consistently, but computer-based
image processing systems are typically designed to perform single, repetitive tasks, and
despite significant improvements in the field, no machine vision or computer vision system
can yet match some capabilities of human vision in terms of image comprehension, tolerance
to lighting variations and image degradation, parts' variability etc.
A typical machine vision system will consist of several among the following components:
1. One or more digital or analog cameras (black-and-white or color) with suitable optics for acquiring images
2. Camera interface for digitizing images (widely known as a "frame grabber")
3. A processor (often a PC or embedded processor, such as a DSP)
4. (In some cases, all of the above are combined within a single device, called a smart
camera).
5. Input/Output hardware (e.g. digital I/O) or communication links (e.g. network
connection or RS-232) to report results
6. Lenses to focus the desired field of view onto the image sensor.
7. Suitable, often very specialized, light sources (LED illuminators, fluorescent or halogen lamps, etc.)
8. A program to process images and detect relevant features.
9. A synchronizing sensor for part detection (often an optical or magnetic sensor) to
trigger image acquisition and processing.
The sync sensor determines when a part (often moving on a conveyor) is in position to be
inspected. The sensor triggers the camera to take a picture of the part as it passes beneath the
camera and often synchronizes a lighting pulse to freeze a sharp image. The lighting used to
illuminate the part is designed to highlight features of interest and obscure or minimize the
appearance of features that are not of interest (such as shadows or reflections). LED panels of suitable sizes and arrangement are often used for this purpose. The camera's image is captured by the frame grabber, or by computer memory in PC-based systems where no frame grabber is utilized.
Commercial and open source machine vision software packages typically include a number of different image processing techniques. In most cases, a machine vision system will use a sequential combination of these processing techniques to perform a complete inspection. For example, a system that reads a barcode may also check a surface for scratches or tampering and measure the length and width of a machined component.
The applications of machine vision (MV) are diverse, covering many areas of endeavour.
CHAPTER III
FPGA Implementation For Humidity And Temperature Remote Sensing System [1]
This paper presents the hardware design and implementation of a real-time remote sensing system for humidity and temperature. The design is based on an FPGA (Field Programmable Gate Array) for the hardware implementation of the controller circuit and GSM (Global System for Mobile communications) for remote monitoring. The controller circuit has been described in VHDL (VHSIC Hardware Description Language). The design has been simulated using ModelSim from Model Technology and implemented using Xilinx ISE 6.2i software tools. The FPGA Spartan-3E Starter Kit from Digilent has been used for the hardware implementation of the controller circuit. The system offers a low-cost and user-friendly way of 24-hour real-time remote monitoring of temperature and humidity using SMS (Short Message Service) messages.
This paper presents a processor architecture for high-speed stereo matching based on SAD (Sum of Absolute Differences) computation. To reduce its computational complexity, a hardware-oriented algorithm exploiting common intermediate results between SADs is proposed. When designing the image processor, one vital issue is to find a scheduling that reduces the amount of data transferred between the external image memory and the on-chip local memory modules while maintaining the degree of parallelism. For this purpose, the authors propose a scheduling that exploits disparity-level parallelism, whereas conventional scheduling approaches exploit window-level parallelism. The result of the FPGA implementation shows that a frame rate of 200 frames/sec is achieved by using 64 PEs implemented on a single Altera Stratix EP1S40F1020C7 at 80 MHz for VGA images.
This paper presents a VLSI processor for reliable stereo matching to establish
correspondence between images by selecting a desirable window size for sum of absolute
differences (SAD) computation. In SAD computation, a degree of parallelism between pixels
in a window changes depending on its window size, while a degree of parallelism between
windows is predetermined by the input-image size. Based on this consideration, a window-parallel and pixel-serial architecture is proposed to achieve 100% utilization of the processing elements. Not only the 100% utilization but also a simple interconnection network between memory modules and processing elements makes this VLSI processor much superior to pixel-parallel-architecture-based VLSI processors.
A stereovision system that is required to support high-level object-based tasks in a tele-operated environment is described. Stereovision is computationally expensive, due to having to find corresponding pixels. Correlation is a fast, standard way to solve the correspondence problem. This paper analyses the behavior of correlation-based stereo to find ways to improve its quality while maintaining its real-time suitability. Three methods are suggested. Two of them aim to improve the disparity image, especially at depth discontinuities, while one targets the identification of possible errors in general. Results are given on real stereo images with ground truth. A comparison with five standard correlation methods shows that improvements over simple stereo correlation are possible in real time on current computer hardware.
Hardware square-root units require large numbers of gates even for iterative implementations. This paper presents four low-cost, high-performance, fully pipelined n-select implementations (nS-Root) based on a non-restoring-remainder square root algorithm. The nS-Root uses a parallel array of carry-save adders (CSAs). For each square-root bit calculation, a CSA is used once, which means the calculations can be fully pipelined. It also uses the n-way root-select technique to speed up the square root calculation. The cost/performance evaluation shows that n = 2 or n = 2.5 is a suitable solution for designing a high-speed, fully pipelined square root unit while keeping the cost low.
General-purpose processors are used to accelerate the correspondence search. One commonly used method to establish correspondence between images is the SAD (Sum of Absolute Differences) method. Given a pixel L in one image (the reference image), an SAD is computed between a rectangular window centered at L and a candidate window at each possible location in the other image (the candidate image). In usual cases, the window sizes are empirically predetermined, and the candidate pixel with the smallest SAD is determined to be the corresponding pixel of L. The major problem with SAD-based matching is that the window size for SAD computation must be large enough to avoid ambiguity but small enough to avoid the effects of projective distortions. To solve this problem, this paper proposes a VLSI-oriented stereo matching algorithm with variable window sizes. The method is mainly based on the idea that an SAD graph has a unique and clear minimum at the reliable matching pixel. A window size is iteratively enlarged to select as small a window for each pixel as possible that can avoid ambiguity, based on the uniqueness of the minimum of the SAD graph. In designing its VLSI processor, the major consideration is to achieve high utilization of the processing elements (PEs) for SAD computation. In SAD computation, the degree of parallelism between pixels in a window changes depending on the window size. Pixel-parallel SAD computation results in low utilization, since many PEs may not be utilized for a small window size. To solve this problem, an SAD is computed in a pixel-serial manner where a single absolute difference (AD) is computed in each control step. The regular data flow of the pixel-serial computation makes it possible to fully utilize a PE for SAD computation. Moreover, in a correspondence search, the degree of window-level parallelism is predetermined by the image width. Therefore, an equal number of candidate windows is assigned to each PE in advance so that the PEs are fully utilized. The VLSI processor is implemented in a TSMC 0.18 µm CMOS process.
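The SAD search described above can be sketched in software. This is a hedged illustration only; the actual design computes SADs pixel-serially in hardware, and the function names and the rectified-stereo row search here are our assumptions:

```python
def sad(ref, cand, cx, cy, rx, ry, w):
    # Sum of Absolute Differences between the w x w window centered at
    # (rx, ry) in the reference image and the one centered at (cx, cy)
    # in the candidate image, accumulated one AD at a time (pixel-serial).
    half = w // 2
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            total += abs(ref[ry + dy][rx + dx] - cand[cy + dy][cx + dx])
    return total

def best_match(ref, cand, rx, ry, w, max_disp):
    # Scan candidate windows along the same row (the usual rectified
    # stereo case) and pick the disparity with the smallest SAD.
    return min(range(max_disp + 1),
               key=lambda d: sad(ref, cand, rx - d, ry, rx, ry, w))
```

If the candidate image is the reference shifted by two pixels, the smallest SAD is found at disparity 2, exactly the "clear minimum of the SAD graph" the paper relies on.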
BLOCK DIAGRAM:
[Figure: block diagram of the matching processor: image memory, slice units, SAD/MIN unit, sub-register, and memory modules.]
PROCESSING ELEMENT BLOCK DIAGRAM:
[Figure: processing element structure: input registers R1/C1 and R2/C2, subtract register (SUB-REG), add registers (ADD-REG), slice unit, shift register, and SAD output.]
The major issue in designing the SAD unit is to design a compact and high-speed PE, since the PEs occupy most of the chip area. The figure shows the detailed structure of the PE. Pipeline registers allow overlapping computation of an AD and an addition. In order to compute SADs using 256-level gray-scale images and the maximum window size of 15 × 15, an 8-bit AD circuit and a 16-bit adder are used in the PE. The AD circuit consists of an 8-bit subtractor and 8 exclusive-OR (XOR) gates. When the borrow of the subtractor is one, that is, when the subtraction result is negative, its one's complement is obtained by XOR-ing the resulting bits with the borrow, and the two's complement of the subtraction result can be obtained by using the borrow as the input carry of the least significant bit of the 16-bit adder. For an area-efficient and high-speed design, the 16-bit adder consists of an 8-bit adder and an 8-bit incrementer. The 8-bit adder is used to add the lower-order 8 bits, since the output of the AD circuit is 8 bits wide. The adder and the incrementer operate simultaneously; when the output carry of the adder is determined, it is used to select the proper output. An incrementer is more compact and faster than an adder of the same word length.
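The AD-circuit trick described above, taking the one's complement with XOR gates driven by the borrow and then feeding the borrow back in as a carry to complete the two's complement, can be modeled bit-accurately in software. This sketch is ours, not the paper's circuit:

```python
def abs_diff_8bit(a, b):
    # Software model of the AD circuit: an 8-bit subtractor whose
    # borrow, when set, drives 8 XOR gates to take the one's complement
    # of the result, and is then used as the carry-in that completes
    # the two's complement (yielding |a - b|).
    diff = (a - b) & 0xFF              # 8-bit subtractor output
    borrow = 1 if a < b else 0         # borrow-out of the subtractor
    if borrow:
        ones = diff ^ 0xFF             # XOR every result bit with the borrow
        return (ones + borrow) & 0xFF  # borrow as carry-in: two's complement
    return diff
```

Complementing only when the borrow is set is what lets the hardware reuse a single subtractor for both signs of the difference.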
CHAPTER IV
4 MODIFIED SYSTEM:
[Figure: modified system: a CCD camera connects via RS-232 to the FPGA, which performs the image comparison and generates the control signal.]
4.1.2 RS232:
23
• Electrical signal characteristics such as voltage levels, signaling rate, timing and slew rate of signals, voltage withstand level, short-circuit behavior, and maximum load capacitance.
• Interface mechanical characteristics, pluggable connectors and pin identification.
• Functions of each circuit in the interface connector.
• Standard subsets of interface circuits for selected telecom applications.
Details of character format and transmission bit rate are controlled by the serial port
hardware, often a single integrated circuit called a UART that converts data from parallel
to asynchronous start-stop serial form. Details of voltage levels, slew rate, and short-circuit
behavior are typically controlled by a line-driver that converts from the UART's logic
levels to RS-232 compatible signal levels, and a receiver that converts from RS-232
compatible signal levels to the UART's logic levels.
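The UART's parallel-to-serial conversion described above can be sketched as follows. This is an illustrative model; `uart_frame` is a hypothetical helper, and real UARTs also support an optional parity bit, which is omitted here:

```python
def uart_frame(byte, data_bits=8, stop_bits=1):
    # Asynchronous start-stop framing as performed by a UART:
    # one low start bit, the data bits LSB-first, then high stop
    # bit(s). The line idles high between frames.
    bits = [0]                                            # start bit
    bits += [(byte >> i) & 1 for i in range(data_bits)]   # data, LSB first
    bits += [1] * stop_bits                               # stop bit(s)
    return bits
```

For example, the byte 0x55 (binary 01010101) frames to a strictly alternating bit pattern, which is why it is a popular test character for serial links.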
Because the application of RS-232 has extended far beyond the original purpose of
interconnecting a terminal with a modem, successor standards have been developed to address
the limitations. Issues with the RS-232 standard include:
• The large voltage swings and requirement for positive and negative supplies increases
power consumption of the interface and complicates power supply design. The voltage
swing requirement also limits the upper speed of a compatible interface.
• Single-ended signaling referred to a common signal ground limits the noise immunity
and transmission distance.
• Multi-drop connection among more than two devices is not defined. While multi-drop
"work-arounds" have been devised, they have limitations in speed and compatibility.
• Asymmetrical definitions of the two ends of the link make the assignment of the role
of a newly developed device problematic; the designer must decide on either a DTE-
like or DCE-like interface and which connector pin assignments to use.
• The handshaking and control lines of the interface are intended for the setup and
takedown of a dial-up communication circuit; in particular, the use of handshake lines
for flow control is not reliably implemented in many devices.
• No method is specified for sending power to a device. While a small amount of current
can be extracted from the DTR and RTS lines, this is only suitable for low power
devices such as mice.
• While the standard recommends a 25-way connector and its pinout, the connector is
large by current standards.
Connecting legacy serial peripherals directly is not feasible with USB, which requires some form of receiver to decode the serial data. As an alternative, USB docking ports are available which can provide connectors for a keyboard, mouse, one or more serial ports, and one or more parallel ports. Corresponding device drivers are required for each USB-connected device to allow programs to access these USB-connected devices as if they were the original directly-connected peripherals. Devices that convert USB to RS-232 may not work with all software on all personal computers and may cause a reduction in bandwidth along with higher latency.
4.1.5.1 Connectors
[Figure: a specially developed CCD, used for ultraviolet imaging, in a wire-bonded package.]
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. Technically, CCDs are implemented as shift registers that move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. Often the device is integrated with a sensor, such as a photoelectric device, to produce the charge that is being read, thus making the CCD a major technology wherever the conversion of images into a digital signal is required. Although CCDs are not the only technology to allow for light detection, CCDs are widely used in professional, medical, and scientific applications where high-quality image data is required.
The charge packets (electrons, blue) are collected in potential wells (yellow) created by applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate electrodes in the correct sequence transfers the charge packets. In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). An image is projected through a lens onto the
capacitor array (the photoactive region), causing each capacitor to accumulate an electric
charge proportional to the light intensity at that location. A one-dimensional array, used in
line-scan cameras, captures a single slice of the image, while a two-dimensional array, used in
video and still cameras, captures a two-dimensional picture corresponding to the scene
projected onto the focal plane of the sensor. Once the array has been exposed to the image, a
control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift
register). The last capacitor in the array dumps its charge into a charge amplifier, which
converts the charge into a voltage. By repeating this process, the controlling circuit converts
the entire contents of the array in the semiconductor to a sequence of voltages, which it
samples, digitizes, and stores in memory.
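The bucket-brigade readout described above can be modeled with a toy sketch. This is illustrative only; a real CCD shifts analog charge packets, not numbers, and the amplifier and ADC stages are collapsed into a single sample here:

```python
def ccd_readout(row):
    # One line of the readout: on every clock, each capacitor passes
    # its charge to its neighbour, and the last capacitor dumps its
    # charge into the charge amplifier, producing one sample per shift.
    charges = list(row)
    samples = []
    for _ in range(len(row)):
        samples.append(charges[-1])   # last capacitor -> charge amplifier
        charges = [0] + charges[:-1]  # shift every packet one stage along
    return samples
```

Note that the samples come out in reverse spatial order (the pixel nearest the output amplifier is read first); the controlling circuit accounts for this when storing the line in memory.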
The photoactive region of the CCD is, generally, an epitaxial layer of silicon. It has a
doping of p+ (Boron) and is grown upon a substrate material, often p++. In buried channel
devices, the type of design utilized in most modern CCDs, certain areas of the surface of the
silicon are ion implanted with phosphorus, giving them an n-doped designation. This region
defines the channel in which the photogenerated charge packets will travel. The gate oxide, i.e.
the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later on in the
process polysilicon gates are deposited by chemical vapor deposition, patterned with
photolithography, and etched in such a way that the separately phased gates lie perpendicular
to the channels.
One should note that the clocking of the gates, alternately high and low, will forward and reverse bias the diode that is formed by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete near the p-n junction and will collect and move the charge packets beneath the gates, and within the channels, of the device.
It should be noted that CCD manufacturing and operation can be optimized for different uses.
The above process describes a frame transfer CCD. While CCDs may be manufactured on a
heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have
been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and
infrared and red response. This method of manufacture is used in the construction of interline
transfer devices.
4.2.3 Architecture
The CCD image sensors can be implemented in several different architectures. The
most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of
each of these architectures is their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor, or the image smears as the device is clocked or read out.
With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically
aluminium). The image can be quickly transferred from the image area to the opaque area or
storage region with acceptable smear of a few percent. That image can then be read out slowly
from the storage region while a new image is integrating or exposing in the active area.
Frame-transfer devices typically do not require a mechanical shutter and were a common
architecture for early solid-state broadcast cameras. The downside to the frame-transfer
architecture is that it requires twice the silicon real estate of an equivalent full-frame device;
hence, it costs roughly twice as much.
Sensors (CCD/CMOS) are often referred to with an imperial fraction designation such as 1/1.8" or 2/3"; this measurement actually originates back in the 1950s and the time of Vidicon tubes. Compact digital cameras and digicams typically have much smaller sensors than a digital SLR and are thus less sensitive to light and inherently more prone to noise.
FORMULA:

δ = Σi |Yi − Fi|        (1)

where
δ = difference,
Yi = standard image,
Fi = field image.
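Assuming Eq. (1) accumulates the absolute pixel differences between the standard image Y and the field image F (our reading, consistent with the SAD methods surveyed earlier), a software sketch might look like this; the zero threshold is also our assumption:

```python
def difference(standard, field):
    # Accumulated absolute difference between the standard image Y and
    # the field image F; images are given as flat lists of pixel values.
    return sum(abs(y - f) for y, f in zip(standard, field))

# Assumption: any nonzero difference raises the control signal.
THRESHOLD = 0

def control_signal(standard, field):
    # True when the field image deviates from the standard image,
    # mirroring the change-triggered control signal described earlier.
    return difference(standard, field) > THRESHOLD
```

In practice the threshold would be tuned above zero to tolerate sensor noise; the FPGA performs the same comparison in hardware.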
Distance Measure:
The distance between any two pixels p = (x, y) and q = (s, t) in a given image can be given by three different types of measure:
1. Euclidean distance
2. D4 distance
3. D8 distance
1. Euclidean distance: De(p, q) = sqrt((x − s)² + (y − t)²)
2. D4 (city-block) distance: D4(p, q) = |x − s| + |y − t|
3. D8 (chessboard) distance: D8(p, q) = max(|x − s|, |y − t|)
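For pixels p = (x, y) and q = (s, t), the three measures can be written directly in code, a straightforward sketch using the standard definitions:

```python
import math

def euclidean(p, q):
    # De(p, q) = sqrt((x - s)^2 + (y - t)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # City-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # Chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

For the pixel pair (0, 0) and (3, 4) the three measures give 5.0, 7, and 4 respectively, illustrating that D8 ≤ De ≤ D4 always holds.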
The 1S means that there is no other choice. We can consider the 1S-Root as the basis of the nS-Root implementations. Except for the first iteration, the non-restoring-remainder algorithm can be presented as follows: if Qi = 1, ri+1 = 4ri − (4qi + 1), else ri+1 = 4ri + (4qi + 3). The first iteration always subtracts 1 from 4r0. Because qi = 2qi−1 + 1 if Qi = 1, and qi = 2qi−1 + 0 otherwise, the algorithm becomes: if Qi = 1, ri+1 = 4ri − (8qi−1 + 5), else ri+1 = 4ri + (8qi−1 + 3). For any binary numbers u and v, u − v = u + (−v), which in hardware is the addition of the bitwise complement of v plus 1, so the subtraction of (8qi−1 + 5) can be replaced by the addition of its two's complement. We get a new presentation of the algorithm as below.
1. r0 = D × 2^−32, q0 = 0, r1 = 4r0 + (−1);
2. If r1 ≥ 0, q1 = Q1 = 1, else q1 = Q1 = 0;
3. For i = 1 to 15 do: if Qi = 1, ri+1 = 4ri − (8qi−1 + 5) (implemented as addition of the two's complement), else ri+1 = 4ri + (8qi−1 + 3);
   if ri+1 ≥ 0, qi+1 = 2qi + 1, else qi+1 = 2qi + 0;
4. If r16 < 0, r16 = r16 + (2q16 + 1);
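The iteration above can be checked with a plain-software model of the generic radix-2 non-restoring square root. This sketch uses the basic 4q + 1 / 4q + 3 update form rather than the hardware's complement-and-add form, and 16 iterations cover a 32-bit radicand:

```python
def nonrestoring_isqrt(d, n_bits=16):
    # Generic radix-2 non-restoring square root: consume the radicand
    # two bits at a time; subtract (4q + 1) when the remainder is
    # non-negative, add (4q + 3) otherwise, and correct the final
    # remainder with (2q + 1) if it ends up negative.
    q = 0                                   # partially developed root
    r = 0                                   # partial remainder (signed)
    for i in range(n_bits - 1, -1, -1):
        pair = (d >> (2 * i)) & 0b11        # next two radicand bits
        if r >= 0:
            r = (r << 2) + pair - ((q << 2) + 1)
        else:
            r = (r << 2) + pair + ((q << 2) + 3)
        q = (q << 1) | (1 if r >= 0 else 0)
    if r < 0:
        r += (q << 1) + 1                   # final remainder correction
    return q, r                             # q = floor(sqrt(d)), d = q*q + r
```

The key property of the non-restoring form, mirrored in the hardware, is that a wrong subtraction is never undone in the same step; it is compensated by the next addition.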
Fig. 4.4.1 illustrates the square root calculations using a parallel CSA array. The ith partial remainder ri is presented by two groups of data: carry bits (Bij) and sum bits (Aij). The signal Qij implements qi or its complement: if Qi = 1, Qij = Qj, else Qij is the complement of Qj. Notice that Qi0 = Qi. Because 011 is always added to the lowest three bits of the partial remainder, the Bij for j = i − 1, i, i + 1, i + 2 and the Aij for j = i, i + 1, i + 2 can be simplified as shown in the figure. The concept diagram of the figure is shown in Fig. 4.4.2. We will use this figure style in the following discussion.
The input of the circuit is the radicand D and the output is the square root Q. The outputs of the CSA at stage i are named Aij (sum bit) and Bij−1 (carry bit) for j = 1, 2, ..., i − 1. The partially developed square root qi = Q1Q2...Qi has i bits. Therefore the partial remainder ri should be i + 1 bits; in order to check the sign of ri, it needs to be calculated with only i + 2 bits. Here we can use a carry-lookahead circuit to determine Qi, where Gij = AijBij and Pij = Aij + Bij. The circuit for generating a bit of the resulting value is simpler than a carry-lookahead adder (CLA), because the CLA needs to generate all of the carry bits for fast addition, whereas here only a single carry bit is required. We can use a special technique to speed up the carry-lookahead circuit. It was developed by Rowen, Johnson, and Ries and used in the MIPS R3010 floating-point coprocessor for the divider's quotient logic, the fraction zero-detector, and others. By using this technique, Qi can be obtained with four levels of gates, i.e., twice that of a CSA (the CSA is implemented with two levels of gates).
BLOCK DIAGRAM:
1.5S Implementation:
In the ith iteration, the computation of ri depends on Qi−1. There are two cases: if Qi−1 = 1, ri = 4ri−1 − (8qi−2 + 5), else ri = 4ri−1 + (8qi−2 + 3). Qi−1 is derived from ri−1. Actually, the two-case computations of ri can be started in parallel immediately after ri−1 is known. The two cases (Qi−1 = 1 and Qi−1 = 0) are computed simultaneously, and the results are labeled r1i and r0i respectively. After Qi−1 is ready, a multiplexor selects the correct partial remainder (ri in the figure). The time required by the CSAs is hidden, because the additions and the generation of the Q's are performed in parallel (the Q generation needs more time than the addition). But the multiplexors used here introduce new delays. For the CSAs, only the carry-out generation needs to be duplicated; the sum generation (s = a ⊕ b ⊕ c) does not need to be duplicated, because complementing a pair of inputs leaves the sum unchanged. This saves more than 50% of the CSA area. In this implementation there is still only one root we can choose, but the number of CSAs is increased. We call it the 1.5S-Root implementation.
CHAPTER V
CONCLUSION
This project presents a new methodology for high-speed image comparison using an FPGA, which reduces the comparison time. Comparing images based on a threshold value in MATLAB and other techniques takes more time, but the proposed methodology reduces the time taken to compare the images.
In the second phase of the project, the coding will be written and implemented through simulation.
RESULT
REFERENCES