
LabVIEW Machine Vision for Line Detection

EGR315: Instrumentation
Spring 2010
Than Aung and Mike Conlow
Department of Physics and Engineering
Elizabethtown College
Elizabethtown, Pennsylvania
Email: aungt@etown.edu, conlowm@etown.edu

Abstract – The aim of this work is to improve the previous development of a visual obstacle avoidance algorithm without using proprietary software that is not financially practical to purchase. The goal is to use more advanced methods to produce more concise output in terms of turning angle and the nearest point of interest.

I. Introduction

The following is an analysis of improvements made to a system previously developed using NI Vision Development software to detect white lines on non-uniform grass. The need for this arose from the over-complexity of the vision system on an autonomous robot that is used as a learning platform. The current system [5] uses a DVT Legend 554C that collects and filters the images internally and transmits the relevant data, via a TCP/IP string, to a LabVIEW program that performs closed-loop motor control. During the fall semester of 2009, a prototype virtual instrument was developed using the NI Vision Development package to attempt to improve processing speed by performing the entire image processing procedure in LabVIEW with a USB webcam [6].

There were several improvements that needed to be made to the prototype in order to justify its implementation over the previous vision system. The turning algorithm depended on a set of line detection sub virtual instruments that generated large amounts of noise due to inadequate intensity filtering. To resolve these and other issues, the filtering, thresholding, and line detection were programmed using the base package of LabVIEW, with LabVIEW IMAQ and IMAQ USB used to capture the images from a webcam [6].

The result is a great improvement over the previous version. Further enhancements still need to be implemented in order to operate properly in the field, but the goals that were set for this semester have been met.
II. Background

The previous project mainly employed NI Vision Development Module 9.0 (Trial Version), which provides various image processing and machine vision tools. Using its edge-detection sub virtual instrument, we implemented the following line detection algorithm.

The image resolution is set to 320x240 pixels, captured at 8 frames per second. Each frame is converted to an 8-bit gray-scale image, and the image is then segmented into regions as follows [6]:

Figure 1: Edge Detection Regions

White lines are detected with IMAQ Edge Detection by finding lines in the eight border regions shown in green in Figure 1. In our algorithm, we use two vertical line detectors (VL1 and VL2) and two horizontal line detectors (HL1 and HL2). VCL (Vertical Center Line) is then calculated by averaging VL1 and VL2. Likewise, HCL (Horizontal Center Line) is calculated by averaging HL1 and HL2. The line angle is then calculated by finding the angle between HCL and VCL using the standard relation

tan(α) = |(m2 − m1) / (1 + m1·m2)|

where m2 is the slope of HCL and m1 is the slope of VCL. Using the intersection point and the angle between HCL and VCL, the appropriate heading for the robot is determined.
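As a concrete illustration of this step (a sketch for reference, not part of the original virtual instrument), the angle can be computed directly from the two slopes:

import math

def line_angle_deg(m_vcl, m_hcl):
    """Angle between two lines given their slopes, in degrees.

    Uses tan(alpha) = |(m2 - m1) / (1 + m1*m2)| and returns 90 degrees
    when the lines are perpendicular (denominator of zero).
    """
    denom = 1.0 + m_vcl * m_hcl
    if denom == 0.0:
        return 90.0
    return math.degrees(math.atan(abs((m_hcl - m_vcl) / denom)))

# Example: a nearly vertical VCL (slope 8.0) against a shallow HCL (slope 0.1)
print(round(line_angle_deg(8.0, 0.1), 1))   # about 77.2 degrees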
Although the algorithm seems simple enough, it has several drawbacks. First, when converting from 32-bit color images to 8-bit gray-scale images there is a loss of edge information in every frame. In the presence of background noise it is very difficult to detect stable edges, making line detection less accurate. Second, using four edge detectors is unnecessarily redundant, and over-use of edge detectors results in slower processing. Third, we did not have time to implement the filters to eliminate the noise and to threshold out the unnecessary pixel information. Finally, since we used the 30-day trial version of NI Vision Development Module, the only way to continue using the program was to purchase the three-thousand-dollar full version.

Therefore, the primary motivation of our project was to solve the problems we faced with NI Vision Development and to improve upon the shortcomings of the first project. With these goals in mind, we developed the second version of our line detection algorithm.

III. Implementation

Our project goals were to reduce the noise during image acquisition, enhance the edge information, and stabilize the detected line even with background reflections and light sources present. Therefore, we divided the project into modular processes to achieve these goals.

A. Single Color Extraction

The images acquired from the camera (Creative VF-0050) are 320x240, 32-bit color images. Although we could simply convert the 32-bit color (RGB) images to 8-bit grayscale images by averaging the color planes, we have learned a better method for eliminating the noise and enhancing the edge information. Since the background of the images is mostly green, we decided that simply extracting the blue color pixels from the RGB images would reduce the noise and enhance the white lines. The thought process behind this is that the dirt and grass are mostly composed of reds and greens, so if we only look at objects composed of some amount of blue, the most intense blues will be whites.

In binary format, a 32-bit color pixel is represented as follows:

Alpha Red Green Blue
xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx

(x is a binary 1 or 0). In order to extract the blue color information, we performed an AND operation on the 32 color bits with the following binary bit mask [2]:

0000 0000 0000 0000 0000 0000 1111 1111
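A minimal sketch of this blue-plane extraction, written in Python with NumPy for illustration rather than as the LabVIEW code actually used, and assuming the frame arrives as a 2-D array of packed 32-bit ARGB pixel values:

import numpy as np

def extract_blue_plane(argb_frame: np.ndarray) -> np.ndarray:
    """Return the 8-bit blue plane of a packed 32-bit ARGB image.

    argb_frame: 2-D array of uint32 values, one packed pixel per element.
    The AND with 0x000000FF keeps only the lowest 8 bits, i.e. the blue channel.
    """
    blue = argb_frame & np.uint32(0x000000FF)
    return blue.astype(np.uint8)

# Tiny example: one bright white pixel next to one pure green pixel
frame = np.array([[0xFFFFFFFF, 0xFF00AA00]], dtype=np.uint32)
print(extract_blue_plane(frame))   # [[255   0]]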
Figure 2: 32-bit Color Image

Figure 3: 8-bit Blue Color Image

Figure 4: Blue Color Extraction

By extracting the blue color plane from an image, only pixels with a high blue intensity will appear white. This reduces some of the noise from high-intensity greens and reds. To eliminate noise from natural reflections, a spatial convolution using a flattening filter is used to further enhance the image edges.

B. Spatial Convolution Filter

To prevent large quantities of noise it was necessary to implement a convolution using a 7x7 flatten kernel [2]. Since the image is represented as a two-dimensional matrix of blue intensity values after the color extraction, a convolution using a kernel of all ones is applied to the image, reducing isolated high intensity values. The reason this is done after the blue plane is extracted is to prevent the high-intensity greens and reds from mixing with the blues, which would make the extracted blue plane inaccurate.

Figure 5: LabVIEW Convolution

The figure above is the virtual instrument for a convolution, where X is the image matrix and Y is the convolution kernel. Since many convolutions are being performed, the algorithm uses frequency-domain convolution. The operation first requires that the image matrix be padded horizontally and vertically by one minus the width and height of the kernel [4]. Then, as the kernel is shifted over the image, the padded matrix is filled with the values of the convolution, which is computed by taking the Fourier transforms of X and Y, multiplying them together element-wise as two-dimensional matrices, and transforming the result back to give the desired values at the original resolution of 320x240 [4]. The following figure is a representation of how spatial convolution is used to apply a kernel to a simple set of data.

Figure 6: Padding and Filtering
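The same flattening operation can be sketched outside LabVIEW. The fragment below is a rough NumPy illustration of the frequency-domain approach described above: it convolves the blue plane with a 7x7 all-ones kernel by multiplying zero-padded FFTs, then divides by 49 so the result stays within one byte per pixel.

import numpy as np

def flatten_filter(blue_plane: np.ndarray, k: int = 7) -> np.ndarray:
    """Frequency-domain convolution of an 8-bit image with a k x k ones kernel."""
    img = blue_plane.astype(np.float64)
    h, w = img.shape
    ph, pw = h + k - 1, w + k - 1                 # pad so the linear convolution does not wrap around
    kernel = np.ones((k, k))
    spectrum = np.fft.rfft2(img, (ph, pw)) * np.fft.rfft2(kernel, (ph, pw))
    conv = np.fft.irfft2(spectrum, (ph, pw))      # back to the spatial domain
    top, left = (k - 1) // 2, (k - 1) // 2        # crop the centered region at the original resolution
    flattened = conv[top:top + h, left:left + w] / (k * k)
    return np.clip(flattened, 0, 255).astype(np.uint8)

smoothed = flatten_filter(np.random.randint(0, 256, (240, 320), dtype=np.uint8))
print(smoothed.shape)   # (240, 320)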


From the resulting matrix it is possible to get the average value of every 7x7 neighborhood by dividing the elements by forty-nine, which keeps the values in the matrix to one byte per index. Now all the high-intensity noise that would have thrown off the later line detection should be gone, as long as the noise does not appear in large groups.

From the flattened image it should now be much easier to find the white lines. However, since the image was flattened, the line edges will not be as intense as they were. So the next step is to determine the highest intensity values in the image and attempt to detect only the highest intensities within a set range.

C. Intensity Analysis

After the single-color extraction and filtering, the image seems ready for edge detection. However, there is still one problem to solve before performing the edge detection. Under non-uniform background lighting, the maximum image intensity and the intensity distribution of the image change accordingly, and it is almost impossible to perform normal thresholding to detect the edges. Therefore, we need to analyze the image intensity distribution. In order to do so, we first acquire the intensity histogram of the image, which includes both the intensity range and the frequency of each intensity value. Once we know the intensity values of the image and their frequencies, it becomes much easier to determine the edges we are interested in.

Figure 7: Intensity Histogram

In Figure 7, we can see clearly that the maximum image intensity is around 200 and the minimum image intensity is around 15. However, even with different background lighting, there is one thing we know for sure: the white lines always have the maximum intensity. Therefore, if we can extract the intensity range from 180 to 200, we can detect the white lines of the image.
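The histogram step can be mimicked in a few lines of NumPy; this is an illustrative stand-in for the LabVIEW histogram function, not the code used in the project:

import numpy as np

def intensity_stats(flattened: np.ndarray):
    """Histogram of an 8-bit image plus its minimum and maximum intensities."""
    counts, _ = np.histogram(flattened, bins=256, range=(0, 256))
    present = np.nonzero(counts)[0]            # intensity values that actually occur
    return counts, int(present.min()), int(present.max())

counts, i_min, i_max = intensity_stats(np.random.randint(15, 201, (240, 320), dtype=np.uint8))
print(i_min, i_max)                            # roughly 15 and 200 for this synthetic frame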
Figure 8: Intensity Analysis

In Figure 8 it can be seen that the highest value found in the histogram is passed on to the next part of the program. Also, at the bottom of the figure there is a user control called Interval that contains the range of intensities accepted as white. This is where the adaptive threshold receives its maximum intensity and intensity range.

D. Adaptive Thresholding

Thresholding is the simplest yet most powerful method for image segmentation. Mathematically, thresholding can be described as [1]:

g(x,y) = 1 if f(x,y) > T, and g(x,y) = 0 otherwise

where f(x,y) is the input image, g(x,y) is the thresholded output image, and T is the threshold. Generally, thresholding uses a fixed value of T to segment the images. In this case we use a variable threshold value, which is adjusted according to the background lighting as discussed in the previous section. Since we already know the maximum intensity of the image from the intensity analysis, we calculate the variable threshold as

T = (maximum intensity) − Interval

Figure 9: Adaptive Thresholding

Figure 10: Thresholded Image / Interval = 20
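A compact sketch of this adaptive threshold, assuming the Interval value of 20 used in the tests (an illustration, not the LabVIEW block diagram itself):

import numpy as np

def adaptive_threshold(flattened: np.ndarray, interval: int = 20) -> np.ndarray:
    """Keep only pixels within `interval` counts of the frame's maximum intensity."""
    i_max = int(flattened.max())                 # from the intensity analysis step
    threshold = i_max - interval                 # variable threshold tracks the background lighting
    return (flattened >= threshold).astype(np.uint8)   # 1 = candidate white-line pixel

binary = adaptive_threshold(np.random.randint(0, 201, (240, 320), dtype=np.uint8), interval=20)
print(int(binary.sum()), "pixels above the adaptive threshold")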
E. Hough Transformation

Once we get the edge pixels after the adaptive thresholding, we need to link them together to form a meaningful line. To accomplish this task a Hough Transformation is used to bridge any gaps in the line that may appear. This gives us the position and direction of the line in the field of view [1][2].

In a Hough Transformation, each pixel (x, y) is transformed from Cartesian space to Hough space, H(R, θ), as follows:

R = x·cos(θ) + y·sin(θ)

where 0 < R < the diagonal length of the image and −90° ≤ θ < 90°. If two pixels (x1, y1) and (x2, y2) are co-linear, we get the same values of R and θ. In other words, a line in Cartesian space is represented as a point in Hough space. A simple Hough Transformation can be achieved by using a two-dimensional accumulator (array) whose axes correspond to R and θ. Each cell of the accumulator is defined by a unique pair of R and θ values, and the value inside each cell is increased according to the number of co-linear points in Cartesian space. However, for practical purposes, this algorithm is too slow for real-time image processing. Therefore, we must use the MATLAB 'hough' function for our line detection [2].
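For reference, the accumulator voting described above can be written out directly; this brute-force version is only meant to make the scheme concrete, since, as noted, a practical implementation needs an optimized routine:

import numpy as np

def hough_accumulator(binary: np.ndarray, n_theta: int = 180):
    """Vote every foreground pixel of a binary image into an (R, theta) accumulator."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(-90, 90, 180 / n_theta))
    acc = np.zeros((2 * diag, len(thetas)), dtype=np.int64)    # R offset by +diag so indices stay non-negative
    ys, xs = np.nonzero(binary)
    for t_idx, theta in enumerate(thetas):
        r = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (r, t_idx), 1)                          # co-linear pixels pile up in a single cell
    return acc, thetas, diag

# A vertical line of white pixels at x = 160 should dominate the accumulator
binary = np.zeros((240, 320), dtype=np.uint8)
binary[:, 160] = 1
acc, thetas, diag = hough_accumulator(binary)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
print(r_idx - diag, round(float(np.rad2deg(thetas[t_idx])), 1))   # roughly 160 and 0.0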
Once the accumulator is filled, we look for the maximum cell value stored in the accumulator and its related R and θ. The resulting R and θ represent the line we are interested in.

Figure 11: Hough Space (R, θ)

Once we get R and θ, we need to shift them; the most efficient way to implement the shifting equations is to use a formula node.

Once the line is generated by the Hough Transformation, the line values are sent to a line detection algorithm to determine how to properly handle the possibility of crossing the line.

F. Line Detection Algorithm

Once we get the values of x, y, x1 and y1, we use the following line detection algorithm to calculate the line angle. This is used to determine whether the robot needs to turn, along with what direction and how sharply the turn should be made.

Figure 12: Turning Algorithm (flowchart: the first test checks whether x > 0 and y > 0; each branch then checks whether 0 < α < 30, leading to Go Straight, Turn Left, or Turn Right; Right = x1 > 160 and Left = Not(Right))


Right will tell us if the detected line is located on the left side of the camera, and Left will tell us if the detected line is located on the right side of the camera. These are decided by the x coordinate of the pixel the nearest line falls on in the bottom row of the image.
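As a rough sketch only, the decision tree in Figure 12 might be restated in code as below; the 30-degree limit and the 160-pixel midline come from the figure, but the exact pairing of branches with turn directions, and every identifier, is assumed here:

def steering_command(x: float, y: float, alpha: float) -> str:
    """Hypothetical restatement of the Figure 12 decision tree (branch pairing assumed)."""
    if x > 0 and y > 0:                        # a line intersection was found in the frame
        return "go straight" if 0 < alpha < 30 else "turn left"
    return "go straight" if 0 < alpha < 30 else "turn right"

def right_flag(x1: float) -> bool:
    """The Right flag from Figure 12: Right = x1 > 160, Left = Not(Right)."""
    return x1 > 160

print(steering_command(x=12, y=40, alpha=48.0), "| Right =", right_flag(200))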
IV. Results and Performance Analysis

In order to test the reliability and performance of our algorithm, we carried out a series of tests with different scenarios. The results were captured and are shown below; for each set of conditions there is a picture of what the camera sees, followed by what the program interprets as the proper avoidance maneuver along with where the nearest line is.

Test 1: Simple Right
Test 2: Right w/ Obstacle
Test 3: Parallel
Test 4: Simple Left
Test 5: Left w/ Obstacle

According to the test results, we found that our new algorithm gives more accurate and reliable results than our old algorithm. In addition, since we do not use NI Vision Development and wrote the whole project with intrinsic LabVIEW functions, we also solved the problems related to software expiration. One problem that still needs to be dealt with is that, if no line is present, the adaptive threshold will still pass the largest set of intensities on to the line detection. This has to be solved before the system can be declared a fully functional obstacle avoidance utility.
V. Further Improvements
Although our algorithm is satisfactory to some extent, there is a lot to be improved upon that would require more time and a budget for additional equipment. First of all, we use a monocular vision system to detect the lines. By adding a second camera, the system could be reprogrammed to have one camera handle the left line and the other dedicated to the right line. This would allow greater control, since the visual field would be doubled.

For the project to be a feasible substitute for the current system, the algorithm will need the ability to distinguish whether or not a line is even present. Without this ability the system would need a guarantee that either the left or right line would be in the field of view. For our purposes this is not acceptable, but the improvements made are still enough to show a definite advance over the previous prototype.

VI. References

1. Davies, E.R. Machine Vision. 2nd ed. San Diego: Academic Press, 1997. 80-269.
2. González, Rafael C.; Woods, Richard Eugene; and Eddins, Steven L. Digital Image Processing Using MATLAB. Pearson Prentice Hall, 2004. 380-406.
3. Jähne, Bernd. Digital Image Processing. 6th ed. Heidelberg: Springer-Verlag, 2005. 331-340.
4. "NI Vision Acquisition Software." National Instruments, 30 Nov 2009. http://sine.ni.com/psp/app/doc/p/id/psp-394
5. Painter, James G. Vision System for Wunderbot IV Autonomous Robot. Elizabethtown College, 9 May 2008.
6. Aung, Than L. and Conlow, Michael. Alternative Vision System for Wunderbot V. Elizabethtown College, 9 Dec 2009.
