
sensors

Article
A Vision-Aided 3D Path Teaching Method before
Narrow Butt Joint Welding
Jinle Zeng, Baohua Chang, Dong Du *, Guodong Peng, Shuhe Chang, Yuxiang Hong, Li Wang
and Jiguo Shan
Key Laboratory for Advanced Materials Processing Technology, Ministry of Education, Department of
Mechanical Engineering, Tsinghua University, Beijing 100084, China; zengjl12@mails.tsinghua.edu.cn (J.Z.);
bhchang@tsinghua.edu.cn (B.C.); pengguodong12@163.com (G.P.); changsh15@mails.tsinghua.edu.cn (S.C.);
hongyuxiang@tsinghua.edu.cn (Y.H.); wanglidme@tsinghua.edu.cn (L.W.); shanjg@tsinghua.edu.cn (J.S.)
* Correspondence: dudong@tsinghua.edu.cn; Tel.: +86-10-6278-3387

Academic Editor: Vittorio M. N. Passaro


Received: 30 March 2017; Accepted: 8 May 2017; Published: 11 May 2017

Abstract: For better welding quality, accurate path teaching for actuators must be achieved before
welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may
be different from the predetermined path. Therefore, it is significant to recognize the actual groove
position using machine vision methods and perform an accurate path teaching process. However,
during the teaching process of a narrow butt joint, the existing machine vision methods may fail
because of poor adaptability, low resolution, and lack of 3D information. This paper proposes
a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual
information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting
condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition.
The 3D position and pose between the welding torch and groove can be calculated after information
fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of
2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable
for groove recognition before narrow butt joint welding and can be applied in path teaching fields of
3D complex components.

Keywords: seam tracking; visual detection; narrow butt joint recognition; position and pose detection;
automatic path teaching

1. Introduction
In 3D complex component welding fields, the relative motion path between the welding torch
and the workpiece must be reasonably planned beforehand to achieve good weld quality, so that the
welding torch is kept aligned with the groove during the whole welding process [1–6]. The recognition
of the actual groove position is a prerequisite for motion planning before welding. Currently,
the teaching-playback mode is widely used in most automatic actuators such as welding robots [7–9].
The workers recognize the groove position by their eyes and guide the actuators to move along the
groove. The actuators record the groove position during guidance so that they can move along the
recorded path in the following welding process. The teaching-playback mode is usually rather
time-consuming and not accurate enough. Many robot manufacturers have developed offline
programming software [10–13] that can perform automatic path planning by computers based on the
component's CAD model. However, machining errors, assembly errors, deformations, etc. may cause
deviations between the actual groove position and the path predetermined by the teaching-playback
or the offline programming method. The teaching-playback mode and offline programming

Sensors 2017, 17, 1099; doi:10.3390/s17051099 www.mdpi.com/journal/sensors



technique cannot correct the deviations caused by the factors above. Therefore, it is essential to
develop an automatic groove recognition method to detect the 3D position of the actual groove.
Compared with other groove recognition methods (arc sensing [14], tactile sensing [15], acoustic
sensing [16], eddy current sensing [17], etc.), the machine vision method does not need to contact the
workpiece and usually gives more abundant information and more accurate results. It has been
considered to be one of the most promising groove recognition technologies [18]. However, there are
still many serious challenges when detecting the groove position of aerospace and precision
instrument components. First, for precise and effective welding without filling too much metal, the
components to be welded are usually so closely assembled that no gap is reserved purposely, which
is called the narrow butt joint. The cross section of the narrow butt joint is shown in Figure 1.
The width of the narrow butt joint may be less than 0.1 mm, and it is necessary to develop a precise
detection method for this condition. Second, the camera only captures 2D images of the workpiece
surface. However, in 3D groove welding fields, both the 3D position and the normal vector coordinates
of each point in the groove curve must be obtained.

Figure 1. The cross section of the narrow butt joint. The workpieces to be welded are closely
assembled and no gap is reserved purposely.

The structured light detection method is currently the most widely used groove recognition
technique in industry. Many companies such as Meta Vision [19], Servo Robot [20], Scout [21], and
Scansonic [22] have developed various structured light sensors. These sensors project laser stripes
onto the workpiece surface and the stripes become distorted due to the geometric differences between
the base metal and the groove [23–27]. The groove position is then recognized by detecting the
inflection points of the laser stripe curves. Moreover, the 3D cross-section information of the groove
can be obtained based on the triangulation principle. The structured light detection method has
proved its applicability when there are significant geometric differences between the base metal and
the groove, as shown in Figure 2a. However, it may fail in narrow butt joint detection because of the
nearly zero gap between the workpieces, as shown in Figure 2b.
Researchers have proposed many other methods to achieve accurate narrow butt joint detection.
Shen [28], Chen [29], and Ma [30] captured the images of the narrow butt joint in positive lighting
conditions (auxiliary light source needed) or passive lighting conditions (no auxiliary light source
needed). The position of the narrow butt joint is recognized based on the grayscale gradient differences
between the base metal and the groove. Huang used a laser stripe with a large width and projected it
onto the workpiece surface [31]. The gap of the narrow butt joint would result in the discontinuity of
the laser stripe curve, which indicates the groove position. These methods mentioned above can only
obtain the 2D groove position using the grayscale image of the workpiece surface, but the distance and
angles between the welding torch and the workpiece surface are necessary in 3D groove welding.
There are also several 3D groove detection methods proposed by other researchers.
Zheng proposed a 3D groove detection method using a circular laser spot projected onto the workpiece
surface [32,33]. The projected laser spot on the workpiece surface was deformed and a camera was
used to capture images of the elliptical laser spot. The angle between the workpiece surface and the
camera was calculated using the major axis length and minor axis length of the spot. In addition,
the position and direction deviation between the groove and the welding torch was directly represented
by the position and direction of the groove curve in the image. Zheng's calculation method
implies an assumption that the camera captures images based on the parallel projection principle,
which is inaccurate because most cameras capture images based on the perspective projection
principle. Zeng proposed another 3D groove detection method combining various visual information
in different lighting conditions [18]. This method only gives a solution when the workpiece surface
is approximately a flat plane. Moreover, no 3D groove detection experiment is shown for this method.
In summary, there is no suitable 3D groove detection method in narrow butt joint recognition fields.

Figure 2. The structured light detection method applied in V-groove and narrow butt joint detection.
(a) V-groove detection; (b) Narrow butt joint detection.

This paper proposes a 3D groove detection method to solve the problems in narrow butt joint
recognition fields, which can be used in automatic path teaching before welding. This method captures
images when an LED surface light source and a cross-line laser light source are on. The 2D pixel
coordinates of the groove curve can be calculated when LED surface light source is on. Meanwhile,
we can obtain the 3D point cloud of the workpiece surface when cross-line laser light source is on.
After fusing these two kinds of information, we can calculate the 3D position and normal vector
coordinates of each point in the groove curve. The LED surface light source and cross-line laser
light source are both controlled by the trigger signal, which makes two light sources switch on
alternately. The camera is synchronized to capture images when each light source is on. In this
way, we can detect 3D position and pose between the welding torch and the workpiece nearly at the
same time. Finally, 3D groove detection experiments are carried out to verify the proposed method.
Calibration is needed before the system works: the intrinsic parameters of the camera are calibrated
using Zhang's method [34], the light plane equations of the cross-line laser in camera coordinate system
are calibrated using Zou's method [35], and the rotational and translational transformation matrices
between camera coordinate system and world coordinate system are calibrated using a common
hand-eye calibration method.
The rest of the paper is organized as follows: Section 2 introduces the configuration of our
experiment platform and visual sensor. Section 3 illustrates the synchronization method of the camera,
LED surface light source and cross-line laser light source. We produce trigger signal at the end of each
exposure. Section 4 proposes the processing algorithm when LED surface light source is on. The pixel
coordinate of each point in the groove curve would be extracted after image processing. Section 5
proposes the processing algorithm when cross-line laser light source is on. We obtain the 3D point
cloud data of the workpiece surface. Combining the pixel coordinate of the groove curve and 3D point
cloud of the workpiece surface, we calculate the 3D position and normal vector coordinates of the
groove. 3D groove recognition experiments are carried out in Section 6, and conclusions are given in
Section 7.

2. Configuration of the Experiment Platform
The experiment platform used in this paper is shown in Figure 3. It is comprised of a GTAW
welding torch, a visual sensor, a 2D translational stage and an industrial computer. The visual sensor
is mounted in front of the welding torch. The world coordinate system {W} is fixed on the welding
torch, in which the xW axis and yW axis of {W} are parallel with two moving directions of the 2D
translational stage respectively. The workpiece is placed on the terminal of the 2D translational stage,
so that it can move along the xW axis and yW axis with the translational stage. The CPU frequency of
the industrial computer is 2.30 GHz and the RAM size is 4 GB.

Figure 3. The experiment platform and the world coordinate system used in this paper.

When the workpiece moves with the translational stage, the visual sensor detects the position of
the narrow butt joint. Meanwhile, the industrial computer processes the image captured by the sensor
and calculates the deviations between the groove and the FOV (field of view) of the sensor. Then the
industrial computer controls the movement of the workpiece, so that the narrow butt joint always
locates in the FOV of the sensor. In this way, the whole groove curve can be recognized and the path
teaching process is finished.
The configuration of our visual sensor is shown in Figure 4a. It is comprised of a Gigabit Ethernet
(GigE) camera, a narrow band filter, an LED surface light source (LSLS), and a cross-line laser light
source (CLLLS). The GigE camera is a Basler acA1600-60gm, whose maximum frame rate is 60 fps at
the maximum image size (1200 × 1600 pixels). The power of the LSLS is 6.1 W and its central wavelength
is about 630 nm. The wavelength range of the LSLS is about 610 nm to 650 nm. Figure 4b shows the
configuration of the LSLS. The light emitted from the LED array is reflected by the diffuse surfaces,
then passes through the bottom aperture, and finally projects onto the workpiece surface. The reflected
light from the workpiece surface passes through the bottom aperture and top aperture sequentially
and is finally collected by the camera. The power of the CLLLS is 20 mW and its central wavelength
is 635 nm. The narrow band filter is used to filter out the ambient light, and its central wavelength
and FWHM (full width at half maximum) are 635 nm and 10 nm respectively. The light emitted from
the LSLS and CLLLS can pass through the narrow band filter. The working distance of our visual sensor is
20 mm to 40 mm, and the FOV is about 15 mm × 20 mm at 30 mm in distance. When the working
distance is 30 mm, the image resolution of our visual sensor reaches about 12.5 μm, which is suitable
for narrow butt joint detection with an extremely small width.

The LSLS and CLLLS are designed to provide different visual information from the workpiece
surface. When the LSLS is on, the camera captures the grayscale image of the workpiece surface in
uniform lighting condition, by which we obtain the 2D pixel coordinates of the narrow butt joint after
image processing. When the CLLLS is on, the camera captures the cross-line laser stripe and we can
calculate the 3D coordinate of each point in each stripe when considering the intrinsic parameters of
the camera. Therefore, the 3D point cloud data of the workpiece surface can be obtained. Fusing these
two kinds of visual information, we can calculate the 3D position and normal vector coordinates of
the groove.
Figure 4. The configuration of our visual sensor and LED surface light source (LSLS) for narrow butt
joint detection. (a) The configuration of the visual sensor; (b) The configuration of the LSLS. The unit
of the dimensions in the figure is millimeter (mm).

3. Multiple Visual Features Synchronous Acquisition Method


We design a synchronous acquisition method to obtain two kinds of visual information in
different lighting conditions. The LSLS and CLLLS are controlled by the trigger signal, so that they
are switched on alternately. The camera is then synchronized to capture images when each light
source is on. Traditionally, different information would be obtained using different cameras.
However, we only use one camera to capture multiple visual features in our proposed method, which
eliminates the error source from calibration between two separate measurements.

Figure 5 shows the timing diagrams of the camera exposure signal and trigger signal. The camera
works in continuous acquisition mode, which means that the camera continuously acquires images
using its internal clock signal. The camera exposure signal in Figure 5 shows the internal clock signal.
The camera starts to capture a new image when the rising edge of the camera exposure signal arrives
and finishes capturing when the falling edge comes.

Figure 5. The timing diagrams of the camera exposure signal and light source trigger signal.

The trigger signal is generated from the digital output port of the camera. The trigger signal
stays at high level at the beginning and is automatically inverted by the software program at the end
of each exposure. The LSLS and CLLLS are respectively switched on during the high level and low
level periods. Since the trigger signal is inverted at the end of each exposure, it always synchronizes
with the camera shutter and its frequency is exactly half of the frame rate. Although the LSLS and
CLLLS may both be switched on at the moment the trigger signal is inverted, the exposure of the
previous frame has stopped and the exposure of the next frame has not begun yet (the rising time tup
and falling time tdown are both less than 15 μs, which is less than the blanking time between adjacent
frames). Therefore, the two light sources are never both switched on in the same exposure period.
In this way, the different kinds of visual information do not interfere with each other.
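As an illustration of this acquisition scheme, the sketch below mimics the alternate-and-capture loop in Python. It is a minimal sketch only: the DummyCamera class and its method names are illustrative stand-ins, not the real Basler GigE interface, which is not reproduced here.

```python
# Minimal sketch of the alternating trigger logic (hypothetical camera interface).
# A high output level lights the LSLS, a low level lights the CLLLS; the level is
# inverted once per exposure, so its frequency is half of the camera frame rate.

class DummyCamera:
    """Stand-in for the GigE camera: grab_frame() represents one exposure."""
    def __init__(self):
        self.output_high = True              # digital output line driving the lights

    def set_output_line(self, high: bool) -> None:
        self.output_high = high

    def grab_frame(self) -> str:
        # A real camera would return image data; here we just report which
        # light source was active during the exposure.
        return "LSLS image" if self.output_high else "CLLLS image"

camera = DummyCamera()
trigger_high = True
camera.set_output_line(trigger_high)

for frame_index in range(6):                 # a few frames for illustration
    frame = camera.grab_frame()              # exposure with the current light source
    print(frame_index, frame)
    trigger_high = not trigger_high          # invert at the end of each exposure,
    camera.set_output_line(trigger_high)     # inside the inter-frame blanking time
```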

4. 2D Narrow Butt Joint Recognition Method


Figure 6 shows the reflection situation of the workpiece surface in uniform lighting condition.
Although there is no gap left purposely after assembly, an extremely small gap would exist due to
the machining errors or slight morphological differences between the sidewalls of the two workpieces.
The gap may be less than 0.1 mm, but we exaggerate the gap in Figure 6 for better explanation. Due to
the small gap, some of the incident light passes through the narrow butt joint and some is
reflected or absorbed by the sidewalls of the groove, as shown in Figure 6. Therefore, the reflected
light intensity in the narrow butt joint region is lower than the intensity in the base metal region.
When the LSLS in our sensor is on, there is a dark curve in the captured image, indicating the position
of the narrow butt joint. In order to recognize the extremely small gap of the narrow butt joint, the
image resolution of our sensor is designed to be about 12.5 μm at 30 mm distance, as mentioned above.

Figure 6. The reflection situation of the workpiece surface in uniform lighting condition.

Figure 7 shows one of the images captured by the camera when the LSLS is on. The actual width
of the narrow butt joint is about 0.1 mm. The grayscale of the base metal is nearly saturated and the
grayscale of the narrow butt joint is nearly zero, which is beneficial to accurate and fast groove
detection.

Figure 7. The grayscale image captured by the camera when the LSLS is on.

According to the image captured above, we propose a corresponding processing algorithm to
calculate the 2D pixel coordinates of the narrow butt joint. Considering that the FOV of our sensor is
small (about 15 mm × 20 mm at 30 mm distance), the groove can be partially treated as a straight line
approximately. The steps of our image processing algorithm are as follows:

1. Sub-image division. We divide the original image into several sub-images. In each sub-image,
the groove is more like a straight line. The original image is divided into four sub-images along
its column direction in this paper.
2. Valid pixel points extraction. Let the threshold T be 20% of the maximum grayscale in the image.
Scan each column of the sub-image and find the longest line segment whose grayscale is less than
T. Denote the center of the longest line segment in each column as the valid pixel point.
3. Hough transform [36]. We apply Hough transform to each sub-image and only the valid pixel
points are considered when transforming. In Hough transform, each valid pixel point (x, y) in the
image corresponds to a sine curve ρ = x·cosθ + y·sinθ in the parameter space (ρ, θ), where ρ and θ
are the dependent and independent variables in the parameter space respectively. The Hough
accumulator matrix A(ρ, θ) is obtained afterward. Since the groove is not exactly a straight line,
the resolution of the Hough accumulator should not be too small. We set the Hough accumulator
resolution to Δρ = 10 and Δθ = 5°.
4. Maximum accumulator selection. Find the maximum position Amax in the Hough accumulator
matrix A(ρ, θ). The maximum accumulator Amax indicates the position of the groove curve.
Traverse all of the valid pixel points and find the corresponding points whose mapped sine
curves ρ = x·cosθ + y·sinθ pass through the maximum accumulator Amax. These points are all
located in the groove curve.
5. Linear interpolation. According to the points found in Step 4, we use the linear interpolation
method to calculate the whole groove curve.
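To make Steps 1–4 concrete, the following NumPy sketch reimplements them in a simplified form. It is an illustration rather than the original implementation (the function name detect_groove and all variable names are ours); the sub-image count, the 20% threshold, and the accumulator resolutions Δρ = 10 and Δθ = 5° follow the values quoted above, and Step 5 (linear interpolation) is omitted for brevity.

```python
import numpy as np

def detect_groove(gray: np.ndarray, n_sub: int = 4,
                  d_rho: float = 10.0, d_theta_deg: float = 5.0):
    """Simplified sketch of Steps 1-4: groove points (row, col) per sub-image."""
    T = 0.2 * gray.max()                         # Step 2: threshold = 20% of max grayscale
    groove_points = []
    # Step 1: divide the image into sub-images along its column direction.
    col_edges = np.linspace(0, gray.shape[1], n_sub + 1, dtype=int)
    for c0, c1 in zip(col_edges[:-1], col_edges[1:]):
        sub = gray[:, c0:c1]
        pts = []
        for col in range(sub.shape[1]):          # Step 2: longest dark run per column
            dark = np.append(sub[:, col] < T, False)
            best_len, best_mid, start = 0, None, None
            for row, is_dark in enumerate(dark):
                if is_dark and start is None:
                    start = row
                elif not is_dark and start is not None:
                    if row - start > best_len:
                        best_len, best_mid = row - start, (start + row - 1) // 2
                    start = None
            if best_mid is not None:
                pts.append((best_mid, col + c0))          # valid pixel point (row, col)
        if not pts:
            continue
        pts = np.array(pts, dtype=float)
        # Step 3: Hough transform over the valid points only (rho = x*cos + y*sin).
        thetas = np.deg2rad(np.arange(0.0, 180.0, d_theta_deg))
        rhos = pts[:, 1:2] * np.cos(thetas) + pts[:, 0:1] * np.sin(thetas)
        bins = np.round(rhos / d_rho).astype(int)
        offset = bins.min()
        acc = np.zeros((bins.max() - offset + 1, thetas.size), dtype=int)
        cols = np.broadcast_to(np.arange(thetas.size), bins.shape)
        np.add.at(acc, (bins - offset, cols), 1)
        # Step 4: keep the points voting for the fullest accumulator cell.
        best_rho, best_theta = np.unravel_index(acc.argmax(), acc.shape)
        groove_points.append(pts[bins[:, best_theta] - offset == best_rho])
    return groove_points                          # Step 5 (interpolation) omitted here

# Example on a synthetic image containing a dark, slightly tilted groove line.
img = np.full((400, 600), 220, dtype=np.uint8)
rows = (150 + 0.1 * np.arange(600)).astype(int)
img[rows, np.arange(600)] = 5
for part in detect_groove(img):
    print(part.shape)
```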
Figure 8 shows the flow of our processing algorithm. The final detection result is marked in
red in Figure 8. Our algorithm has been used to process over 10,000 test images (different materials such as
aluminum alloy, stainless steel, etc., different groove curves such as S-curve, straight line, polyline,
etc., and different surface roughness from 0.8 μm to 12.5 μm). The results show that the time cost of our
proposed algorithm is less than 5 ms per frame, and the detection error between the detection result and
the actual groove position is not more than five pixels. This indicates that our proposed narrow butt
joint detection method is suitable for a real-time and accurate path teaching process.
Since our proposed algorithm uses the Hough transform to detect the groove curve and the Hough
transform is not sensitive to noise [36], it will not find a scratch instead of the joint as long as the
length of the groove curve is longer than the length of the scratch.

Figure 8. The output of each step in the proposed narrow butt joint detection algorithm.

5. Calculation of 3D Position and Normal Vector Coordinates of the Groove

When the LSLS is on, we can only obtain the 2D pixel coordinates of the groove curve. The 3D
information of the groove is still unknown. In this section, we use the cross-line laser stripes to
calculate the 3D position and normal vector coordinates of the groove curve. Figure 9 shows the scene
when the CLLLS is on. The camera coordinate system {C} is established using Zhang's method [34].

Figure 9. The principle of the cross-line laser method.

According to the pinhole model of the camera [34], the 2 × 1 pixel coordinate p in the image
corresponds to any 3D point along a certain line. The equation of this line in camera coordinate
system {C} is determined by

$$P_C = z_C\, S(p) = z_C \begin{bmatrix} f_1(p) \\ f_2(p) \\ 1 \end{bmatrix}, \tag{1}$$

where P_C is a 3 × 1 vector representing any point on the line; z_C is the third component of P_C; and
f_1 and f_2 are the mapping functions from the pixel coordinate in the image to the 3D coordinate in
camera coordinate system {C}, which are determined by the intrinsic parameters of the camera [34].
The pixel coordinate p in the image is detected using the Hough transform algorithm [36] in this
paper, and the intrinsic parameters of the camera are calibrated by Zhang's method [34].
The light plane equation of each laser stripe in Figure 9 can be calibrated beforehand using Zou's
method [35]. Suppose that the light plane equations of the two laser stripes in camera coordinate
system {C} are

$$n_i^T X_C = c_i, \quad i = 1, 2, \tag{2}$$

where X_C represents the 3 × 1 coordinate of any point on the light plane in {C}, n_i is the 3 × 1 unit
normal vector of the light plane in {C}, and c_i is the directed distance between the optical center of
the camera (i.e., O_C in Figure 9) and the light plane.

Suppose that q_ij is the 2 × 1 pixel coordinate of the j-th point in the i-th laser stripe. Combining
Equations (1) and (2), the corresponding 3D coordinate Q_ij (a 3 × 1 vector) in camera coordinate
system {C} can be solved by

$$Q_{ij} = \frac{c_i}{n_i^T S(q_{ij})}\, S(q_{ij}). \tag{3}$$

If we move the workpiece at a suitable speed using our experiment platform, and the camera is
controlled to be positioned exactly above the narrow butt joint (which will be discussed in Section 6),
the 3D point cloud data of the workpiece surface can be obtained according to Equation (3). Suppose
that there are N 3D points and their coordinates in camera coordinate system {C} are G_1, G_2, ..., G_N
respectively. G_1, G_2, ..., G_N are all 3 × 1 vectors.
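Equations (1) and (3) amount to back-projecting a pixel into a viewing ray and intersecting that ray with a calibrated light plane. The sketch below is a minimal NumPy illustration assuming an ideal, distortion-free pinhole camera; the intrinsic parameters and the light plane values (n1, c1) are made-up placeholders, not calibration results.

```python
import numpy as np

# Assumed (made-up) intrinsic parameters: focal lengths and principal point in pixels.
fx, fy, cx, cy = 2400.0, 2400.0, 800.0, 600.0

def S(p):
    """Equation (1): direction [f1(p), f2(p), 1]^T of the viewing ray through
    pixel p = (u, v), for an ideal pinhole camera without lens distortion."""
    u, v = p
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def intersect_light_plane(p, n_i, c_i):
    """Equation (3): 3D point where the viewing ray of pixel p meets the laser
    light plane n_i^T X = c_i (n_i: unit normal, c_i: directed distance, in {C})."""
    s = S(p)
    return (c_i / (n_i @ s)) * s

# Example with an assumed light plane and one stripe pixel.
n1 = np.array([0.0, 0.6, 0.8])      # unit normal of laser plane 1 (assumed)
c1 = 25.0                           # directed distance to the optical centre, mm (assumed)
Q = intersect_light_plane((900.0, 650.0), n1, c1)
print(Q)                            # 3D stripe point in camera coordinates {C}
```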

When the LSLS is on, we can obtain the 2D groove curve in the image. If ψ is the 2 × 1 pixel
coordinate of any point in the groove curve, then according to Equation (1), the 3D coordinate Γ
(a 3 × 1 vector) in {C} corresponding to ψ is located on the line

$$\Gamma = t\, S(\psi), \tag{4}$$

where t is an unknown parameter. Our goal is to calculate the intersection point of the line
determined by Equation (4) and the surface represented by the 3D point cloud G_1, G_2, ..., G_N.

Supposing that the workpiece surface near the line determined by Equation (4) is approximately
a flat plane, its equation in camera coordinate system {C} is

$$n^T K_C = c, \tag{5}$$

where K_C represents the 3 × 1 coordinate of any point on the workpiece surface in {C}, n is the 3 × 1
unit normal vector of the workpiece surface near the line determined by Equation (4) in {C}, and c is
the directed distance between the optical center of the camera and the workpiece surface. n and c are
unknown parameters. The plane Equation (5) can be fitted using the points near the line determined by
Equation (4). Once the plane Equation (5) is obtained, we can calculate the relative position and pose
between the camera and the workpiece surface. Combining the calibrated transformation matrices
between {C} and {W}, we can finally obtain the relative position and pose between the welding torch
and the workpiece surface.

According to analytic geometry, we can calculate the distance between G_k and the line
determined by Equation (4) as follows:

$$d(G_k) = \left\| G_k - \frac{S^T(\psi)\, G_k}{S^T(\psi)\, S(\psi)}\, S(\psi) \right\|, \quad k = 1, 2, \ldots, N, \tag{6}$$

and set a weight w(G_k) for each point G_k. The weight w(G_k) decreases as d(G_k) increases, and once
d(G_k) is larger than a certain threshold, w(G_k) decreases to zero rapidly. For example, the weight
w(G_k) can be

$$w(G_k) = \exp\left\{ -\frac{[d(G_k)]^2}{2\sigma^2} \right\}, \tag{7}$$

where σ is a parameter which determines the correlation between G_k and ψ.
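A direct NumPy transcription of Equations (6) and (7) is given below; the value of σ is an assumed example, since the paper leaves its choice open.

```python
import numpy as np

def ray_distance(G, s):
    """Equation (6): distance from point G to the ray through the origin with
    direction s = S(psi), obtained by removing the projection of G onto s."""
    return np.linalg.norm(G - (s @ G) / (s @ s) * s)

def weight(G, s, sigma=2.0):
    """Equation (7): Gaussian weight that decays rapidly for distant points.
    sigma is an assumed correlation parameter."""
    d = ray_distance(G, s)
    return np.exp(-d**2 / (2.0 * sigma**2))

# Example: one cloud point and the viewing-ray direction of a groove pixel.
G_k = np.array([1.5, 0.4, 30.0])
s = np.array([0.05, 0.01, 1.0])
print(ray_distance(G_k, s), weight(G_k, s))
```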


In this paper, we solve the equation of the workpiece surface by solving the following
optimization problem:

$$\min_{n,c}\ g(n, c) = \min_{n,c}\ \frac{1}{N} \sum_{k=1}^{N} w(G_k)\left(n^T G_k - c\right)^2, \qquad \text{s.t. } \|n\| = 1, \tag{8}$$

where the length of n is constrained to 1 to prevent overfitting. Equation (8) minimizes the weighted
sum of the squared distances between G_k and the plane determined by Equation (5).

We use the Lagrange multiplier method to solve Equation (8) as follows:

$$h(n, c) = g(n, c) + \lambda\left(1 - n^T n\right) = \frac{1}{N} \sum_{k=1}^{N} w(G_k)\left(n^T G_k - c\right)^2 + \lambda\left(1 - n^T n\right), \tag{9}$$

where λ is the Lagrange multiplier.

Calculate the partial derivatives of Equation (9) and set them to 0 as follows:

$$\begin{cases} \dfrac{\partial h(n,c)}{\partial n} = 2\left[\dfrac{1}{N}\displaystyle\sum_{k=1}^{N} w(G_k)\left(G_k G_k^T n - c\, G_k\right) - \lambda n\right] = 0 \\[2ex] \dfrac{\partial h(n,c)}{\partial c} = \dfrac{2}{N}\displaystyle\sum_{k=1}^{N} w(G_k)\left(c - G_k^T n\right) = 0 \end{cases} \tag{10}$$

After simplifying Equation (10), there is

$$\begin{cases} \Theta\, n = \lambda\, n \\[1ex] c = n^T \dfrac{\displaystyle\sum_{k=1}^{N} w(G_k)\, G_k}{\displaystyle\sum_{k=1}^{N} w(G_k)} \end{cases} \tag{11}$$

where Θ is a 3 × 3 matrix defined as

$$\Theta = \frac{\displaystyle\sum_{k=1}^{N} \sum_{s=1}^{N} w(G_k)\, w(G_s)\left(G_k - G_s\right)\left(G_k - G_s\right)^T}{2N \displaystyle\sum_{k=1}^{N} w(G_k)}. \tag{12}$$

Combining Equations (8), (11) and (12), there is

$$g(n, c) = \lambda. \tag{13}$$

According to Equations (8)–(13), the optimization problem (8) is solved only when the Lagrange
multiplier λ is the minimum eigenvalue of the matrix Θ and the normal vector n is the eigenvector
corresponding to λ. Since Θ is a symmetric matrix, we can calculate its minimum eigenvalue λ and
the corresponding eigenvector n using the Jacobi eigenvalue algorithm. Once the normal vector n is
determined, the unknown parameter c can be solved by Equation (11). Therefore, we can obtain the
plane equation n^T K_C = c near the line determined by Equation (4).

Combining Equations (4) and (5), the intersection point Γ of the workpiece surface and the line
determined by Equation (4) can be calculated by

$$\Gamma = \frac{c}{n^T S(\psi)}\, S(\psi). \tag{14}$$

The 3D position Γ and normal vector n of the groove can be determined by Equations (14) and
(11) respectively. Thus, the 3D information of the narrow butt joint has now been obtained.
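In code, Equations (8)–(14) reduce to a weighted plane fit by eigen-decomposition followed by a ray–plane intersection. The sketch below is an illustrative NumPy version (using numpy.linalg.eigh instead of an explicit Jacobi iteration) and is not the authors' implementation; the weights w are assumed to have been computed beforehand with Equation (7), and the example point cloud is synthetic.

```python
import numpy as np

def fit_plane_and_intersect(G, w, s):
    """G: (N, 3) cloud points, w: (N,) weights from Eq. (7), s: ray direction S(psi).
    Returns the plane (n, c) of Eqs. (8)-(13) and the groove point of Eq. (14)."""
    w_sum = w.sum()
    G_bar = (w[:, None] * G).sum(axis=0) / w_sum      # weighted centroid
    diff = G - G_bar
    # Weighted scatter matrix: proportional to the Theta of Eq. (12) (it differs
    # only by a positive normalization factor), so it has the same eigenvectors.
    Theta = (w[:, None] * diff).T @ diff / w_sum
    eigvals, eigvecs = np.linalg.eigh(Theta)          # symmetric matrix -> eigh
    n = eigvecs[:, 0]                                 # eigenvector of smallest eigenvalue
    c = n @ G_bar                                     # Eq. (11): c = n^T weighted mean
    groove_point = (c / (n @ s)) * s                  # Eq. (14): ray-plane intersection
    return n, c, groove_point

# Example with a synthetic, roughly planar cloud and unit weights.
rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(200, 2))
z = 30.0 + 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 0.01 * rng.standard_normal(200)
G = np.column_stack([xy, z])
w = np.ones(len(G))
n, c, P = fit_plane_and_intersect(G, w, np.array([0.02, 0.01, 1.0]))
print(n, c, P)
```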

6. 3D Path Teaching Experiments and Discussions


Based on the research work in the previous sections, we carried out 3D path teaching
experiments using the platform shown in Figure 3. The workpiece with a 3D narrow butt joint used
in this paper is shown in Figure 10. It was made of stainless steel and the groove curve was the
intersection curve of two cylinders.

Figure 10. The workpiece sample used in this paper. (a) The intersection curve of two cylinders;
(b) The CAD model of the workpiece sample. The unit of the dimensions in the figure is millimeter (mm).

During the path teaching process, the workpiece moves along the yW axis at a uniform speed.
In the meantime, the camera generates the trigger signal and makes the LSLS and CLLLS switch
on alternately. The industrial computer calculates the 3D position and normal vector coordinates
of the groove based on the captured images. The deviation between the groove and the center of the
FOV is then corrected using a PI controller, whose output is the speed of the xW axis.
The proportional and integral gains of the PI controller were set to 100 and 0.01 respectively.
The frame rate of the camera was set to 30 fps. The speed of the yW axis was set to 2300 mm/min. In this
way, the visual sensor recognizes the groove position automatically and the path teaching process is
finished. The flow of the path teaching process is shown in the block diagram of Figure 11.

Figure 11. The block diagram representing the flow of the path teaching process.
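For illustration, the cross-seam correction in this loop can be sketched in a few lines of Python. The gains, frame rate, and yW speed come from the text, while the plant model, unit handling, and initial deviation are simplified assumptions rather than the authors' controller implementation.

```python
# Toy simulation of the PI-based cross-seam (xW-axis) correction.
KP, KI = 100.0, 0.01           # proportional and integral gains from the text
FRAME_RATE = 30.0              # fps
dt = 1.0 / FRAME_RATE          # assumed control period, s

integral = 0.0
x_position = 0.0               # mm, current xW position of the stage (toy plant)
groove_x = 1.2                 # mm, assumed lateral groove offset to be centred

for frame in range(60):
    deviation = groove_x - x_position          # groove offset from the FOV centre
    integral += deviation * dt
    x_speed = KP * deviation + KI * integral   # PI output: xW speed command, mm/min
    x_position += x_speed / 60.0 * dt          # integrate the commanded speed, mm

print(round(x_position, 3), "mm after", 60 * dt, "s")
```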

At any time t, suppose that P_C (a 3 × 1 vector) and n_C (a 3 × 1 vector) are the 3D position and normal
vector coordinates respectively in camera coordinate system {C} detected by the visual sensor, and s_x
and s_y are the displacements of the xW and yW axes respectively, which can be obtained from the servo
motors of the translational stage. We can calculate the corresponding 3D position P_W (a 3 × 1 vector)
and normal vector n_W (a 3 × 1 vector) in world coordinate system {W} at time t = 0 by

$$\begin{cases} P_W = R_{CW}\, P_C + T_{CW} + \begin{bmatrix} s_x \\ s_y \\ 0 \end{bmatrix} \\[2ex] n_W = R_{CW}\, n_C \end{cases} \tag{15}$$

where R_CW and T_CW are the 3 × 3 rotational and 3 × 1 translational transformation matrices from {C} to
{W}. R_CW and T_CW can be determined beforehand by the common hand-eye calibration method.
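Equation (15) is a rigid transformation plus the recorded stage displacements; a minimal NumPy sketch is shown below, with placeholder values standing in for the calibrated R_CW and T_CW.

```python
import numpy as np

def to_world(P_C, n_C, R_CW, T_CW, s_x, s_y):
    """Equation (15): map a detected groove point and its normal vector from the
    camera coordinate system {C} into the world coordinate system {W} at t = 0."""
    P_W = R_CW @ P_C + T_CW + np.array([s_x, s_y, 0.0])
    n_W = R_CW @ n_C
    return P_W, n_W

# Placeholder calibration values (illustrative only, not calibration results).
R_CW = np.eye(3)                         # rotation from {C} to {W}
T_CW = np.array([80.0, 0.0, -30.0])      # translation from {C} to {W}, mm
P_W, n_W = to_world(np.array([1.2, 0.6, 30.0]), np.array([0.0, 0.0, 1.0]),
                    R_CW, T_CW, s_x=15.0, s_y=240.0)
print(P_W, n_W)
```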
According T CW can be determined
Equation (15), the 3D beforehand by the common
position coordinate and normalhand-eye
vectorcalibration method.
of each point in the
According to Equation (15), the 3D position coordinate and normal vector
groove curve can be calculated in world coordinate system {W} after path teaching. Therefore, we of each point in
the groove
obtain curve
the 3D can be calculated
reconstruction resultsinofworld coordinate
the groove curvesystem
in world {W} after pathsystem
coordinate teaching.
{W}.Therefore,
Figure 12
we obtain the 3D reconstruction results of the groove curve in world coordinate
shows the calculation result in one of our experiments. Another 100 experiments were conducted system {W}. Figure 12 as
shows the calculation result in one of our experiments. Another 100 experiments
well (different materials such as stainless steel and aluminum alloy, different surface roughness from were conducted
as
0.8well
m(different
to 12.5 m,materials such as relative
and different stainlessposition
steel andand
aluminum alloy, different
pose between surface
the welding roughness
torch and the
workpiece). All the experiment results show that deviations between our detection results and the
theoretical CAD model are not more than 0.24 mm and 0.54. These errors may be caused by the
machining errors and deformations of the workpiece. In addition, we used a coordinate measuring
machine (CMM) with about 4 m accuracy to measure the position of the actual groove curve. The
position deviation between our detection result and the CMM result does not exceed 0.05 mm. It
Sensors 2017, 17, 1099 12 of 15

from 0.8 m to 12.5 m, and different relative position and pose between the welding torch and the
workpiece). All the experiment results show that deviations between our detection results and the
theoretical CAD model are not more than 0.24 mm and 0.54 . These errors may be caused by the
machining errors and deformations of the workpiece. In addition, we used a coordinate measuring
machine (CMM) with about 4 m accuracy to measure the position of the actual groove curve.
The position deviation between our detection result and the CMM result does not exceed 0.05 mm.
It indicates that our method is suitable for narrow butt joint detection when there are machining errors
and deformations.

Figure 12. The 3D reconstruction result of the groove curve in one of the experiments.
We also simulated the path teaching process when there are assembly errors, as shown in Figure 13.
At the beginning, the workpiece was placed on the translational stage and we performed a path
teaching process for the first time, as shown in Figure 13a. Then we lifted up one side of the workpiece
using a 5 mm thick plate, and rotated the workpiece around the zW axis, as shown in Figure 13b. Finally,
path teaching was performed for the second time after the movement.

(a) ( b)
(a) ( b)
Figure 13. The path teaching experiment scenes when there are assembly errors. (a) The scene before movement and the LSLS is on; (b) The scene after movement and the CLLLS is on.
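For illustration, the simulated assembly error can be written as a single rigid-body transform acting on the groove curve detected before movement. The sketch below composes a rotation about the zW axis with a 5 mm offset along zW; the rotation angle is an assumed value, and the pure vertical offset is only a simplified stand-in for the tilt actually produced by lifting one side of the workpiece with the plate.

```python
import numpy as np

def assembly_error_transform(theta_z_deg=3.0, lift_mm=5.0):
    """Simplified 4x4 transform approximating the simulated assembly error:
    a rotation about zW plus a 5 mm offset along zW (illustrative values)."""
    th = np.deg2rad(theta_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                 [np.sin(th),  np.cos(th), 0.0],
                 [0.0,         0.0,        1.0]]
    T[2, 3] = lift_mm
    return T

# applying it to the curve detected before movement (N x 3 array in {W}):
# curve_after_expected = curve_before @ T[:3, :3].T + T[:3, 3]
```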
The 3D reconstruction results of the groove curve are shown in Figure 14. The 3D groove curves before and after movement were recognized accurately. The height difference along zW axis was about 5 mm between these two curves, which is exactly equal to the thickness of the lifted plate. The experiment results indicate that our proposed path teaching method is suitable for the situations where there are assembly errors before welding.
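Assuming the reconstructed groove curves before and after movement are available as N×3 point arrays in {W} (the variable names below are ours), the height difference along zW can be checked with a few lines of code; in this experiment the value should be close to the 5 mm plate thickness.

```python
import numpy as np

def mean_height_difference(curve_before, curve_after):
    """Mean offset along the zW axis between two reconstructed groove curves (mm)."""
    return float(np.mean(curve_after[:, 2]) - np.mean(curve_before[:, 2]))

# dz = mean_height_difference(curve_before, curve_after)
# expected to be close to the 5 mm thickness of the lifted plate
```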
Figure 14. The 3D reconstruction results of the groove curve before movement and after movement. (a) Axonometric view of the reconstruction results; (b) Side view of the reconstruction results.

7. Conclusions
This paper proposes a vision-aided 3D path teaching method before narrow butt joint welding. Our designed visual sensor can capture images under different lighting conditions. When the LED surface light source is on, the grayscale of the narrow butt joint is quite different from that of the base metal. After image processing, the 2D pixel coordinates of any point in the groove curve can be obtained in less than 5 ms. When the cross-line laser light source is on, the laser stripe image can be captured by the visual sensor, and the 3D point cloud data of the workpiece surface can be obtained after calculation. We propose a synchronous acquisition method: the two light sources are triggered to switch on alternately and the camera is synchronized to capture images when each light source is on. Compared with traditional measurement methods using separate cameras, the proposed synchronous acquisition method uses only one camera, which eliminates the error source from calibration between two separate measurements. The different visual information can be fused to calculate the 3D position and normal vector coordinates of each point in the groove curve. 3D path teaching experiments were carried out to examine the applicability of the proposed method. Experimental results show that the image resolution can reach 12.5 μm at a 30 mm working distance and that the proposed method is suitable for 3D narrow butt joint recognition before welding, which can compensate for the machining errors, assembly errors, deformations, etc. of the workpiece. Our research can be applied to automatic path teaching for complex 3D components in the aerospace and precision instrument industries. Further research will focus on the development of our visual sensor (such as avoiding interference from tack welds, more accurate control algorithms during path teaching, and improved detection accuracy) and on its applications in different welding methods (such as GTAW, laser welding, and friction stir welding).
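The alternating acquisition scheme summarised above can be sketched as follows; the `set_light` and `grab_frame` calls are hypothetical placeholders standing in for the actual trigger and camera interfaces, which are not specified in this paper.

```python
def acquisition_loop(camera, lights, n_cycles):
    """Alternately switch on the LED surface light source and the cross-line
    laser source, grabbing one synchronized frame per illumination state."""
    frames = {"led": [], "laser": []}
    for _ in range(n_cycles):
        for source in ("led", "laser"):
            lights.set_light(source, on=True)             # hypothetical trigger call
            frames[source].append(camera.grab_frame())    # hypothetical camera call
            lights.set_light(source, on=False)
    return frames  # "led" frames yield 2D groove pixels; "laser" frames yield the 3D point cloud
```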
Acknowledgments: This work was supported by the National Natural Science Foundation of China under Grant
Numbers U1537205 and 51375257 and by Tsinghua University Initiative Scientific Research Program under Grant
Number 2014Z05093.
Author Contributions: Jinle Zeng designed the visual sensor, developed the image processing algorithms,
and wrote the paper; Baohua Chang and Dong Du supervised the overall work and provided writing instructions;
Guodong Peng, Shuhe Chang and Yuxiang Hong conducted the experiments; Li Wang collected the experimental
data; Jiguo Shan analyzed the data.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Wang, X.; Xue, L.; Yan, Y.; Gu, X. Welding robot collision-free path optimization. Appl. Sci. 2017, 7, 89. [CrossRef]
2. Zhang, Q.; Zhao, M. Minimum time path planning for robotic manipulator in drilling/spot welding tasks. J. Comput. Des. Eng. 2016, 3, 132–139. [CrossRef]
3. Ahmed, S.M.; Tan, Y.Z.; Lee, G.H.; Chew, C.M.; Pang, C.K. Object detection and motion planning for automated welding of tubular joints. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea, 9–14 October 2016; Volume 1, pp. 2610–2615.
4. Tang, Z.; Zhou, Q.; Qi, F.; Wang, J.X. Optimal transitional trajectory generation for automatic machines. Int. J. Comput. Sci. Eng. 2016, 12, 104–112. [CrossRef]
5. Tsai, M.J.; Lee, H.W.; Ann, N.J. Machine vision based path planning for a robotic golf club head welding system. Robot. Comput.-Integr. Manuf. 2011, 27, 843–849. [CrossRef]
6. Zeng, J.; Chang, B.; Du, D.; Hong, Y.; Zou, Y.; Chang, S. A visual weld edge recognition method based on light and shadow feature construction using directional lighting. J. Manuf. Process. 2016, 24, 19–30. [CrossRef]
7. Rossano, G.F.; Boca, R.; Nidamarthi, S.; Fuhlbrigge, T.A. Method and Apparatus for Robot Path Teaching. US Patent 15/099895, 15 April 2016.
8. Yaskawa Simplified Welding Robot Programming. Available online: https://www.motoman.com/hubfs/Kinetiq/KinetiqTeachingWhitePaper.pdf?hsCtaTracking=a7e996d4-b7d9--4f23--8abe-e6b8cb57b04b%7C31e4a34e-c435--4dec-a84a-7403f2578ffa (accessed on 29 March 2017).
9. The KUKA Robot Programming Language. Available online: https://drstienecker.com/tech-332/11-the-kuka-robot-programming-language (accessed on 29 March 2017).
10. Chen, H.P.; Fuhlbrigge, T.; Li, X.Z. A review of CAD-based robot path planning for spray painting. Ind. Robot 2009, 36, 45–50. [CrossRef]
11. ABB RobotStudio. Available online: http://new.abb.com/products/robotics/robotstudio (accessed on 29 March 2017).
12. Simulation and Offline Programming of Motoman Robot Systems. Available online: http://www.motoman.se/news-events/news/news-single-view/?tx_ttnews%5Btt_news%5D=1815&cHash=28a527fa4a41e7946b6d7b95a10182b3 (accessed on 29 March 2017).
13. KUKA.Sim Software. Available online: https://www.kuka.com/en-de/products/robot-systems/software/planning-project-engineering-service-safety/kuka_sim (accessed on 29 March 2017).
14. Le, J.; Zhang, H.; Xiao, Y. Circular fillet weld tracking in GMAW by robots based on rotating arc sensors. Int. J. Adv. Manuf. Technol. 2017, 88, 2705–2715. [CrossRef]
15. Gao, F.; Chen, Q.; Guo, L. Study on arc welding robot weld seam touch sensing location method for structural parts of hull. In Proceedings of the 2015 International Conference on Control, Automation and Information Sciences, Changshu, China, 29–31 October 2015; Volume 1, pp. 42–46.
16. Figueroa, J.F.; Sharma, S. Robotic seam tracking with ultrasonic sensors. Int. J. Robot. Autom. 1991, 6, 35–40.
17. Svenman, E.; Runnemalm, A. A complex response inductive method for improved gap measurement in laser welding. Int. J. Adv. Manuf. Technol. 2017, 88, 175–184. [CrossRef]
18. Zeng, J.; Chang, B.; Du, D.; Hong, Y.; Chang, S.; Zou, Y. A precise visual method for narrow butt detection in specular reflection workpiece welding. Sensors 2016, 16, 1480. [CrossRef] [PubMed]
19. Meta Vision System. Available online: http://www.meta-mvs.com (accessed on 29 March 2017).
20. Servo-Robot: Smart Laser Vision System for Smart Robots. Available online: http://www.servorobot.com (accessed on 29 March 2017).
21. Scout 6D Seam Tracking. Available online: http://www.roboticinnovations.co.za/sites/default/files/META-SCOUT%20Booklet%20doppelseitig.pdf (accessed on 29 March 2017).
22. Scansonic TH6D Optical Sensor. Available online: http://www.scansonic.de/en/products/th6d-optical-sensor (accessed on 29 March 2017).
23. Park, J.B.; Lee, S.H.; Lee, I.J. Precise 3D lug pose detection sensor for automatic robot welding using a structured-light vision system. Sensors 2009, 9, 7550–7565. [CrossRef] [PubMed]
24. Huang, W.; Kovacevic, R. A laser-based vision system for weld quality inspection. Sensors 2011, 11, 506–521. [CrossRef] [PubMed]
25. He, Y.; Xu, Y.; Chen, Y.; Chen, H.; Chen, S. Weld seam profile detection and feature point extraction for multi-pass route planning based on visual attention model. Robot. Comput.-Integr. Manuf. 2016, 37, 251–261. [CrossRef]
26. He, Y.; Zhou, H.; Wang, J.; Wu, D.; Chen, S. Weld seam profile extraction of T-joints based on orientation saliency for path planning and seam tracking. In Proceedings of the 2016 IEEE Workshop on Advanced Robotics and Its Social Impacts, Shanghai, China, 8–10 June 2016; Volume 1, pp. 110–115.
27. Alberto, F.V.; Rodrigo, G.A.; Eduardo, A.A.; Antonio, C.L.; Daniel, F.G.; Ruben, U.F.; Jimenez, M.M.; Sanchez, G.; Manuel, J. Low-cost system for weld tracking based on artificial vision. IEEE Trans. Ind. Appl. 2011, 47, 1159–1167.
28. Shen, H.; Lin, T.; Chen, S.; Li, L. Real-time seam tracking technology of welding robot with visual sensing. Int. J. Adv. Manuf. Technol. 2010, 59, 283–298. [CrossRef]
29. Chen, H.; Lei, F.; Xing, G.; Liu, W.; Sun, H. Visual servo control system for narrow butt seam. In Proceedings of the 25th Chinese Control and Decision Conference, Guiyang, China, 25–27 May 2013; Volume 1, pp. 3456–3461.
30. Ma, H.; Wei, S.; Sheng, Z.; Lin, T.; Chen, S. Robot welding seam tracking method based on passive vision for thin plate closed-gap butt welding. Int. J. Adv. Manuf. Technol. 2010, 48, 945–953. [CrossRef]
31. Huang, Y.; Xiao, Y.; Wang, P.; Li, M. A seam-tracking laser welding platform with 3D and 2D visual information fusion vision sensor system. Int. J. Adv. Manuf. Technol. 2013, 67, 415–426. [CrossRef]
32. Zheng, J.; Pan, J. Weld Joint Tracking Detection Equipment and Method. China Patent 201010238686.X, 16 January 2013.
33. Zheng, J.; Liu, C.; Pan, J. Research on Butt Weld Detection Technology Based on Circular Spot. In Proceedings of the 16th National Welding Conference of China, Zhenjiang, China, 10 October 2011.
34. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [CrossRef]
35. Zou, Y. Research on Visual Method for Weld Seam Recognition and Tracking Based on Multi-Feature Extraction and Information Fusion. Ph.D. Thesis, Tsinghua University, Beijing, China, October 2012.
36. Hough, P.V.C. Method and Means for Recognizing Complex Patterns. US Patent 3069654, 18 December 1962.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
