ABSTRACT

In the present scenario, bomb blasts are rampant all around the world. Bombs went off in buses and underground stations, killing many and leaving many injured. Bomb blasts cannot be predicted beforehand. This paper is about the technology that detects suicide bombers and concealed weapons through IMAGING FOR CONCEALED WEAPON DETECTION: the sensor improvements, how the imaging takes place, and the challenges involved. We also describe techniques for simultaneous noise suppression and object enhancement of video data and show some mathematical results.

The detection of weapons concealed underneath a person's clothing is very important to the improvement of the security of the general public as well as the safety of public assets like airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure.

INTRODUCTION:

Till now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives in places like airports, sensitive buildings, and famous constructions. But these manual screening procedures do not give satisfactory results, because they screen a person only when the person is near the screening machine, and they sometimes give false alarm indications. We therefore need a technology that detects the weapon by scanning alone. This can be achieved by imaging for concealed weapon detection.

The goal is the eventual deployment of automatic detection and recognition of concealed weapons. It is a technological challenge that requires innovative solutions in sensor technologies and image processing. The problem also presents challenges in the legal arena. A number of sensors based on different phenomenology, as well as image processing support, are being developed to observe objects underneath people's clothing.

IMAGING SENSORS

Imaging sensors developed for CWD applications can be classified by their portability, their proximity, and whether they use active or passive illumination.

1. INFRARED IMAGER:

Infrared imagers utilize the temperature distribution information of the target to form an image. Normally they are used for a variety of night-vision applications, such as viewing vehicles and people. The underlying theory is that the infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. As a result, infrared radiation can be used to show the image of a concealed weapon only when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted infrared radiation is spread over a larger clothing area, decreasing the ability to image a weapon.

2. PASSIVE MMW IMAGING SENSORS:
FIRST GENERATION:

Passive millimeter wave (MMW) sensors measure the apparent temperature through the energy that is emitted or reflected by sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is made possible by MMW sensors due to the low emissivity and high reflectivity of objects like metallic guns. In early 1995, MMW data were obtained by means of scans using a single detector that took up to 90 minutes to generate one image. Figure 1(a) shows a visual image of a person wearing a heavy sweater that conceals two guns made with metal and ceramics. The corresponding 94-GHz radiometric image in Figure 1(b) was obtained by scanning a single detector across the object plane using a mechanical scanner. The radiometric image clearly shows both firearms.

FIGURE 1: (a) visible and (b) MMW image of a person concealing two guns beneath a heavy sweater.

SECOND GENERATION:

Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Trex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. This system collects up to 30 frames/s of MMW data. Figure 2 shows the visible and second-generation MMW images of an individual hiding a gun underneath his jacket. It is clear from Figures 1(b) and 2(b) that the image quality of the camera is degraded.

FIGURE 2: (a) visual image and (b) second-generation MMW image of a person concealing a handgun beneath a jacket.

CWD THROUGH IMAGE FUSION:

By fusing passive MMW image data and its corresponding infrared (IR) or electro-optical (EO) image, more complete information can be obtained. The information can then be utilized to facilitate concealed weapon detection. Fusion of an IR image revealing a concealed weapon and its corresponding MMW image has been shown to facilitate extraction of the concealed weapon. This is illustrated in the example given in Figure 3: Figure 3(a) shows an image taken from a regular CCD camera, and Figure 3(b) shows a corresponding MMW image. If either one of these two images alone is presented to a human operator, it is difficult to recognize the weapon concealed underneath the rightmost person's clothing. If a fused image as shown in Figure 3(c) is presented, a human operator is able to respond with higher accuracy. This demonstrates the benefits of image fusion for the CWD application, which integrates complementary information from multiple types of sensors.

FIGURE 3: An example of image fusion for CWD. (a) Image 1: visual. (b) Image 2: MMW. (c) Fused image.

IMAGE PROCESSING ARCHITECTURE:

FIGURE 4: An image processing architecture overview for CWD.

An image processing architecture for CWD is shown in Figure 4. The input can be multi-sensor (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) data or only MMW data. In the latter case, the blocks showing registration and fusion can be removed from Figure 4. The output can take several forms. It can be as simple as a processed image/video sequence displayed on a screen; a cued display where potential concealed weapon types and locations are highlighted with associated confidence measures; a "yes," "no," or "maybe" indicator; or a combination of the above. The image processing procedures that have been investigated for CWD applications range from simple de-noising to automatic pattern recognition.

WAVELET APPROACHES FOR PREPROCESSING:

Before an image or video sequence is presented to a human observer for operator-assisted weapon detection or fed into an automatic weapon detection algorithm, it is desirable to preprocess the images or video data to maximize their exploitation. The preprocessing steps considered in this section include enhancement and filtering for the removal of shadows, wrinkles, and other artifacts. When more than one sensor is used, preprocessing must also include registration and fusion procedures.

1) IMAGE DENOISING & ENHANCEMENT THROUGH WAVELETS:

Many techniques have been developed to improve the quality of MMW images. In this section, we describe a technique for simultaneous noise suppression and object enhancement of passive MMW video data and show some mathematical results. De-noising of the video sequences can be achieved temporally or spatially. First, temporal de-noising is achieved by motion-compensated filtering, which estimates the motion trajectory of each pixel and then conducts a 1-D filtering along the trajectory. This reduces the blurring effect that occurs when temporal filtering is performed without regard to object motion between frames. The motion trajectory of a pixel can be estimated by various algorithms, such as optical flow methods, block-based methods, and Bayesian methods. If the motion in an image sequence is not abrupt, we can restrict the search for the motion trajectory to a small region in the subsequent frames. For additional de-noising and object enhancement, the technique employs a wavelet transform method based on a multiscale edge representation.
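As an illustration of the idea, not the authors' implementation, the sketch below estimates each pixel's motion trajectory by block matching over a small search region and then averages the pixel values along that trajectory. The tiny list-of-lists frames, the sum-of-absolute-differences matching criterion, and the block and search sizes are all assumptions made for the example.

```python
# Motion-compensated temporal de-noising sketch (assumed block-matching
# variant; the paper's exact estimator is not specified here).

def block_sad(f0, f1, y0, x0, y1, x1, r):
    # Sum of absolute differences between (2r+1)x(2r+1) blocks of two frames.
    return sum(abs(f0[y0 + dy][x0 + dx] - f1[y1 + dy][x1 + dx])
               for dy in range(-r, r + 1) for dx in range(-r, r + 1))

def motion_vector(f0, f1, y, x, r=1, search=1):
    # Displacement (dy, dx) into f1 minimizing SAD; the small search window
    # reflects the assumption that motion between frames is not abrupt.
    # (y, x) is assumed to be an interior pixel.
    h, w = len(f0), len(f0[0])
    best, best_cost = (0, 0), block_sad(f0, f1, y, x, y, x, r)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if r <= ny < h - r and r <= nx < w - r:
                cost = block_sad(f0, f1, y, x, ny, nx, r)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best

def temporal_denoise(frames, y, x):
    # 1-D filtering (here: a plain average) along the motion trajectory
    # of the pixel starting at (y, x) in frames[0].
    vals = [frames[0][y][x]]
    cy, cx = y, x
    for f0, f1 in zip(frames, frames[1:]):
        dy, dx = motion_vector(f0, f1, cy, cx)
        cy, cx = cy + dy, cx + dx
        vals.append(f1[cy][cx])
    return sum(vals) / len(vals)
```

Filtering along the trajectory, rather than at a fixed position, is what avoids smearing a moving object across the averaged frames.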

FIGURE 5: (a) original frame, (b) de-noised frame, (c) de-noised and enhanced frame by the wavelet approach.

In Figure 5(a), which shows a frame taken from the sample video sequence, the concealed gun does not show clearly because of noise and low contrast. Figure 5(b) shows the frame de-noised by motion-compensated filtering. The frame was then spatially de-noised and enhanced by the wavelet transform methods. Four decomposition levels were used, and edges in the fine scales were detected using the magnitudes and angles of the gradient of the multiscale edge representation. The threshold for de-noising was 15% of the maximum gradient at each scale. Figure 5(c) shows the final result: the contrast-enhanced and de-noised frame. Note that the image of the handgun on the chest of the subject is more apparent in the enhanced frame than in the original frame. However, spurious features such as glint are also enhanced; higher-level procedures such as pattern recognition have to be used to discard these undesirable features.

2) CLUTTER FILTERING:

Clutter filtering is used to remove unwanted details (shadows, wrinkles, imaging artifacts, etc.) that are not needed in the final image for human observation and can adversely affect the performance of the automatic recognition stage. This helps improve the recognition performance, whether operator-assisted or automatic. For this purpose, morphological filters have been employed. Examples of the use of morphological filtering for noise removal are provided in the complete CWD example given in Figure 6. A complete description of the example is given in a later section.

3) REGISTRATION OF MULTI-SENSOR IMAGES:

As indicated earlier, making use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is a precise alignment of images (i.e., image registration). Very little has been reported on the registration problem for the
CWD application. Here, we describe a registration approach for images taken at the same time from different but nearly collocated (adjacent and parallel) sensors, based on the maximization of mutual information (MMI) criterion. MMI states that two images are registered when their mutual information (MI) reaches its maximum value. This can be expressed mathematically as follows:

a* = arg max_a I( F(x̃), R(Ta(x̃)) )

where F and R are the images to be registered. F is referred to as the floating image, whose pixel coordinates (x̃) are to be mapped to new coordinates on the reference image R. The reference image R is to be re-sampled according to the positions defined by the new coordinates Ta(x̃), where T denotes the transformation model, and the dependence of T on its associated parameters a is indicated by the notation Ta. I is the MI similarity measure, calculated over the region of overlap of the two images through their joint histogram. The above criterion says that the two images F and R are registered through Ta* when a* globally optimizes the MI measure. A two-stage registration algorithm was developed for the registration of IR images and the corresponding first-generation MMW images. At the first stage, two human silhouette extraction algorithms were developed, followed by a binary correlation to coarsely register the two images.

FIGURE 6: A CWD EXAMPLE

The purpose was to provide an initial search point close to the final solution for the second stage of the registration algorithm, which is based on the MMI criterion. In this manner, any local optimizer can be employed to maximize the MI measure. One registration result obtained by this approach is illustrated through the example given in Figure 6.
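The MMI criterion above can be illustrated with a toy sketch, not the paper's algorithm: the transformation model Ta is restricted to integer translations a = (dy, dx), MI is computed from the joint histogram over the overlap region, and a* is found by exhaustive search. The small same-size images and the search range are assumptions for the example.

```python
# Toy MMI registration sketch: exhaustively search integer shifts and keep
# the one maximizing mutual information computed from the joint histogram.
from collections import Counter
from math import log2

def mutual_info(pairs):
    # MI from the joint histogram of co-occurring pixel values:
    # sum over (a, b) of p(a, b) * log2( p(a, b) / (p(a) * p(b)) ).
    n = len(pairs)
    joint = Counter(pairs)
    pf = Counter(a for a, _ in pairs)
    pr = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pf[a] / n) * (pr[b] / n)))
               for (a, b), c in joint.items())

def register(floating, reference, search=2):
    # Ta(x) = x + a; a* globally maximizes MI over the region of overlap.
    # Both images are assumed to have the same dimensions.
    h, w = len(floating), len(floating[0])
    best_a, best_mi = (0, 0), float('-inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pairs = [(floating[y][x], reference[y + dy][x + dx])
                     for y in range(h) for x in range(w)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            mi = mutual_info(pairs)
            if mi > best_mi:
                best_mi, best_a = mi, (dy, dx)
    return best_a
```

Because MI depends only on the joint statistics of the two images, the search still peaks at the correct shift even when the two sensors map the same object to completely different intensity values, which is exactly why MI suits multi-sensor (IR/MMW) registration.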
4) IMAGE DECOMPOSITION:

The most straightforward approach to image fusion is to take the average of the source images, but this can produce undesirable results such as a decrease in contrast. Many of the advanced image fusion methods involve multiresolution image decomposition based on the wavelet transform. First, an image pyramid is constructed for each source image by applying the wavelet transform to the source images. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing the best fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is choosing the maximum of the two corresponding transform values. This allows the integration of details into one image from two or more images. Finally, the composite image is obtained by taking an inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source images. This type of method has been used to fuse IR and MMW images for the CWD application [7].

The first fusion example for the CWD application is given in Figure 7. Two IR images taken from separate IR cameras from different viewing angles are considered in this case. The advantage of image fusion in this case is clear, since we can observe a complete gun shape only in the fused image. The second fusion example, fusion of IR and MMW images, is provided in Figure 6.

FIGURE 7: (a) and (b) are original IR images; (c) is the fused image.

AUTOMATIC WEAPON DETECTION:

After preprocessing, the images/video sequences can be displayed for operator-assisted weapon detection or fed into a weapon detection module for automated weapon detection. Toward this aim, several steps are required, including object extraction, shape description, and weapon recognition.
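The multiresolution fusion procedure described in the image decomposition section (decompose each source, select the larger-magnitude coefficient at each position, invert) can be sketched as follows. The one-level 1-D Haar transform and the tiny signals are simplifications chosen for brevity; they are not the transform or data used in the referenced work, which operates on 2-D images over several levels.

```python
# Maximum-selection wavelet fusion sketch using a one-level Haar transform.

def haar(signal):
    # One-level Haar decomposition: pairwise averages (approximation)
    # and pairwise half-differences (detail). Even-length input assumed.
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def inverse_haar(approx, detail):
    # Exact inverse of haar(): each (s, d) pair restores (s + d, s - d).
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]
    return out

def fuse(x, y):
    # Decompose both sources, keep the larger-magnitude coefficient at each
    # position (the simplest feature selection rule), then invert.
    ax, dx = haar(x)
    ay, dy = haar(y)
    fused_a = [max(p, q, key=abs) for p, q in zip(ax, ay)]
    fused_d = [max(p, q, key=abs) for p, q in zip(dx, dy)]
    return inverse_haar(fused_a, fused_d)
```

With `x = [8, 8, 0, 0]` and `y = [0, 0, 8, 8]`, each source carries a "feature" the other lacks, and the fused output retains both, whereas plain averaging would halve the contrast of each.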
SEGMENTATION FOR OBJECT EXTRACTION:

Object extraction is an important step towards automatic recognition of a weapon, regardless of whether or not the image fusion step is involved. It has been successfully used to extract the gun shape from the fused IR and MMW images; this could not be achieved using the original images alone. The segmented result from the fused IR and MMW image is shown in Figure 6.

CHALLENGES:

There are several challenges ahead. One critical issue is the challenge of performing detection at a distance with a high probability of detection and a low probability of false alarm. Yet another difficulty to be surmounted is forging portable multi-sensor instruments. Also, detection systems go hand in hand with the subsequent response by the operator, and system development should take into account the overall context of deployment.

CONCLUSION:

Imaging techniques based on a combination of sensor technologies and processing will potentially play a key role in addressing the concealed weapon detection problem. Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras alone cannot provide useful information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR, or MMW and EO cameras, are being developed. By integrating the complementary information from different sensors, a more effective CWD system is expected.

REFERENCES:

1. Article in IEEE Signal Processing Magazine, March 2005, pp. 52-61.
2. www.wikipedia.org
3. N. G. Paulter, "Guide to the technologies of concealed weapon imaging and detection," NIJ Guide 602-00, 2001.
4. www.imageprocessing.com