
ECE 510: Radar & Sonar Processing
Spring 2008

Sonar Processing

A MATLAB code written to apply signal processing to sonar data provided by
Dr. Lisa Zurk. The code produces a spectrogram image and beam-time-record
images, and applies the sonar equation to evaluate the source level SL, the
noise level NL, and the signal-to-noise ratio SNR. The code is for a passive
sonar system and assumes a uniform linear array (ULA).

Yousef Qassim
Qassim_Youssef@hotmail.com
6/5/2008
This report is not for distribution. For class work only. 6/5/2008, Yousef Qassim

Contents

Chapter 1: Project Statement

Chapter 2: Background

Section 2.1: Sonar

Section 2.2: Beamforming

Section 2.3: Delay-and-Sum Beamforming

Section 2.4: Tapered Beamforming

Chapter 3: Results and Discussion

Section 3.1: Lofargram Image for a Single Hydrophone

Section 3.2: Calculation Using the Passive Sonar Equation

Section 3.3: Beam-Time-Record Images at 100, 300, and 400 Hz Using ULA and
Chebyshev Tapered Beamforming

Section 3.4: Discussion of BTR Images and Sonar Equation Calculations

Section 3.5: Image Showing the Actual Array Elements in Space and the Actual
Spacing between the Elements

Section 3.6: BTR Images Using the Actual Element Positions

Section 3.7: Comparison of ULA BTR Images and Actual-Element-Position BTR Images

Chapter 4: Conclusions

Chapter 5: MATLAB Code

Chapter 6: References


1) Project Statement:
The project is a MATLAB code written to perform sonar processing on data
provided by Dr. Lisa Zurk. The data were recorded by an array fielded as part
of the ONR Shallow Water Array Performance (SWAP) program: a 32-element
bottom-mounted hydrophone array deployed off the Florida coast, sampled at
1 kHz, representing a passive sonar system. The files array_shape.mat and
SWAP_sonar_data.mat contain the locations of the array hydrophones and the
recorded data, respectively. The code generates spectrogram images and
beam-time-record (BTR) images and applies the passive sonar equation to the
data, assuming a linear array. Finally, we provide a discussion of the results
obtained by the MATLAB code.


2) Background:

2.1) Sonar:
SONAR is short for Sound Navigation and Ranging. Sonar is used
for communication, navigation, and detection of targets by sending sound
waves and/or listening to the returning echoes. There are two kinds of
sonar: active sonar and passive sonar.

Active sonar consists of a transmitter and a receiver. When the two are in the
same place the system is called monostatic, which is the case for most sonar
systems; other types are bistatic and multistatic sonar systems. Active
sonar generates pulses or pings of sound and then listens for the echoes and
reflections, see Fig1. Using several hydrophones it can measure the
distance to an object, the Doppler shift, etc. Active sonar performance can be
measured by two equations: the first is used when performance is limited by
ambient noise (noise-limited), and the other when it is reverberation-limited:

SL - 2TL + TS - (NL - DI) = DT (1)
SL - 2TL + TS = RL + DT (2)


SL is the source level, TL is the transmission loss, TS is the target strength,
NL is the noise level, DI is the directivity index, RL is the reverberation level,
and DT is the detection threshold, which can be substituted by the
signal-to-noise ratio SNR. All values are in dB.


Fig1: Active Sonar System [1]

Passive sonar listens to sounds in the water without transmitting sound
waves; it is usually used in military applications. Passive sonar uses
different techniques to identify targets, usually implemented in a computer
system that matches the received sounds against large databases. However,
the sonar operator usually performs the final classification. In passive
sonar systems the hydrophones are usually towed by ships or submarines, so
the system is mostly limited by the noise generated by the vehicle in addition
to the noise of the water environment. The performance of passive sonar is
defined by the following equation; notice that one-way propagation is
involved:

SL - TL + TS - (NL - DI) = DT (3)

Sonar is used in many applications ranging from military to civil to
scientific. Military applications include anti-submarine warfare,
torpedoes, mines, submarines, aircraft, and mine countermeasures. Civil
applications include fisheries, echo sounding, and net location. Scientific
applications include biomass estimation, wave measurement, water velocity
measurement, sub-bottom profiling, and many others.


2.2) Beamforming:

Beamforming is one of the important aspects of signal processing.

In many applications, such as communications, radar, and sonar, information
has to be extracted and detected from an array of sensors. The information of
interest may be the existence of a signal, as in sonar, or the content of the
signal, as in wireless communication systems. Beamforming is also commonly
called steering, and it can be done electronically or mechanically.


Fig2: Visualization of a beamformer

The beamforming operation acts as a spatial and temporal filter: the weighting
process reinforces signals from desired directions and suppresses the other
signals. This allows us to linearly combine the signals from the sensors with
certain weights and reconstruct the signal arriving from a specific angle.

In active systems the beamformer controls the phase and amplitude of each
transmitted pulse or wave to produce a pattern with constructive
and destructive interference in the wavefront. At the receiver, signals from
the different sensors are combined to form the expected pattern. In an
active-system receiver and in passive sonar, the beamformer sums the signals
from each sensor after delaying them by slightly different times. This is done
so that each received signal reaches the output at the same time, making the
total received signal one strong signal. The signal from each sensor can be
amplified by a different weight, and different weighting patterns can be used
to construct the desired sensitivity patterns; an example is the Dolph-Chebyshev weighting


pattern. A mainlobe, sidelobes, and nulls are produced, and they can be
controlled, which gives the advantage of ignoring noise or jammers from a
specific direction while listening for signals in other directions. Fig3 shows
a beampattern with a mainlobe, sidelobes, and nulls.


Fig3: Beamformer pattern with mainlobe, sidelobes and nulls [6].

Beamforming techniques can be classified into two major categories:
conventional beamformers and adaptive beamformers. Conventional
beamformers use a set of weights and time delays, usually fixed, to
combine the signals from the sensors in the array. These beamformers
primarily use only the locations of the sensors in the medium (air or water)
and the directions of the desired signals. Conventional beamforming is also
known as delay-and-sum beamforming. Adaptive beamformers, on the other hand,
adapt the weights to optimize the spatial response of the resulting beamformer
based on a particular criterion, typically to improve the rejection of
unwanted signals from directions other than the desired ones. Examples of
adaptive beamforming are the null-steering beamformer and the
frequency-domain beamformer.


Sonar arrays in active and passive systems can be classified by dimension.
One-dimensional arrays, also called linear arrays, are usually towed
behind ships or submarines and used in multi-element passive systems or in
single- or multi-element side-scan sonar. Two-dimensional arrays, also
known as planar arrays, are common in both active and passive systems.
Three-dimensional arrays, such as cylindrical and spherical arrays, are used
in the sonar domes of modern submarines and ships.


Fig4: Linear array sensors [6]

2.3) Delay-and-Sum Beamforming:


The delay-and-sum beamformer uses the fact that the distance from each
sensor in the array to the source is different. This means that the signal at
each sensor is a shifted replica of the signal at the other sensors. Assuming
the source is in the far field, its wavefront appears planar to the array. The
time delay a signal takes to reach each sensor is equal to the travel time of
the plane wave arriving at angle θ from a reference sensor. This delay can
be easily computed using Equation 4; see Fig5 for details.
τ = d sin(θ) / c (4)

where d is the distance from the reference sensor, θ is the arrival angle, and c is the propagation speed of the plane wave.



Fig5: Wavefronts arrive at the different sensors with time delays equal to the travel time of the plane wave [3]

The delay-and-sum beamformer adds a time delay to the received signal
from each sensor that is equal and opposite to the delay caused by the
travel time, so it can determine the signal's origin. Summing these in-phase
signals produces constructive interference that amplifies the
result by the number of sensors in the array. The time delay to add can be
found by iteratively testing delays for all possible
directions. Fig6 shows a delay-and-sum beamformer scheme.


Fig6: Delay-and-sum beamformer [4].
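
As a minimal sketch of the idea (narrowband case; the values and the random data matrix X below are illustrative only, not taken from the SWAP data):

% Minimal narrowband delay-and-sum sketch (illustrative values)
M = 32;                             % number of sensors
d = 1.5;                            % element spacing (m)
c = 1500;                           % propagation speed (m/s)
f0 = 300;                           % frequency of interest (Hz)
theta = 30*pi/180;                  % steering angle (rad)
m = (0:M-1).';                      % element indices
tau = m*d*sin(theta)/c;             % per-element delays from Equation (4)
w = exp(-1j*2*pi*f0*tau)/sqrt(M);   % steering (weight) vector
X = randn(M,100) + 1j*randn(M,100); % hypothetical M x T snapshot matrix
y = w'*X;                           % output (1 x T): in-phase sum toward theta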

2.4) Tapered Beamforming:


When the desired signal exists against only the sensors' thermal noise, a
spatial filter can do a nearly perfect job. But in many cases the desired
signal exists beside other undesired signals, and in addition interference may
occur, where some signals propagate at the same frequency as our system. All
of this limits the ability to extract the desired signal. These problems


could be resolved by using adaptive beamforming. However, they can also be
solved in a non-adaptive manner using tapered beamforming.

The delay-and-sum beamformer uses weights that differ in phase while their
amplitudes remain the same, resulting in high sidelobe levels around the
mainlobe; this lets interference come through the sidelobes and degrade the
output of the beamformer. Tapered beamforming helps reduce the sidelobe
levels and achieve the desired beampattern. It uses a conventional window
such as the Dolph-Chebyshev filter to obtain the desired beampattern design.
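
As a brief sketch of the effect (the 30 dB sidelobe level is an illustrative choice; chebwin is the same MATLAB Dolph-Chebyshev window used in Chapter 5):

% Compare uniform (delay-and-sum) and Chebyshev-tapered beampatterns
M = 32;                      % number of elements
w_rect = ones(M,1)/M;        % uniform weights (delay-and-sum)
w_cheb = chebwin(M,30);      % Dolph-Chebyshev taper, 30 dB sidelobes
w_cheb = w_cheb/sum(w_cheb); % normalize mainlobe gain to unity
u = linspace(-1,1,1024);     % u = sin(phi)
V = exp(1j*pi*(0:M-1).'*u);  % ULA response, half-wavelength spacing
figure
plot(u,20*log10(abs(w_rect'*V)),u,20*log10(abs(w_cheb'*V)))
xlabel('sin(\phi)'); ylabel('Beampattern (dB)');
legend('Uniform','Chebyshev taper');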

3) Results and Discussion:


In this section I provide some comments and discussion on my results. As
mentioned before, the MATLAB code was written to process the sonar data
and produce several signal processing results: the lofargram image of one
sensor (hydrophone), the BTR images, and the results of applying the passive
sonar equation to the data. In this section I also discuss some of the
functions used in the MATLAB code.

3.1) Lofargram Image for a Single Hydrophone:


Using the spectrogram function we produce the lofargram for a single
phone, the 10th phone in this case. The spectrogram was produced
using 4 s snapshots, 50% overlap, a 4096-point FFT, and a Hamming
window. The spectrogram function computes the short-time Fourier transform
and returns the spectrogram of the input signal vector; it plots the power
spectral density of the segments of the input vector using the surf function.
The input to my spectrogram call was the data from the 10th phone,
using a window of 4000 samples (4 s snapshots at the 1 kHz sampling
frequency), a 2000-sample overlap, a 4096-point FFT, and a sampling
frequency of 1 kHz. Fig7 shows the lofargram of the 10th sensor in the array
of 32 sensors.
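
The core call, essentially as it appears in Chapter 5 (Phone10 holds the 10th phone's samples):

% 4 s window (4000 samples), 50% overlap, 4096-point FFT, fs = 1 kHz
[s,f,t,p] = spectrogram(Phone10,4000,2000,4096,1000);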


Fig7: Lofargram of the 10th hydrophone

We can notice in this picture that the intensity of the signals picked up at
the 10th phone is at its highest levels between 0 and 300 s and starts fading
after that, which means that, regardless of how many targets are present,
they are moving away as time increases. The blue values represent the
region where no signals could be picked up because they lie outside the
range of the hydrophone linear array.

3.2) Calculation Using the Passive Sonar Equation:


The passive sonar system is represented by equation (3), with the directivity
index DI replaced by the array gain AG and DT by the SNR:

SL - TL + TS - (NL - AG) = SNR (3)

Assume the transmission loss TL = 75 dB. The array gain AG = 15 dB is
calculated as 10*log10(M), where M is the number of phones, here M = 32. The
source level SL is calculated from the loudest received echo level added to
the transmission loss, which gives SL = 195.7 dB. The noise level is


calculated by adding the echo level estimated from the color bar of Fig7 at a
quiet place (= 60 dB) to the array gain AG, which results in NL = 75 dB.
Finally, SNR = 120.7 dB, found by taking the difference between
SL and NL.
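
In compact form, using the numbers computed in Chapter 5:

EL1 = 120.7 dB (loudest echo level), TL = 75 dB (assumed)
SL = EL1 + TL = 195.7 dB
AG = 10*log10(32) = 15 dB, EL2 = 60 dB (quiet region of Fig7)
NL = EL2 + AG = 75 dB
SNR = SL - NL = 120.7 dB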

3.3) Beam-Time-Record Images at 100, 300, and 400 Hz Using ULA and Chebyshev Tapered Beamforming:
In order to extract the desired signal from the linear array we have to use a
beamforming vector, commonly known as a steering vector. In my code I used
conventional beamforming, namely Chebyshev tapered beamforming, which
is a modified version of delay-and-sum beamforming. The tapered
beamforming allows us to control the width of the mainlobe and lower the
level of the sidelobes, which gives better performance when extracting the
desired signal coming from a certain angle in the presence of noise and
jammer signals. The main scheme of the code was obtained from D. G.
Manolakis's book Statistical and Adaptive Signal Processing [6] and then
modified to obtain the Chebyshev tapered matched filter. The beamformer was
built assuming a uniform linear array (ULA) and a source transmitting a
plane wavefront, i.e., a source in the far field. The beam angle convention
has φ = 0° at the positive y-direction and φ = 90° at the positive
x-direction. After that, I compute the spectrogram for all phones at the
frequency bins 100, 300, and 400 Hz. Then I sort each frequency bin into its
corresponding matrix, named X100, X300, and X400, resulting in a matrix of
32 elements x 449 snapshots for each frequency bin. The beam-time-record is
obtained by multiplying the Chebyshev tapered beamformer by each of these
matrices. Fig8, Fig9, and Fig10 show the BTR images for 100 Hz, 300 Hz, and
400 Hz respectively.
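
For reference, for one beam angle φ_k the steering (matched-filter) vector built in the Chapter 5 code is

u_s = (d/λ) sin(φ_k)
c_mf(m) = exp(-j 2π u_s m) / √M,  m = 0, ..., M-1
c_tbf(:,k) = tv .* c_mf'

where tv is the length-M Chebyshev taper returned by chebwin.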

3.4) Discussion of BTR Images and Sonar Equation Calculations:


Looking at the figures (Fig8, Fig9, and Fig10), we can see that strong
signals (sources) exist at the same bearing angles (locations) in all three.
We can also notice one strong source at bearing angle 90° that was

picked up by the hydrophone linear array, beside many other weak sources.
We cannot tell the exact number of sources in these figures, but we know
that these data were recorded off the coast of Florida, so this gives us a
hint that many ships may be present. Going back to the bearing angle 90° where
the strongest source exists, we can conclude that the source starts at the
endfire of the linear array, because the intensity of the source is highest
there. As time progresses, the source intensity decreases,
leading to the simple conclusion that the source is moving away from the
linear array. Because of the cone-angle ambiguity of the uniform linear
array, we cannot tell whether the source is moving to the right or to the left
of the ULA.

Another observation should be discussed here: the beamforming resolution
improves as the frequency increases, and the better the resolution, the
better the ULA distinguishes between closely spaced sources. The array
resolution depends on the wavelength (c/f) and on the spacing between the
array elements (d): the beamwidth is inversely proportional to the ratio
d/wavelength, which results in the different resolutions seen in the
figures. In my code the spacing was fixed at d = 1.5 m, and the wavelength
changes depending on the frequency bin being computed.
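
As a rough check, the mainlobe beamwidth of a ULA is approximately λ/(M·d) radians. With M = 32 and d = 1.5 m (an aperture of about 48 m): at 100 Hz, λ = 15 m, giving roughly 0.31 rad ≈ 18°; at 300 Hz, λ = 5 m, giving roughly 6°; and at 400 Hz, λ = 3.75 m, giving roughly 4.5°, consistent with the progressively sharper traces in Fig8 through Fig10.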


Fig8: BTR image at 100 Hz



Fig9: BTR image at 300 Hz


Fig10: BTR image at 400 Hz


I calculated the SL, NL, and SNR for each figure (Fig8, Fig9, and Fig10). The
source level was computed by finding the loudest EL and adding TL, giving
SL = 183.2, 174.4, and 172.7 dB for 100, 300, and 400 Hz respectively. The
noise level was computed by taking the mean value of the EL over a quiet area
and adding AG to it: NL = 54.5, 41.4, and 44.3 dB. Finally, the SNR was
computed as the difference between SL and NL: SNR = 125.7, 133.0, and 128.4 dB.

3.5) Image Showing the Actual Array Elements in Space and the
Actual Spacing between the Elements:
This part of the code shows the actual array element positions in a
plan view and reflects the linear distances between the elements. In Fig11
the red line shows the actual positions of the elements and the blue line
shows the actual spacing between the elements.
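
The straight-line spacing between adjacent elements is the ordinary Euclidean distance; for i = 1, ..., 31 the Chapter 5 code computes

distance(i) = sqrt((x(i+1) - x(i))^2 + (y(i+1) - y(i))^2)

and the blue line is built by accumulating these distances along the x-axis.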


Fig11: The actual array element positions and the actual spacing between them



3.6) BTR Images Using the Actual Element Positions:


Using the actual array element positions and assuming the source is in the
far field at R = 20 km, I recomputed the BTR images. First I
generate source points at different receiving angles, where Xs = R*sin(φ),
Ys = R*cos(φ), and φ is a vector of different angles. Then I compute the
distances between each source point (with a specific receiving angle) and
each element in the array. After that, I use these distances to recalculate
the Chebyshev tapered beamformer. Finally, by multiplying the Chebyshev taper
by the matrices corresponding to the different frequency bins (100, 300, and
400 Hz), the BTR results were regenerated. Fig12, Fig13, and Fig14 show the
BTR images using the actual array element positions, assuming the source is
in the far field at R = 20 km. A brief discussion is provided in
section 3.7.
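
For each beam angle φ_k and element m, the Chapter 5 code builds the steering phase from the actual source-to-element distance:

dis(k,m) = sqrt((x_s(k) - x(m))^2 + (y_s(k) - y(m))^2)
c_mf(m) = exp(-j 2π (sin(φ_k)/λ) · dis(k,m)) / √M
c_tbf(:,k) = tv .* c_mf'

where tv is the Chebyshev taper, as before.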


Fig12: BTR image using the actual array element positions at 100 Hz.




Fig13: BTR image using the actual array element positions at 300 Hz.


Fig14: BTR image using the actual array element positions at 400 Hz.


3.7) Comparison of ULA BTR Images and Actual-Element-Position BTR Images:
Comparing the results obtained in section 3.3 and section 3.6, we can
notice that the results are broadly alike. In both cases the linear array
picks up (receives) a strong source at bearing angle φ = 90° that moves away
as time progresses, and we can also notice weak sources that may represent
other objects (ships, boats, etc.). The main difference I could notice in
these figures is that the resolution at 100 and 300 Hz is better in the second
case, using the actual array element positions, while the resolution at
400 Hz is the same in both cases and the results match closely. I believe the
difference in resolution between the two cases at 100 and 300 Hz is related
to the fact that we did not take into consideration the distance between the
source and the linear array in the first case, while we did in the second
one. The second scheme, taking the distance between the source and each
element in the array and using these distances to create a beamformer, seems
more consistent than the scheme used in the first case. But usually in passive
sonar the only information we have is the array element positions, so it is
impossible to calculate the distances between the target source and the
array elements.

4) Conclusions:
- Sonar systems are mainly divided into active and passive systems: an
active system consists of a transmitter and a receiver, while a passive
system consists of a receiver only.
- Beamforming is broadly divided into two categories, conventional and
adaptive. Delay-and-sum is an example of conventional beamforming. In this
project we used tapered beamforming, which is considered conventional
because it is originally based on the delay-and-sum beamformer.


- A beampattern usually consists of a mainlobe, sidelobes, and nulls.
Tapered beamforming is used to lower the sidelobe levels, which results
in better performance.
- The aperture of a ULA is the distance between the first and last
elements. The larger the aperture, the finer the resolution of the array.
The resolution depends on the frequency being used and hence on the
wavelength; the beamwidth is inversely proportional to the ratio
d/wavelength.
- The source level for ships is between 170 and 185 dB.
- Passive sonar usually does not have any information other than the array
element positions.

5) MATLAB Code:
The MATLAB code used to produce these results is provided here.
%% ------------------------------------------------------------------------
%% SPRING 2008: 6/5/2008
%% ECE 510: Radar and Sonar
%% Project: MATLAB code to process the sonar data and produce outputs
%% Dr. Lisa Zurk
%% ------------------------------------------------------------------------
%% Yousef Qassim
%% ID:989800497
%% qassim_youssef@hotmail.com
%% ------------------------------------------------------------------------
%% Data preparation and parameters setting
%% ------------------------------------------------------------------------
clear all;
close all;
load 'SWAP_sonar_data.mat'; % load data
load 'array_shape.mat'; % load data
fs=1e3; % sampling frequency
Ts=1/fs; % sampling time
M=32; % number of hydrophone
%% ------------------------------------------------------------------------
%% 1)Image showing a single phone lofargram as a function of frequency and time
%% ------------------------------------------------------------------------
% Produce Spectrogram with 4 sec snapshots and 50% overlapping and 4096 FFT
% points for any hydrophone in the array

% Cast the samples from single to double; this does not change the data
% values, but the double representation requires more memory
Csamples=double(samples); % Casted samples
SnapshotTime=4; % Taking Snapshot every 4 sec
Phone10=Csamples(:,10); % Select the 10th phone


WindowSize=SnapshotTime*fs; % Window size is equal to 4000 sample
% # of overlapping segments (2000 segment) to produce 50% overlapping
OLS=WindowSize/2;
nfft=4096; % Number of FFT points

% Spectrogram of the 10th phone


[s,f,t,p]= spectrogram(Phone10,WindowSize,OLS,nfft,fs);
s=s./(nfft/2);% normalization of spectrogram data
p=p./(nfft/2);
10*log10(abs(max(max(p)))) % display the peak power level in dB (no semicolon, so it prints)
figure
surf(f,t,10*log10(abs(p')),'EdgeColor','none'); % Produce the lofargram
% Show the result in the XY plan
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Frequency(Hz)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('Lofargram (Spectrogram) of 10th Hydrophone','FontWeight','bold');
%% ------------------------------------------------------------------------
%% 2)Calculate the estimated source level using the sonar equation
%% ------------------------------------------------------------------------
TL=75; % TL transmission loss in dB
AG=10*log10(M) % Array gain (=15.05dB)
EL1=20*log10(abs(max(max(s)))); % Strong echo level (=120.78dB)
EL2=60; % Quiet echo level estimated from the colorbar of the spectrogram (=60dB)
SL=EL1+TL % Source level (=195.78dB)
NL=EL2+AG % Noise level (=75.05dB)
SNR=SL-NL % SNR (=120.73dB)
%% ------------------------------------------------------------------------
%% 3)BTR (Beam-Time-Record) Images @ 100, 300, and 400 Hz for ULA
%% ------------------------------------------------------------------------
% Matrix of zero to save the values of spectrogram for all phones at 100 Hz
X100=zeros(32,449);
% Matrix of zero to save the values of spectrogram for all phones at 300 Hz
X300=zeros(32,449);
% Matrix of zero to save the values of spectrogram for all phones at 400 Hz
X400=zeros(32,449);

% Loop to compute the spectrogram for all hydrophones


for i=1:32 % Loop with the number of phones
% Saving the spectrogram values only at 100, 300, and 400Hz frequencies
[s,f,t,p]= spectrogram(Csamples(:,i),WindowSize,OLS,[100,300,400],fs);
X100(i,:)=s(1,:); % Values of spectrogram for all phones at 100 Hz
X300(i,:)=s(2,:); % Values of spectrogram for all phones at 300 Hz
X400(i,:)=s(3,:); % Values of spectrogram for all phones at 400 Hz
end

X100=X100./(nfft/2); % normalization of spectrogram data


X300=X300./(nfft/2); % normalization of spectrogram data
X400=X400./(nfft/2); % normalization of spectrogram data

phi=0:180/20:360; % Vector of phi angles between 0 & 360 degrees


Nbeams=length(phi); % Length of vector phi
c_mf=zeros(32,41); % The matched filter array for the 32 phones
c_tbf=zeros(32,41);% The Chebyshev-tapered matched filter array
r=100; % Chebyshev sidelobe attenuation (dB)


% Chebyshev taper vector of 32 phones with r dB sidelobe level
tv = chebwin(M,r);
c=1500; % Speed of sound 1500 m/s
d=1.5; % Spacing between elements
wl=[c/f(1) c/f(2) c/f(3)]; % Wavelength at 100, 300, and 400 Hz

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams
u_s=(d/wl(1))*sin(phi(k)*pi/180);
% Match filter (delay-and-sum beamformer)
c_mf=exp(-j*2*pi*u_s*(0:(M-1)))./sqrt(M);
% Tapered beamformer
c_tbf(:,k)=tv.*c_mf';
end

% Output of multiplication operation between the data @ 100 Hz and the


% match filter
Y100=c_tbf'*X100;
% BTR image @ 100 Hz
figure
surf(phi,t,20*log10(abs(Y100')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=100 Hz','FontWeight','bold');

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams
u_s=(d/wl(2))*sin(phi(k)*pi/180);
c_mf=exp(-j*2*pi*u_s*(0:(M-1)))./sqrt(M);
c_tbf(:,k)=tv.*c_mf';
end

% Output of multiplication operation between the data @ 300 Hz and the


% match filter
Y300=c_tbf'*X300;
% BTR image @ 300 Hz
figure
surf(phi,t,20*log10(abs(Y300')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=300 Hz','FontWeight','bold');

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams
u_s=(d/wl(3))*sin(phi(k)*pi/180);
c_mf=exp(-j*2*pi*u_s*(0:(M-1)))./sqrt(M);
c_tbf(:,k)=tv.*c_mf';
end

% Output of multiplication operation between the data @ 400 Hz and the


% match filter
Y400=c_tbf'*X400;


% BTR image @ 400 Hz
figure
surf(phi,t,20*log10(abs(Y400')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=400 Hz','FontWeight','bold');
%% ------------------------------------------------------------------------
%% 4)Discussion of the images including how many sources are present and
%% where they are relative to the array. Calculation of the sonar equation
%% including array gain and comparison to values in the plot
%% ------------------------------------------------------------------------
% There is one target with the strongest received signal; it starts at
% the endfire of the sensor array and then moves to the right or the
% left of the array. We cannot distinguish whether the target moves to
% the right or the left because a linear array does not have the
% ability to resolve this (cone-angle ambiguity). We can also see
% several other weak signals picked up from other targets, but we
% cannot tell exactly how many there are.

% SL, NL, and SNR calculation for the results at different frequencies
SL100=20*log10(abs(max(max(Y100))))+TL %SL @ 100 Hz(=183.2dB)
NL100=20*log10(abs(mean(min(Y100(10:15,350:400)))))+AG%NL @ 100 Hz(=54.5dB)
SNR100=SL100-NL100 %SNR @ 100 Hz(=125.7dB)

SL300=20*log10(abs(max(max(Y300))))+TL % SL @ 300 Hz(=174.4dB)


NL300=20*log10(abs(mean(min(Y300(10:15,350:400)))))+AG%NL @ 300 Hz(=41.4dB)
SNR300=SL300-NL300 % SNR @ 300 Hz(=133.0dB)

SL400=20*log10(abs(max(max(Y400))))+TL % SL @ 400 Hz(=172.7dB)


NL400=20*log10(abs(mean(mean(Y400(10:15,350:400)))))+AG%NL @ 400 Hz(=44.3dB)
SNR400=SL400-NL400 % SNR @ 400 Hz(=128.4dB)

% We can see that the results almost match the ones obtained in the lofargram
% for one hydrophone (sensor) at all frequencies
%% ------------------------------------------------------------------------
%% 5)Plot of the actual array element positions reflecting the linear
%% distances between them
%% ------------------------------------------------------------------------
distance=zeros(31,1); % Vector of distances between adjacent elements
Xvec=zeros(32,1); % X vector to represent the linear distances
Xvec(1)=x(1); % It starts at the first point of the real x vector
Yvec=zeros(32,1); % Y vector of zeros to reflect the linear distances

for i=1:31 % loop to calculate the distances between elements


distance(i)=sqrt((x(i+1)-x(i))^2+(y(i+1)-y(i))^2); % Euclidean distance (note the + sign)
Xvec(i+1)=Xvec(i)+distance(i); % creating vector of points
end
% Plot the real positions of the array elements and the linear distances
% between the elements
figure
plot(x,y,'-*r')
hold on
plot(Xvec,Yvec,'-*b')
xlabel('X-Axis','FontWeight','bold');


ylabel('Y-Axis','FontWeight','bold');
title('Array Actual Element Positions and the Linear Distances'...
,'FontWeight','bold');
legend('Actual Array Elements Position','Array Elements Linear Spacing',...
'Location','NorthWest');
%% ------------------------------------------------------------------------
%% 6)Compute the BTR images using the actual element positions and assuming
%% the source is in the far-field
%% ------------------------------------------------------------------------
% Create source points at distance R with different angles
R=20e3; % Distance from source to the array(=20km)
x_s=R*sin(phi*pi/180); % x-points for the sources
y_s=R*cos(phi*pi/180); % y-points for the sources
% Distance matrix contain the distance between each source and each sensor
dis=zeros(Nbeams,M); % 41 angles and 32 sensors

% Loop to compute the distances between sources and the sensors


for counter1=1:Nbeams % 41 beam angles
for counter2=1:M % 32 sensors
dis(counter1,counter2)=sqrt((x_s(counter1)-x(counter2))^2 ...
+(y_s(counter1)-y(counter2))^2);
end
end

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams % loop index k so phi(k) and dis(k,:) address the current beam
u_s=(1/wl(1))*sin(phi(k)*pi/180);
c_mf=exp(-j*2*pi*u_s*(dis(k,:)))./sqrt(M);
c_tbf(:,k)=tv.*c_mf';
end

% Output data @ 100 Hz


Y100=c_tbf'*X100;

% BTR image @ 100 Hz


figure
surf(phi,t,20*log10(abs(Y100')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=100 Hz','FontWeight','bold');

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams % loop index k so phi(k) and dis(k,:) address the current beam
u_s=(1/wl(2))*sin(phi(k)*pi/180);
c_mf=exp(-j*2*pi*u_s*(dis(k,:)))./sqrt(M);
c_tbf(:,k)=tv.*c_mf';
end

% Output data @ 300 Hz


Y300=c_tbf'*X300;

% BTR image @ 300 Hz


figure


surf(phi,t,20*log10(abs(Y300')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=300 Hz','FontWeight','bold');

% Creating steering vector of 32 phones for Nbeams (different angles)


for k=1:Nbeams % loop index k so phi(k) and dis(k,:) address the current beam
u_s=(1/wl(3))*sin(phi(k)*pi/180);
c_mf=exp(-j*2*pi*u_s*(dis(k,:)))./sqrt(M);
c_tbf(:,k)=tv.*c_mf';
end

% Output data @ 400 Hz


Y400=c_tbf'*X400;

% BTR image @ 400 Hz


figure
surf(phi,t,20*log10(abs(Y400')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=400 Hz','FontWeight','bold');

6) References:
1. http://en.wikipedia.org/wiki/Main_Page; accessed 6/3/2008.
2. http://cnx.org/content/m12563/latest/; accessed 6/3/2008.
3. http://cnx.org/content/m12516/latest/; accessed 6/3/2008.
4. Gail L. Rosen, "ULA Delay-and-Sum Beamforming for Plume Source
Localization," Drexel University, Philadelphia, PA 19104.
5. Joseph J. Sikora, "Sound Propagation around Underwater Seamounts,"
master's thesis, MIT, August 2005.
6. D. G. Manolakis, Statistical and Adaptive Signal Processing, Artech
House, Norwood, MA, 2005.
