The time of Halloween last year coincided with an intense period of solar activity.
On 28 October the fourth-largest solar flare since records began in 1976 occurred.
Solar flares emit huge amounts of high-energy photons in the EUV and X-ray bands;
this flare increased the Sun's soft X-ray output by around 3000%. Such an output has
a significant effect on the ionosphere, which is largely ionised by solar photons.
This study aims to better understand the changes in ionospheric parameters during a
solar flare, using data from the EISCAT incoherent scatter radar and the GOES
satellites together with modelling.
Figure: The 28 October X17.2 solar flare at 195 Å extreme ultraviolet.
The electron production rate and the effective recombination rate coefficient are
important parameters: not only do they give the electron density, they also offer clues
about the concentrations of different species in the ionosphere. The apparent reduction
in the effective recombination rate in the D-region during the flare, seen in this study
as well as in others, is thought to be due to an abundance of diatomic ions such as N2+
and O2+. Given time, these would combine with other species, thereby increasing their
recombination rate. In the quiet-time D-region, the vast majority of ions are
polyatomic, with relatively high recombination rates.
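The balance behind these two parameters is that, in photochemical equilibrium, the production rate q is matched by the loss rate α_eff·Ne², so α_eff can be estimated from a measured electron density and a modelled production rate. A minimal sketch, with purely illustrative numbers rather than values from the EISCAT/GOES data:

```python
# Steady-state ionospheric balance: q = alpha_eff * Ne**2, so alpha_eff
# can be estimated from the measured electron density and a modelled
# production rate. The numbers below are illustrative only.

def effective_recombination_rate(q, ne):
    """Estimate alpha_eff [m^3 s^-1] from the production rate q [m^-3 s^-1]
    and the electron density ne [m^-3], assuming photochemical equilibrium."""
    return q / ne**2

q = 1.0e8      # ion-electron pair production rate, m^-3 s^-1 (illustrative)
ne = 1.0e10    # electron density, m^-3 (illustrative)
alpha = effective_recombination_rate(q, ne)
print(f"alpha_eff = {alpha:.1e} m^3 s^-1")   # alpha_eff = 1.0e-12 m^3 s^-1
```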
Artificial aurora: observations and modelling
Mina Ashrafi - Ionosphere and Radio Propagation Group
The Sun is the source of electromagnetic radiation over a wide spectral range, of a
continuous stream of plasma, and of bursts of energetic particles. The ionosphere is
produced primarily by the ionising action of the high-energy part of the solar radiation
and of auroral energetic particles that strike the upper atmosphere, turning the region
into a partly ionised plasma. Ionisation appears at a number of atmospheric levels,
producing the different regions known as the D, E, and F regions, each ionised by solar
radiation of different energies. In addition, at high latitudes the Earth's magnetic
field lines extend outward and, under certain circumstances, can connect to the
interplanetary magnetic field lines. As a result, energetic charged particles carried by
the solar wind can enter the high-latitude ionosphere, where they excite the atmospheric
constituents to higher energy levels, producing the optical emissions known as auroral
lights.
Because of its charged particles, the highly conductive ionosphere can carry electrical
currents as well as reflect, deflect, and scatter radio waves. The scientific case for a
conducting layer in the upper atmosphere was established in the early 1900s, when
Marconi succeeded in transmitting radio signals across the Atlantic. The idea that radio
waves propagate around the curvature of the Earth's surface by reflecting from the
ionosphere was first suggested by Kennelly and Heaviside in 1902. In less than a decade,
radio broadcasting techniques developed rapidly. In 1933 the first signs of a non-linear
effect in ionospheric radio propagation were observed after the construction of a
high-power transmitter in Luxembourg: the modulation of the Luxembourg station could be
heard in the background of signals broadcast by other radio stations. It was suggested
that the high-power radio waves transmitted by this station changed the radio
propagation characteristics of the ionosphere. This was the first time that the
interaction of a radio wave with the ionosphere as a medium was proposed.
Objective
High-power high-frequency radio waves cause plasma turbulence when beamed into the
ionosphere. This causes several phenomena, for example, artificial optical emissions due
to electron acceleration, ion-line backscatter enhancements in incoherent scatter radar
spectra due to Langmuir wave turbulence, and stimulated electromagnetic emissions
(SEE) due to wave coupling on plasma irregularities. On 12 November 2001, the
EISCAT Heating facility, pumping in O-mode transmissions at 5.423 MHz and 550 MW
effective isotropic radiated power with the pump beam dipped 9 degrees south, produced
novel artificial auroral rings.
The rings appeared immediately at pump-on and collapsed into blobs
after ~60 s whilst descending in altitude. Similar altitude descent effects were observed in
the EISCAT UHF radar ion-line enhancements. Likewise the SEE spectra changed as the
altitude approached the fourth electron gyro-harmonic frequency. Optical recordings
were made from Skibotn, Norway (69.35 N, 20.36 E) at 630 and 557.7 nm, and from
Ramfjord (69.59 N, 19.23 E) in white-light. The altitudes of the initial optical ring and
steady-state blob have been estimated.
The location, altitude evolution, and characteristics of the optical emissions have
been compared to the ion-line enhancements and to the approach to the fourth
gyro-harmonic indicated by the SEE spectra. Initial results show that all three
phenomena have similar altitude morphology.
Ionospheric parameters such as electron density, electron temperature, and ion
temperature can be derived from the incoherently backscattered ion-line spectra using
conventional methods. However, heating the ionospheric F-region at frequencies close to
the electron gyro-harmonics can give rise to enhancements in the ion-line spectra, in
which case the spectra cannot be analysed in this way. In the above experiment the pump
frequency corresponds to the fourth electron gyro-harmonic at 215 km altitude. The
EISCAT UHF radar observed a pump-induced overshoot in the ion-line data at the HF
reflection altitude.
The optical and radar signatures of HF pumping started at ~235 km and descended to
~215 km within ~60 s. This effect has been modelled using the solution to differential
equations describing pump-induced electron temperature and density perturbations. The
final temperature inside the heated volume has been calculated. The model results are
compared to EISCAT radar data.
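The flavour of such a perturbation model can be conveyed with a toy relaxation equation: constant pump heating balanced by cooling back towards the background temperature. This is an illustrative sketch only, not the coupled temperature and density equations actually used in the study; the heating rate Q, time constant tau, and background T0 are all invented numbers.

```python
# Toy model of a pump-induced electron-temperature perturbation:
# dTe/dt = Q - (Te - T0)/tau, i.e. constant heating balanced by
# relaxation towards the background temperature T0. Illustrative only;
# not the actual differential equations of the study.

def heat_electrons(t_end, dt=0.01, T0=1000.0, Q=2000.0, tau=5.0):
    """Integrate dTe/dt = Q - (Te - T0)/tau with forward Euler and
    return the electron temperature [K] after t_end seconds."""
    Te = T0
    t = 0.0
    while t < t_end:
        Te += dt * (Q - (Te - T0) / tau)
        t += dt
    return Te

# The steady state is T0 + Q*tau = 11000 K for these toy parameters;
# after 60 s of pumping the temperature has essentially converged.
print(round(heat_electrons(60.0)))   # 11000
```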
Introduction
Two kinds of absorption are recognised in the ionosphere: deviative and non-deviative.
Deviative absorption occurs when the wave spends a long time in the absorbing layer,
and wherever on the ray path significant bending takes place. Non-deviative absorption
occurs in regions where the refractive index is around unity (µ ≈ 1) and the product
Nν is large (N is the electron density and ν the collision frequency), which
corresponds to the D layer of the ionosphere. This type of absorption is measured by
riometers and depends strongly on frequency. Data obtained at different frequencies are
converted to a reference frequency (usually 30 MHz) using an inverse-square law:
A(30 MHz) = A(f) · (f / 30)² dB
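Since absorption in dB scales as 1/f², a measurement at the operating frequency can be referred to the standard 30 MHz with a one-line conversion. A small sketch (the 1 dB input value is illustrative):

```python
# Cosmic-noise absorption in dB scales as 1/f^2, so a measurement at
# frequency f is referred to the standard 30 MHz reference frequency.

def absorption_at_30mhz(a_db, f_mhz):
    """Convert absorption a_db [dB] measured at f_mhz [MHz] to 30 MHz."""
    return a_db * (f_mhz / 30.0) ** 2

# Example: 1 dB of absorption at 38.2 MHz (the IRIS operating frequency)
print(round(absorption_at_30mhz(1.0, 38.2), 2))   # ≈ 1.62 dB at 30 MHz
```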
A riometer (Relative Ionospheric Opacity METER) is a ground-based receiver which
measures cosmic noise absorption (CNA) occurring at 80-90 km altitude in the
ionosphere. Riometers usually operate at frequencies of 20 to 50 MHz. The Imaging
Riometer for Ionospheric Studies (IRIS) in Finland is one of the riometers used for
measuring high-latitude absorption. The Kilpisjärvi IRIS imaging riometer in northern
Finland (69.05° N, 20.79° E) is supervised by Lancaster University (UK) and operated
in conjunction with the Sodankylä Geophysical Observatory (SGO), Finland. It has been
in operation since 2 September 1994. The system operates at 38.2 MHz and produces an
array of 49 narrow beams with widths between 13° and 16°. The basic scanning interval
of the array is one second. Figure 1 shows the IRIS antenna array.
High-latitude absorption, also known as auroral absorption (AA), occurs in the auroral
zone (centred at about 67° geomagnetic latitude and about 10° wide in latitude).
Appleton and colleagues discovered auroral absorption during the International Polar
Year (1933). Auroral absorption normally varies rapidly over the course of the day, but
the events that happen around magnetic local midnight are intense and sharp. It is
caused by electron precipitation in the range 10 to 100 keV at altitudes of 70-100 km,
generally following substorm onset in the midnight sector. These events last a few
minutes. Figure 2 illustrates a typical night-time auroral absorption event.
Auroral absorption may be accompanied by the occurrence of aurora in the ionosphere.
Aurora is due to the excitation of upper-atmospheric gases, particularly O2 and N2, by
energetic particles, mostly electrons. Auroras are visible to the naked eye (Figure 3).
Figure 3: Aurora
Summary of work done so far
450 events have been studied. They occurred during 15-24 UT (MLT at 2115 UT) in the
period 1994-2003. The events are analysed in both the time and frequency domains
(Figure 4). The frequency analysis has been carried out using the Morlet wavelet.
Besides the absorption data, the variations of the magnetic field components and the
AL index are considered for each event. Several statistical studies have been carried
out.
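The Morlet-wavelet frequency analysis can be sketched as follows. The signal here is a synthetic 50 s pulsation standing in for real absorption data, and the wavelet parameters (w0 = 6, the scale range) are common choices rather than the study's exact settings:

```python
import numpy as np

# Sketch of a Morlet-wavelet frequency analysis of a riometer time
# series. The signal and the parameters are illustrative stand-ins.

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.pi ** -0.25 * np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2)

fs = 1.0                                  # 1 Hz sampling (1 s cadence)
t = np.arange(0, 600) / fs                # ten minutes of data
signal = np.sin(2 * np.pi * 0.02 * t)     # toy 50 s pulsation

scales = np.arange(5, 60)
power = np.empty((len(scales), len(t)))
for i, s in enumerate(scales):
    tw = np.arange(-4 * s, 4 * s + 1)            # wavelet support
    w = morlet(tw, s) / np.sqrt(s)               # scale normalisation
    conv = np.convolve(signal, np.conj(w)[::-1], mode="same")
    power[i] = np.abs(conv) ** 2                 # wavelet power

best = scales[power.sum(axis=1).argmax()]
print(best)   # scale with the most power; ≈ w0/(2*pi*f) ≈ 48 here
```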
Future Work
Abstract
A new type of imaging riometer system based on a Mills Cross antenna array is currently
under construction by the Ionosphere and Radio Propagation Group, Department of
Communication Systems, Lancaster in collaboration with the Max-Planck-Institut für
Aeronomie, Germany. The system will have an unprecedented spatial resolution in a
viewing area of 300x300km.
The Mills Cross system considered in this paper provides at least 4 times the resolution
which can be achieved (with the same number of antennas) with a filled array antenna
system. However, the cross correlation technique employed for producing narrow pencil
beams adds a considerable amount of complexity to the system which requires the use of
state-of-the-art FPGA signal processing technology.
First measurements have indicated that antenna sidelobes introduce phase delays that
result in signal reduction/increase especially in the presence of a strong noise source
(radio star). Possible techniques to minimise the effects of the sidelobes will be
presented.
Introduction
The riometer (relative ionospheric opacity meter) determines the radio-wave absorption
in the ionosphere by measuring the received cosmic-noise power. The expected variation
of background noise over a sidereal day is usually referred to as the quiet-day curve
(QDC). The ionospheric opacity is deduced from the difference between the QDC and the
received noise power. Absorption images may be produced by utilising a number of
spatially-distributed narrow beams.
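The QDC-based absorption estimate described above amounts to a dB ratio between the quiet-day power and the received power. A minimal sketch (the power levels are illustrative, not calibrated riometer values):

```python
import math

# Absorption is the dB difference between the quiet-day curve (QDC)
# power and the received cosmic-noise power. Values are illustrative.

def absorption_db(p_qdc_w, p_rx_w):
    """Cosmic-noise absorption in dB from the QDC power and the
    received power (both in watts, or any consistent linear unit)."""
    return 10.0 * math.log10(p_qdc_w / p_rx_w)

# Received power 20% below the quiet-day level:
print(round(absorption_db(1.0e-13, 0.8e-13), 2))   # 0.97 dB
```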
Existing riometers are either widebeam riometers consisting of a single antenna element
above a conducting ground plane with resulting beam widths of the order of 60°, or
imaging riometers made up of up to 256 equally spaced antenna elements that form a
square additive phased filled array. Lancaster University’s Imaging Riometer for
Ionospheric Studies (IRIS) is an example of such an imaging riometer as described in [2].
With its 8x8 antenna array IRIS achieves an angular resolution of 16° at the zenith which
translates to an area of about 25x25 kilometres at 90km height. IRIS utilises a two-stage
matrix of modified 8 port Butler matrices [3] to form a total of 49 simultaneous beams.
Figure 1: Physical layout of several Mills Cross and filled array antenna
configurations
Plans for a new high-resolution riometer have been outlined in [4]. Advances in signal
processing technology will enable the Mills Cross technique to be employed for riometry
work for the first time. A cross of two perpendicular arms of 32 antennas each (totalling
63 antennas), see figure 1, will perform equally to a filled square antenna array of 16x16
antennas (totalling 256 antennas).
Experiment description
The Advanced Riometer Imaging Experiment in Scandinavia (ARIES) system consists of
a cross of two perpendicular arms of 32 antennas each (figure 1).
For the experiment results discussed in the following sections, a subset of only 16x16
antennas was used in an effort to reduce the complexity of the initial tests. The signals
from the two arms of 16 antennas each are fed into a lossless combiner to produce two
perpendicular fan-shaped beams pointing towards zenith. A set of phasing cables enabled
swinging the beams to an alternate, predetermined ‘worst-case’ direction. This is the
direction where strong signals are received by the sidelobes whereas the pencil beam
created by the cross correlation stage looks at a quiet part of the sky.
The signals from the combiners were fed into two receivers. The receivers employed an
in-phase/quadrature sampling technique. The receiver bandwidth was 1MHz,
considerably wider than the bandwidths that are used with existing filled array riometers
(IRIS: 250kHz). The resulting output was digitised using a high-speed A/D converter
board that plugs into a standard PC. The board is capable of continuously and
synchronously sampling 4 analogue channels at up to 10MHz sampling rate and 12 bit
resolution. All experiments described below were carried out with a sampling rate of
2.2MHz.
Cross correlation of the two resulting complex signals to find the signal from the area of
sky common to both fan beams was carried out digitally. The integrated results of the
cross correlation stage were stored for an initial integration time of 0.5s.
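The digital pencil-beam formation described above can be sketched with synthetic complex baseband samples: the signal from the patch of sky common to both fan beams correlates, while the uncorrelated contributions average away over the integration. Signal levels and the noise model are illustrative assumptions:

```python
import numpy as np

# Sketch of the digital pencil-beam formation: the complex samples from
# the two fan beams are cross correlated and integrated. Synthetic data.

rng = np.random.default_rng(0)
n = 2_200_000          # one second of samples at the 2.2 MHz rate

# Sky signal common to both fan beams, plus independent noise per beam:
common = rng.normal(size=n) + 1j * rng.normal(size=n)
ew = common + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
ns = common + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Zero-lag cross correlation, integrated over the full second:
pencil = np.mean(ew * np.conj(ns))
print(round(abs(pencil), 2))   # ≈ 2.0, the power common to both beams
```

The uncorrelated per-beam noise contributes nothing on average, which is exactly why the cross correlator isolates the common patch of sky (and also why it is so sensitive to phase differences between the two inputs, as the next section shows).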
Experimental results
A sample dataset for October 30th, 2002 can be found in figure 2. The top panel shows
widebeam data. The middle two panels show data as recorded by the two fan beams. The
bottom panel is the result of cross correlating the signals from the two fan beams to
derive the signal common to both fan beams, referred to as pencil beam.
[Figure 2: Riometer data for 30 October 2002, 00-24 UT, received power in dBm
(approximately -100 to -110 dBm). Panels from top: widebeam; E-W fan beam (2502) and
N-S fan beam (2503), each comparing the calibrated measurement with a simulation; and
the offset-corrected pencil beam (ABS) at 10 s and 200 s resolution, with simulation.]
The cause of this observation was found to be the sensitivity of the cross correlator to
phase differences in the input signals that are to be cross correlated.
It is the phase difference between the signals received from the two fan beams that is
responsible for the occurrence of minima where we would expect to find maxima. One
example is the minimum at around 15:10 UT. At that time we have Cygnus passing
through the main lobe of the NS fan beam, and through the second sidelobe of the EW
fan beam. The reading is much reduced due to the strong anticorrelation between the
signal received from Cygnus by the two fan beams.
Remedy
In order to attain high quality data, the sidelobes will have to be reduced to a level far
below the one that was used during the initial experiment and in fact far below the
sidelobe levels that are commonly achieved with today’s filled array riometers. Currently,
two techniques are primarily being investigated to accomplish this: tapering and adaptive
beam steering.
Tapering
Tapering, i.e. attenuating the signals from the individual antenna elements of the phased
array according to a given ‘windowing function’, prior to combining them in the
beamformer, can in theory reduce the level of the sidelobes to any extent. Tapering is
straightforward to implement, and the exact tapering function can even be modified
on-the-fly if a digital beamforming system is used. Apart from triangular tapering, which
originally promised to enable the 32x32 Mills Cross array to achieve the same results in
terms of spatial resolution and sensitivity as a 16x16 filled array, other tapering functions
can be used. See figure 3 for some examples. The goal in this case is to find the optimum
solution that sufficiently suppresses the sidelobes while maintaining a main beam narrow
enough to achieve the wanted spatial resolution.
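The trade-off can be illustrated numerically by comparing the peak sidelobe of a uniform (untapered) 32-element linear array with that of a triangularly tapered one, using the FFT of the element weights as the array factor. The 32-element array and the triangular window follow the text; everything else is a sketch:

```python
import numpy as np

# Compare the peak sidelobe level of an untapered (uniform) and a
# triangularly tapered 32-element linear array via the array factor.

def sidelobe_level_db(weights, nfft=4096):
    """Peak sidelobe level [dB relative to the main beam] of a linear
    array with the given element weights."""
    af = np.abs(np.fft.fft(weights, nfft))
    af /= af.max()
    db = 20 * np.log10(np.maximum(af, 1e-12))
    i = 1
    while db[i] < db[i - 1]:       # walk down the main lobe to its null
        i += 1
    return db[i:nfft // 2].max()   # highest peak beyond the main lobe

uniform = np.ones(32)
triangular = 1 - np.abs(np.linspace(-1, 1, 32))
print(round(sidelobe_level_db(uniform), 1))      # around -13 dB
print(round(sidelobe_level_db(triangular), 1))   # around -27 dB
```

The triangular taper buys roughly 13 dB of sidelobe suppression at the cost of a wider main lobe, which is precisely the resolution-versus-sidelobe trade-off described above.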
Summary
For the first time, meaningful pencil beams have been obtained from a Mills Cross type
riometer system. A Mills Cross antenna array connected to low-noise, high-gain receivers
with a wide dynamic range and a fully digital beamformer and cross correlator is capable
of achieving the same resolution as a filled phased array antenna whilst requiring a much
smaller number of antennas (factor 4 in the case of a 32x32 cross).
[Figure 3: Amplitude (0 to -50 dB) and phase (±360°) response for several tapered
linear phased arrays, untapered and triangular, versus the phase angle between aerials
(-3 to 3 rad).]
Bibliography
[4] E. Nielsen and T. Hagfors. Plans for a new rio-imager experiment in Northern
Scandinavia. Journal of Atmospheric and Solar-Terrestrial Physics, 59(8):939-949,
1997.
** This presentation is based on a paper by Grill et al. [1] presented at the ISCTA’03.
Managing Mobility
Introduction
As computer systems are applied to an ever greater number of aspects of personal and
professional life, the quantity and complexity of software systems are increasing rapidly.
At the same time, the diversity in hardware architectures remains large and is likely to
grow with the deployment of future mobile systems, embedded systems, PDAs, and
portable computing devices. To add to the complexity, the wide range of networks that
mobile systems operate on, through roaming or migration, poses challenges in the
management of resources. To manage such diversity of software and hardware,
middleware technologies that isolate the underlying platforms from the higher-level
applications have been developed.
Wireless CORBA
The basic design principles concentrate on client-side ORB transparency and simplicity,
because transparency of the mobility mechanism to non-mobile ORBs is one of the primary
design constraints. Wireless CORBA does not support solutions that
would require modifications to a non-mobile ORB in order for it to interoperate with
CORBA objects and clients running on a mobile terminal. In other words, a stationary
(non-mobile, or fixed network) ORB does not have to implement this specification in
order to interoperate with CORBA objects and clients running on mobile terminals.
Wireless CORBA architecture identifies three different domains: home domain, visited
domain, and terminal domain. The Home Domain for a given terminal is the domain that
hosts the Home Location Agent of the terminal. A Visited Domain is a domain that hosts
one or more Access Bridges through which it provides ORB access to some mobile
terminals. The Terminal Domain consists of a terminal device that hosts an ORB and a
Terminal Bridge through which the objects on the terminal can communicate with objects
in other networks.
RT CORBA aims at solving problems associated with the allocation of resources and the
end-to-end predictability of system execution. RT CORBA is an optional set of
extensions to CORBA. To ensure that the real-time requirements of a system are met, all
parts of the system must behave deterministically and combine predictably. The
interfaces and mechanisms provided by RT CORBA specifications facilitate a predictable
combination of the ORB and the application. The application manages the resources by
using the RT CORBA interfaces and the ORB’s mechanisms coordinate the activities that
comprise the application.
The Real-Time ORB relies upon the RTOS to schedule threads that represent activities
being processed and to handle resource contention. In RT CORBA threads are given
priorities. When a remote call is made, the thread priority is passed from client to
server. If the priority scheme implemented on the server is not CORBA compatible, the
priority is mapped onto the local OS scheme.
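RT CORBA defines priorities on a uniform scale of 0 to 32767, which each ORB maps onto its native OS range. The sketch below illustrates the idea in Python; the linear mapping and the native range 1..99 are illustrative assumptions, not the RT CORBA PriorityMapping interface itself:

```python
# Illustrative sketch (not the RT CORBA API): mapping a CORBA priority
# in the uniform range 0..32767 onto a hypothetical native OS range
# 1..99 with a simple linear rule.

def to_native_priority(corba_prio, native_min=1, native_max=99):
    """Linearly map a CORBA priority (0..32767) to the local OS range."""
    if not 0 <= corba_prio <= 32767:
        raise ValueError("CORBA priority out of range")
    span = native_max - native_min
    return native_min + round(corba_prio * span / 32767)

print(to_native_priority(0))       # 1
print(to_native_priority(32767))   # 99
print(to_native_priority(16384))   # 50
```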
• The scheduling parameter elements associated with the chosen discipline may be
changed at any time during execution;
• The schedulable entity is a distributable thread that may span node boundaries, carrying
its scheduling context among scheduler instances on those nodes.
Outline of Presentation:
This presentation will initially introduce CORBA and explain how platform and language
independencies are achieved. Then it will be shown how CORBA can be used to manage
mobility issues and resources in mobile communication systems.
Multimedia Multiplexing Protocol for future wireless/wired
communication networks
Vasileios Zarimpas - Informatics Research Group
In this thesis we investigate two applications of linear block codes: the first, the
more common one, is error protection in communication systems; the second deals with
the security of communications, particularly public key cryptography.
Second, in communication security, we look into public key cryptosystems (PKC) based
on the problem of general decoding for linear codes, in particular the McEliece PKC.
We focus on PKC based on rank codes². A new family of rank codes, called reducible
rank codes, is presented, together with a PKC based on reducible rank codes. We present
different trade-offs for using a combined system for error protection and cryptography.
The general decoding problem for linear codes is NP-complete³, as shown by Berlekamp
et al. (1978). In fact, the difficulty of this problem was exploited in cryptography by
McEliece (1978), when he introduced his public key cryptosystem (PKC).

¹ Regular LDPC matrices have an equal number of ones per column (the column weight) and
an equal number of ones per row (the row weight). Irregular matrices may have different
column weights and/or different row weights.

² Codes based on the rank metric rather than the Hamming metric.

³ A problem is assigned to the NP (non-deterministic polynomial time) class if it is
verifiable in polynomial time by a non-deterministic Turing machine. (A non-deterministic
Turing machine is a "parallel" Turing machine which can take many computational paths
simultaneously, with the restriction that the parallel machines cannot communicate.) A
P-problem (whose solution time is bounded by a polynomial) is always also NP. If a
problem is known to be NP, and a solution to the problem is somehow known, then
demonstrating the correctness of the solution can always be reduced to a single P
(polynomial time) verification. If P and NP are not equivalent, then the solution of
NP-problems requires (in the worst case) an exhaustive search. Linear programming, long
known to be NP and thought not to be P, was shown to be P by L. Khachian in 1979. It is
an important unsolved problem to determine whether all apparently NP problems are
actually P. A problem is said to be NP-hard if an algorithm for solving it can be
translated into one for solving any other NP problem. It is much easier to show that a
problem is NP than to show that it is NP-hard. A problem which is both NP and NP-hard is
called an NP-complete problem. (http://mathworld.wolfram.com/NP-Problem.html)
Public key cryptography is a relatively new branch of cryptography. Whereas LDPC codes,
turbo codes, etc. are channel coding schemes, whose objective is to deliver information
correctly from the transmitter to the receiver, cryptography deals with the security of
communication. The fundamental objective of cryptography is to enable two people,
usually referred to as Alice and Bob, to communicate over an insecure channel in such a
way that an opponent cannot understand what is being said. Until the 1970s, known
cryptosystems could be categorised as conventional or secret-key cryptosystems. These
cryptosystems use a single key for both encryption and decryption, have very efficient
algorithms, and are highly secure. Their main problem lay in the need to distribute the
key over a secure channel before communication over the public, insecure channel could
proceed.
In 1976, Diffie and Hellman addressed the problem of key management by introducing the
idea of a public-key system, where the encryption key (made public) is different from
the decryption key (kept private by each user). The algorithm should enable any person
to encrypt a message using the intended receiver's encryption key, while only the
receiver, who holds the corresponding decryption key, can decrypt the message. The
first realisation of a public-key system came in 1977 from Rivest, Shamir, and Adleman,
who invented the well-known RSA cryptosystem [Rivest et al. (1977)].
Since then, numerous public-key cryptography algorithms have been proposed. Many of
these are insecure. Of those still considered secure, many are impractical. Either they
have too large a key or the ciphertext is much larger than the plaintext. Only a few
algorithms are both secure and practical.
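The public/private key idea can be made concrete with a toy RSA example. The primes here are textbook-sized and purely illustrative; real RSA moduli are hundreds of digits long:

```python
# Toy RSA with tiny primes, purely to illustrate the public/private key
# idea: anyone can encrypt with (e, n), only the private d decrypts.

p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: e*d = 1 (mod phi)

m = 65                    # plaintext, represented as a number < n
c = pow(m, e, n)          # Alice encrypts with Bob's public key
assert pow(c, d, n) == m  # only Bob's private key recovers m
print(c, pow(c, d, n))    # 2790 65
```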
In 1978 Robert McEliece proposed a public-key cryptosystem based on algebraic coding
theory, in particular on the difficulty of the general decoding problem. The algorithm
makes use of the existence of a class of error-correcting codes known as Goppa codes.
Although it was one of the first public-key algorithms, and there were no successful
cryptanalytic results against it, it has never gained wide acceptance in the
cryptographic community. The scheme is two or three orders of magnitude faster than
RSA, but has some problems. The public key is enormous: 2^19 bits long, compared to
2^10 bits in RSA. The data expansion is large: the ciphertext is twice as long as the
plaintext, whereas in RSA the ciphertext has the same length as the plaintext.
Following the idea of applying the problem of decoding a general linear error-correcting
code in public key cryptosystems, two other classes of codes were used: Reed-Solomon
codes by Niederreiter (1986), and Maximum Rank Distance (MRD) codes by Gabidulin
(1991). Of the three systems, the rank-code-based cryptosystem offers a significantly
smaller possible key size, mainly because it depends on the rank metric rather than the
Hamming metric. The size of the public key for a secure Niederreiter version of the
McEliece cryptosystem is about 2^18 bits, whereas the size of the public key for a
secure rank-codes cryptosystem is about 2^14 bits.
The first rank-codes-based cryptosystem [Gabidulin et al. (1991)] applies Maximum Rank
Distance (MRD) codes [Gabidulin (1985)]. Gibson (1993, 1995) attacked the system for
small parameters and showed that the key size needs to be at least about
55296 ∼ 2^16 bits, in GF(2^48), in order to have a secure system.
Finally, we propose a new idea for attempting to break the McEliece PKC based on Goppa
codes by applying LDPC decoding algorithms. The idea is to search for a parity check
matrix that defines the null space of the encryption code and that performs well with
the sum-product iterative decoding algorithm. We first calculate the parity check
matrix in standard echelon form from the public key matrix. This matrix form does not
perform well with the sum-product algorithm. We then propose and discuss methods to
transform the matrix in standard echelon form into another form that is better suited
to the sum-product algorithm.
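For a systematic generator matrix G = [I_k | P] over GF(2), the parity check matrix in standard form follows directly as H = [P^T | I_{n-k}]. The toy [7,4] code below is illustrative; a McEliece public key is of course far larger:

```python
import numpy as np

# For a systematic generator matrix G = [I_k | P] over GF(2), a parity
# check matrix in standard form is H = [P^T | I_{n-k}]: every codeword
# c = m*G then satisfies H*c^T = 0 (mod 2). Toy [7,4] example.

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)
k, r = P.shape                       # k = 4 message bits, r = 3 checks
G = np.hstack([np.eye(k, dtype=int), P])
H = np.hstack([P.T, np.eye(r, dtype=int)])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2               # systematic bits followed by parity
print(H @ codeword % 2)              # [0 0 0] -- H annihilates codewords
```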
AN IMPROVED WATERMARKING TECHNIQUE FOR
THE CASTING OF DIGITAL SIGNATURES
I. Katsaros - Informatics Research Group
Abstract
Two-Dimensional Optical Storage (TwoDOS) discs are being developed in which channel bits
are arranged on a 2D hexagonal lattice. The aim is to increase capacity by a factor of two
and data rate by a factor of 10 over "Blu-Ray Disc" technology. A further increase in
capacity and data rate can be realised by adding another 'dimension' to the writing of
data, such as using multiple levels instead of the two levels (pit and land) used in
TwoDOS discs. In this presentation a number of signal processing issues are discussed,
such as non-linear and linear channel models, the nature of noise in the optical medium,
symbol detection, coding, etc.
TwoDOS optical technology is based on a so-called broad spiral, along which information
is written as a limited number of parallel data rows stacked upon each other and arranged
in a coherent 2D format, with no spacing between the rows. A guard band consisting of one
row of known land symbols is located between successive revolutions of the broad spiral.
Multilevel TwoDOS data can be written on the optical medium either as a land symbol,
which is the flat reflecting surface of the disc, or as one of M possible pit symbols of
varying radii. 4-level TwoDOS consists of one land and three pit symbols. The land symbol
is the flat reflecting surface within a hexagonal cell. Pit symbols can be mastered as
pit holes with varying pit radii r and a fixed phase depth Φ, centred within the
hexagonal cell that is available for each symbol. The light travelling to the bottom of a
pit of depth d and back out acquires a phase depth of Φ = 2π (2d / λ′) with respect to
light reflected from the all-land area, where λ′ is the wavelength of the laser light
inside the cover layer. For simplicity the phase depth is chosen to be Φ = π, which
results in d = λ′ / 4.
Data will be read-out in parallel from the spiral using an array of laser spots, which after being
diffracted from the data pattern on the disc, can be detected by an array of photo-detectors. The
sampled signal waveforms are the result of diffraction of the laser beam within each spot. Symbol
detection, based on the signal levels obtained from a channel model, is performed on these signal
waveforms to estimate the original data written to the disc. In order to determine the feasibility of
manufacturing multidimensional TwoDOS discs and determine how much more gain in capacity
this technology can provide, it is important to start with an accurate simulation of the signal
waveforms that would be obtained during read-out of these discs. The signal waveforms will
assist in developing bit detection and coding technologies. The most accurate model
would be based on vector diffraction but, due to its computational complexity, a model
based on the assumption of scalar diffraction is presented. The explicit dependence of
the model on the channel bits makes it very suitable for signal processing purposes.
The model is also very convenient for assessing the importance of non-linear
contributions to the signal waveform, which are significant in multidimensional TwoDOS
due to the close packing of the bit cells.
Unlike the non-linear channel models, the linear channel model is simple from a signal
processing point of view because it is less complex than non-linear models such as the
vector diffraction model or the scalar diffraction model. However, the linear model does
not take into account non-linear effects such as pit asymmetry. For multilevel TwoDOS,
we assume linear ISI has a relatively larger influence than the non-linear
contributions. We can also assume that a pre-compensation iteration process can remove
the non-linear ISI. Traditionally, in one-dimensional optical storage (CD, DVD and BD),
read-out channels are often simulated by a linear model, which is characterised by its
Modulation Transfer Function (MTF) as derived in the Braat-Hopkins formula. For TwoDOS,
the extension of this formalism to the 2D case will be shown in the presentation.
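The 1D linear model referred to here is characterised by the diffraction-limited MTF with optical cut-off frequency fc = 2·NA/λ. A sketch of that MTF; the Blu-ray-style numbers (NA = 0.85, λ = 405 nm) are assumptions for illustration, not parameters from the presentation:

```python
import math

# Diffraction-limited 1D modulation transfer function of an optical
# read-out channel, with cut-off fc = 2*NA/lambda. Blu-ray-style
# numbers (NA = 0.85, 405 nm) are used purely for illustration.

def mtf(f, na=0.85, wavelength=405e-9):
    """MTF at spatial frequency f [1/m]; zero beyond the cut-off."""
    fc = 2 * na / wavelength
    x = f / fc
    if x >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

fc = 2 * 0.85 / 405e-9
print(round(mtf(0.0), 3))        # 1.0 at DC
print(round(mtf(0.5 * fc), 3))   # ≈ 0.391 at half the cut-off
print(mtf(1.2 * fc))             # 0.0 beyond the cut-off
```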
Media noise is caused by imperfections in the optical layer of the medium. In this paper
we categorise media noise into two predominant forms of imperfection: (i) variations in
the pit areas of the symbol cells with respect to the intended or nominal sizes,
described as pit-size noise; and (ii) deviations of the pit positions from the exact
centre of the symbol cell during mastering, resulting in pit-position noise.
Viterbi-based algorithms that simultaneously process multiple rows of a set of 2D data have
been proposed in order to reduce the complexity of a traditional full-fledged Viterbi detector.
The Multi-track Viterbi Algorithm (MVA) is defined as a demodulator that processes multiple
tracks of a full-surface signal; the complex problem of performing symbol detection over the
meta-spiral is broken down into a number of bit-detectors, each processing a set of adjacent
tracks. We adopt a stripe-wise symbol detector which, like MVA, is based on a concatenation
scheme of interconnected Viterbi detectors, each operating on a subset of the rows of the
meta-spiral and passing its output as side information to subsequent stripes. It was originally
shown that the Viterbi algorithm can be used as a Maximum Likelihood Sequence Detector
(MLSD) for ISI channels with Additive White Gaussian Noise (AWGN). In applications where
the channel noise is correlated, with correlation statistics that are signal-dependent, the Viterbi
algorithm is no longer MLSD, and we have extended the branch metric calculations in order to
accommodate the effect of the media noise.
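The role of the branch metric in Viterbi detection can be sketched on a simple one-dimensional two-tap ISI channel with Euclidean (AWGN-optimal) metrics; the taps `h` below are illustrative and not the TwoDOS channel:

```python
def viterbi_mlsd(received, h):
    """Maximum-likelihood sequence detection of binary (+1/-1) symbols
    through a 2-tap ISI channel y[k] = h[0]*x[k] + h[1]*x[k-1] plus AWGN,
    using the Viterbi algorithm with squared-Euclidean branch metrics."""
    states = [-1, +1]                       # trellis state = previous symbol
    cost = {s: 0.0 for s in states}
    paths = {s: [] for s in states}
    for y in received:
        new_cost, new_paths = {}, {}
        for s_next in states:               # candidate current symbol x[k]
            best = None
            for s_prev in states:
                expected = h[0] * s_next + h[1] * s_prev
                metric = cost[s_prev] + (y - expected) ** 2
                if best is None or metric < best[0]:
                    best = (metric, paths[s_prev] + [s_next])
            new_cost[s_next], new_paths[s_next] = best
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]   # survivor with the lowest cost

h = [1.0, 0.5]                              # hypothetical channel taps
x = [+1, -1, -1, +1, +1]
y = [h[0] * x[k] + h[1] * (x[k - 1] if k else 0) for k in range(len(x))]
detected = viterbi_mlsd(y, h)
```

Accommodating signal-dependent correlated media noise, as described above, amounts to replacing the squared-Euclidean term with a metric weighted by the (signal-dependent) noise statistics; the structure of the trellis search is unchanged.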
Ultra Wideband Radio Technology and Applications
Ultra Wideband (UWB) technology, useful for both communication and sensing
applications, has the potential to provide solutions for many of today’s problems in the
areas of spectrum management and radio system engineering.
Radio spectrum is a scarce, finite resource, and its lower band is considered to be fully utilised.
UWB has the potential to address this problem and to revolutionise radio communications,
radar and positioning. It allows co-existence with the already licensed operators in the
lower band of the radio spectrum, and can be used in the higher band as well.
Its inherent potential has attracted growing interest in UWB as a viable candidate for short-
range, high-speed indoor radio communication services. UWB radio technology could
play an important role in the realization of future pervasive and heterogeneous
networking.
Semantic-based Document Clustering
Tony Evans - Digital Signal Processing Research Group
Introduction:
The World Wide Web is a vast repository of information in which webpages are searched
for using search engine applications that return results based on keyword frequency.
Since no semantic information is attached to webpages, a typical search query returns a
large number of results linking to webpages in which the keyword appears in differing
contexts. As a result, a user may have to search through many results before finding a
webpage that uses the keyword in the intended context.
An improvement to current search engine results would be to group search results into
semantically similar clusters that are based on the context in which the keyword that is
found in the webpage appears. This is done by not simply counting the frequency of
keyword occurrences, but also by analysing words surrounding the keyword, and their
appearance in other webpages. The result would be a set of clusters of webpages that are
semantically similar. For example, a search for a word with several meanings, such as
“cold” would return results that are grouped in terms of weather, illness, the cold war,
and emotions.
The aim of my current project is to develop a server-side search application that utilizes
the Google API beta release to retrieve a set of links to webpages based on a keyword
query, and to improve the search results by regrouping them into semantically similar
clusters. This is done by pulling the webpage data of each search result from the Google
cache and comparing the sentences in which the keyword occurs, so that similar webpages
are grouped together. Clustering search results according to their semantic meaning is a
topic associated with the “Semantic Web”.
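A minimal sketch of this grouping idea, using a crude context-window model in place of full sentence analysis; the `window` size, the Jaccard similarity measure, and the `threshold` are illustrative assumptions, not the project's actual method:

```python
import re

def context_words(text, keyword, window=3):
    """Collect the words appearing within `window` positions of each
    occurrence of `keyword` (a crude stand-in for sentence analysis)."""
    words = re.findall(r"[a-z]+", text.lower())
    ctx = set()
    for i, w in enumerate(words):
        if w == keyword:
            ctx.update(words[max(0, i - window):i + window + 1])
    ctx.discard(keyword)
    return ctx

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(pages, keyword, threshold=0.2):
    """Greedy single-pass clustering: a page joins the first cluster whose
    representative shares enough keyword context, else starts a new one."""
    clusters = []        # list of (representative context, member indices)
    for idx, text in enumerate(pages):
        ctx = context_words(text, keyword)
        for rep, members in clusters:
            if jaccard(ctx, rep) >= threshold:
                members.append(idx)
                break
        else:
            clusters.append((ctx, [idx]))
    return [members for _, members in clusters]

pages = [
    "the cold weather brought snow and ice to the region",
    "snow and a cold wind made the weather severe",
    "a cold is an illness causing a cough and sore throat",
]
groups = cluster(pages, "cold")      # weather pages together, illness apart
```

The two weather pages share context words such as "snow" and "weather" and fall into one cluster, while the illness page starts its own, mirroring the "cold" example above.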
Search results must be fetched in a timely manner for the application to be feasible. To
achieve this I use the latest beta release of the Google API, which allows programmers to
write applications that communicate with the Google servers directly using SOAP (Simple
Object Access Protocol), a set of conventions for invoking code using XML over HTTP,
similar to RPC calls. The Google API provides methods for search queries and for fetching
copies of webpages via Google's cache.
Classification of web documents:
Fuzzy set theory (Zadeh 1965) represents classes whose boundaries are not sharp, and is
therefore ideal for the ambiguous nature of keyword semantics that underlies the described
problem with current search engines. Fuzzy logic uses values in the interval [0,1]. In a
fuzzy set, the transition between membership and non-membership is gradual, expressed as
a degree of truth, so that the result of applying fuzzy set theory to document clustering
would be a degree of truth that a document belongs to a particular set.
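One common way to obtain such degrees of truth is the fuzzy c-means membership formula. The sketch below assumes the distances from a document to each cluster centre are already known; the fuzzifier `m` is a standard but illustrative parameter:

```python
def fuzzy_memberships(distances, m=2.0):
    """Fuzzy c-means style membership: the degree of truth in [0, 1] that a
    document belongs to each cluster, computed from its distance to each
    cluster centre. The memberships sum to 1 across clusters."""
    memberships = []
    for d_i in distances:
        if d_i == 0.0:                       # document sits on a centre
            return [1.0 if d == 0.0 else 0.0 for d in distances]
        inv = sum((d_i / d_j) ** (2.0 / (m - 1.0)) for d_j in distances)
        memberships.append(1.0 / inv)
    return memberships

# A document at distance 1 from one cluster centre and 3 from another
# belongs mostly, but not entirely, to the first cluster.
u = fuzzy_memberships([1.0, 3.0])
```

With `m = 2` the document above gets membership 0.9 in the nearer cluster and 0.1 in the farther one, rather than a hard 1/0 assignment.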
The result of the semantic distance measure is a value that classes a pair of documents as
either…
Summary:
Given a search phrase of one or several words, the application presents a list of clustered
results of similar webpages ordered by weighting of keywords. The application would
improve the result of the search query of existing search engines by
Description:
Dynamic Pattern Recognition (DPR) is an interesting research area in the field of
classification. It addresses several issues in practical applications such as 2D/3D
animation modelling, pattern tracking, behaviour learning and decision making.
The main difference, and also the main difficulty, of DPR compared with still pattern
recognition is that the patterns in DPR are time-varying. A dynamic pattern changes its
shape, location, colour or any other attribute according to its own specific formula as time
passes. Knowledge of the specific dynamic environment in which the patterns exist is
therefore an important fundamental with which DPR researchers must be familiar. Since
the behaviour of dynamic patterns is linked to the time domain, several learning
mechanisms have been proposed to contribute to solutions of DPR, for example the
artificial neural network, the Bayesian network, the hidden Markov model and the Kalman
filter. In some modelling schemes, efficient algorithms have been introduced to improve
the system's global performance; fuzzy theory, probability theory and genetic algorithms
are perhaps the most popular methodologies for aiding decision making.
Objective:
To study the efficiency of the novel adaptive eTS (evolving Takagi-Sugeno) model on
channel equalization and time series prediction.
The flow chart of the eTS algorithm can be summarised in the following steps:
Step 1: Initialization of the rule base structure (the antecedent part of the rules)
Step 2: Reading the next data sample
Step 3: Recursive calculation of the potential of each new data sample
Step 4: Recursive update of the potentials of the old centres, taking into account the
influence of the new data sample
Step 5: Possible modification or upgrade of the rule base structure, based on the potential
of the new data sample in comparison with the existing rule centres
Step 6: Recursive calculation of the consequent parameters
Step 7: Prediction of the output at the next step
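A much-simplified sketch of Steps 3-5, using a non-recursive potential in place of the recursive eTS formulas; the density measure and the acceptance rule below are illustrative simplifications of the actual algorithm:

```python
import numpy as np

def potential(z, data):
    """Simplified (non-recursive) potential of a point: high when the point
    lies in a densely populated region of the data seen so far."""
    d2 = np.sum((data - z) ** 2, axis=1)
    return 1.0 / (1.0 + d2.mean())

def evolve_rule_base(samples):
    """Sketch of Steps 3-5: a new sample becomes a rule centre when its
    potential exceeds the potential of every existing centre."""
    data = [samples[0]]
    centres = [samples[0]]                  # first sample seeds the rule base
    for z in samples[1:]:
        data.append(z)
        arr = np.array(data)
        p_new = potential(z, arr)           # Step 3
        p_centres = [potential(c, arr) for c in centres]   # Step 4
        if p_new > max(p_centres):          # Step 5: upgrade the rule base
            centres.append(z)
    return centres

# Two well-separated groups of samples: a second rule centre appears once
# enough data has accumulated around the second group.
samples = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centres = evolve_rule_base(samples)
```

The real eTS updates these potentials recursively from sufficient statistics, so each new sample costs a fixed amount of computation regardless of how much data has been seen.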
The effect of the different SNR on the value of bits in error out of 800 is given below
90
80
70
60
B
E 50 SNR
40
30
20
10
0
1 2 3 4 5 6 7 8 9 10 11
SN
[Figures: Parameter Evolution 3 and Parameter Evolution 4]
The effect of the radii on the number of rules and on the error measure can be represented
in the table:
1) eTS has been tested on one communication problem and on other time-series benchmarks
4) The value of the radii plays an important role in controlling the number of rules
5) The computational efficiency of the eTS model rests on a non-iterative, recursive
procedure which combines the Kalman filter, with proper initialization, and online
unsupervised clustering
Clustering of a Segmented Image using a Genetic Algorithm
Presented by Kotipalli Shiva Yuvaraj
Supervised by Dr Plamen Angelov, Prof Costas Xydeas.
Introduction
The study of image compression has led to many algorithms aimed at achieving the best
possible compression. Compression can be achieved by classifying elements based on
their properties. In this topic, regions of a segmented image are clustered according to
their shape properties using a genetic algorithm. The regions are clustered using a
directed random approximation over many generations in order to optimise the
clustering. In each generation, the fitness of the result is calculated in order to produce
new offspring. These offspring chromosomes are then analysed on their shape properties
to further improve the clustering, and they become the parent chromosomes that produce
the next generation. With every generation it becomes more evident that the algorithm
converges towards an improved clustering of the segmented regions.
Methodology
Every region of the segmented image is labelled; if the image is segmented into 500
regions, a chromosome is built with 500 variables, each variable corresponding to a
region.
Chromosome 1:
Region Number 1 2 3 4 5 6 7 8 …………………………… 499 500
Cluster Number 3 5 12 1 5 5 2 3 …………………………….. 3 5
Figure 1
A diagrammatic representation of a chromosome.
The regions are randomly allocated a cluster number using a uniform distribution
function, and are clustered accordingly. 50 such chromosomes are generated; the number
of clusters is predefined. The aim is to cluster the regions according to their shape
properties, defined by the area, perimeter and roundness of a region, so that regions with
similar shape properties are clustered together.
A fitness value is calculated for every chromosome based on an objective function; the
objective function used here is
OBJ = (1 + average distance between clusters) / (1 + average distance within clusters)
Considering the three shape properties as coordinates in space, the regions are placed in
this space according to their clusters: regions within the same cluster should be similar,
and so lie at minimal distance from one another, while regions in different clusters should
be dissimilar, lying at maximal distance.
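A sketch of this fitness computation, assuming each region is described by the three shape features and taking cluster centres to define the two average distances; those exact distance definitions are assumptions for illustration:

```python
from itertools import combinations
import numpy as np

def fitness(features, assignment):
    """Fitness of a clustering chromosome:
    OBJ = (1 + average distance between cluster centres)
        / (1 + average distance of regions to their own centre)."""
    features = np.asarray(features, dtype=float)
    labels = sorted(set(assignment))
    centres = {c: features[[i for i, a in enumerate(assignment) if a == c]].mean(axis=0)
               for c in labels}
    within = np.mean([np.linalg.norm(features[i] - centres[a])
                      for i, a in enumerate(assignment)])
    between = np.mean([np.linalg.norm(centres[c1] - centres[c2])
                       for c1, c2 in combinations(labels, 2)])
    return (1.0 + between) / (1.0 + within)

# Each region is described by (area, perimeter, roundness); a chromosome
# maps region index -> cluster number.
features = [(1.0, 4.0, 0.9), (1.1, 4.1, 0.9), (9.0, 12.0, 0.3), (9.2, 12.1, 0.3)]
good = fitness(features, [1, 1, 2, 2])   # similar regions grouped together
bad = fitness(features, [1, 2, 1, 2])    # dissimilar regions mixed
```

Grouping the two small round regions apart from the two large elongated ones gives a much higher OBJ value than mixing them, which is exactly what the genetic search is driven to maximise.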
Based on this objective function, every chromosome yields a fitness value; the
chromosomes with higher fitness are said to be healthier. The healthiest set of
chromosomes is used to generate a new generation of chromosomes by crossover and
mutation, creating the next set of 50 chromosomes. The same process continues for many
generations, and the chromosome with the highest fitness is taken as the best clustering.
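The generation loop above can be sketched as follows; the elitist selection, single-point crossover, and mutation rate are illustrative choices rather than the project's exact operators:

```python
import random

def crossover(p1, p2):
    """Single-point crossover of two chromosomes (region -> cluster maps)."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(chrom, n_clusters, rate=0.02):
    """Reassign each region to a random cluster with small probability."""
    return [random.randrange(1, n_clusters + 1) if random.random() < rate else c
            for c in chrom]

def next_generation(population, fitnesses, n_clusters, elite=10):
    """Keep the `elite` fittest chromosomes as parents and refill the
    population with their crossed-over, mutated offspring."""
    ranked = [c for _, c in sorted(zip(fitnesses, population),
                                   key=lambda t: t[0], reverse=True)]
    parents = ranked[:elite]
    children = []
    while len(children) < len(population):
        p1, p2 = random.sample(parents, 2)
        children.append(mutate(crossover(p1, p2), n_clusters))
    return children

random.seed(0)
n_regions, n_clusters = 500, 12
population = [[random.randrange(1, n_clusters + 1) for _ in range(n_regions)]
              for _ in range(50)]
fitnesses = [random.random() for _ in range(50)]   # placeholder fitness values
new_population = next_generation(population, fitnesses, n_clusters)
```

In the full application, the placeholder fitness values would come from the OBJ objective function, and the loop would repeat until the best chromosome's fitness stops improving.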