
DNA computing

DNA - Deoxyribonucleic acid (DNA) is a nucleic acid molecule that contains the genetic instructions used in the development and functioning of all known living organisms. The main role of DNA is the long-term storage of information, and it is often compared to a set of blueprints.

Computing - The term computing is synonymous with counting and calculating. Originally, people who performed these functions were known as computers. Today the term refers to the science and technology of computation and the manipulation of symbols.

DNA computing is a form of computing that uses DNA, biochemistry and molecular biology instead of traditional silicon-based computer technologies. DNA computing, or, more generally, molecular computing, is an exciting and fast-developing interdisciplinary area.

The founder of DNA computing, one of the most revolutionary disciplines of present-day computer science, is the American mathematician Leonard Adleman. In 1994, he demonstrated how the HPP (Hamiltonian Path Problem) for a given directed graph can be solved with the aid of DNA strands.

Hamiltonian Path Problem (HPP)

The problem is defined as follows: let G be an oriented graph with a designated initial node Vin and a final node Vout. A path from the node Vin to the node Vout is called a Hamiltonian path if and only if it includes every node of the graph G exactly once. In general, the Hamiltonian Path Problem is the decision problem of whether or not a given oriented graph contains a Hamiltonian path.
The Hamiltonian Path Problem is NP-complete, which means that an efficient solution, one achievable in polynomial time, probably does not exist, and all known general solutions lead to a massive search of the state space. The number of steps needed to solve the problem grows exponentially with the size of the graph.

Oriented graph G

For instance, let the initial node be Vin = 0 and the final node be Vout = 6 in the graph (Fig. 2.1). The Hamiltonian path is formed by the edges (0,1), (1,2), (2,3), (3,4), (4,5) and (5,6).
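
Because the full edge set of Fig. 2.1 is not reproduced here, the short sketch below uses an assumed edge list (the Hamiltonian path listed above plus a few invented extra edges) to show the brute-force search a conventional computer has to perform:

```python
from itertools import permutations

# Illustrative directed graph: the Hamiltonian path edges listed above plus a
# few extra edges invented for this example (the full edge set of Fig. 2.1 is
# not reproduced in the text).
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
         (0, 3), (3, 1), (2, 5), (4, 2)}          # last four edges: assumed
NODES = range(7)
V_IN, V_OUT = 0, 6

def hamiltonian_path(edges, nodes, v_in, v_out):
    """Brute force: try every ordering of the intermediate nodes."""
    middle = [v for v in nodes if v not in (v_in, v_out)]
    for perm in permutations(middle):             # (n - 2)! candidate orders
        path = [v_in, *perm, v_out]
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            return path
    return None

print(hamiltonian_path(EDGES, NODES, V_IN, V_OUT))   # [0, 1, 2, 3, 4, 5, 6]
```

The (n - 2)! candidate orderings are exactly the exhaustive state-space search described above, which is what makes the problem attractive for a massively parallel medium such as DNA.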

The HPP is often described as a salesman problem. The salesman leaves his town (the initial node) and must visit every other town (node) exactly once. However, he may move only along designated roads (the edges of the graph). The destination of his journey is a town (the final node) fixed in advance. The problem is to find out whether the salesman can visit all the towns and, if so, along which roads he has to travel.
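
Adleman performed this search chemically rather than in software. Purely as an illustration of the generate-and-filter logic behind his experiment (and not of the laboratory protocol itself), the sketch below generates many random paths, standing in for randomly ligated DNA strands, and then applies the same kind of filters: keep strands that start at Vin and end at Vout, have the right length, and visit every node.

```python
import random

# In-silico sketch of the generate-and-filter idea only; real DNA strands are
# replaced by Python lists of vertex numbers, and the edge set is the same
# assumed one as in the previous sketch.
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
         (0, 3), (3, 1), (2, 5), (4, 2)}
N, V_IN, V_OUT = 7, 0, 6

def random_path(max_len=10):
    """Mimic random ligation: start anywhere and follow random outgoing edges."""
    v = random.randrange(N)
    path = [v]
    for _ in range(max_len):
        nxt = [b for (a, b) in EDGES if a == v]
        if not nxt:
            break
        v = random.choice(nxt)
        path.append(v)
    return path

# Step 1: generate a huge pool of random paths (the chemistry does this in parallel).
pool = [random_path() for _ in range(100_000)]
# Step 2: keep only paths that start at V_IN and end at V_OUT.
pool = [p for p in pool if p[0] == V_IN and p[-1] == V_OUT]
# Step 3: keep only paths of exactly N vertices.
pool = [p for p in pool if len(p) == N]
# Step 4: keep only paths that visit every vertex (hence each exactly once).
survivors = [p for p in pool if set(p) == set(range(N))]

print(survivors[0] if survivors else "no Hamiltonian path found in this sample")
```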

Example of a DNA computer

MAYA-II (Molecular Array of YES and AND logic gates) is a DNA computer developed by scientists at Columbia University and the University of New Mexico. Instead of the usual silicon-based circuits, this chip uses DNA strands to form its circuit. Although the speed of such DNA-circuited computer chips is not expected to rival that of silicon-based ones, they could be of use in blood samples and inside the body, and might take part in single-cell signaling.
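
The gate layout of MAYA-II is not described here, so the following is only a toy Boolean model, with an invented circuit, of how YES and AND gates can be combined; the underlying deoxyribozyme chemistry is not simulated.

```python
# Toy Boolean model of YES and AND gates; the wiring below is invented and the
# chemistry of the real MAYA-II gates is not simulated.

def YES(x: bool) -> bool:
    """A YES gate fires whenever its single input is present."""
    return x

def AND(x: bool, y: bool) -> bool:
    """An AND gate fires only when both inputs are present."""
    return x and y

def circuit(a: bool, b: bool, c: bool) -> bool:
    """Hypothetical three-input circuit: output = YES(a) AND (b AND c)."""
    return AND(YES(a), AND(b, c))

for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            print(a, b, c, "->", circuit(a, b, c))
```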

Applications of Neural Networks

Neural networks are being used:

in investment analysis:
to attempt to predict the movement of stocks, currencies, etc., from previous data. Here they are replacing earlier, simpler linear models.
in signature analysis:
as a mechanism for comparing signatures made (e.g. in a bank) with those stored. This is one of the first large-scale applications of neural networks in the USA, and is also one of the first to use a neural network chip.
in process control:
there are clearly applications to be made here: most processes cannot be described by an exact computable algorithm. Newcastle University Chemical Engineering Department is working with industrial partners (such as Zeneca and BP) in this area.
in monitoring:
networks have been used to monitor

• the state of aircraft engines. By monitoring vibration levels and sound, early warning of engine problems can be given.
• British Rail have also been testing a similar application monitoring diesel engines.

in marketing:
networks have been used to improve marketing mailshots. One technique is to run a test mailshot and look at the pattern of returns from it. The idea is to find a predictive mapping from the data known about the clients to how they have responded. This mapping is then used to direct further mailshots, as sketched below.
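
A minimal sketch of that idea follows. The client attributes and all numbers are invented, and a simple nearest-mean rule stands in for whatever model a real campaign would use.

```python
# Minimal sketch of the test-mailshot idea. The client attributes (age, past
# purchases, months since last order) and all numbers are invented.

def mean(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Test mailshot: client attributes -> did the client respond (1) or not (0)?
test_clients = [([34, 5, 2], 1), ([58, 1, 14], 0), ([29, 7, 1], 1),
                ([61, 0, 20], 0), ([41, 3, 6], 1), ([52, 2, 18], 0)]

responders     = mean([x for x, y in test_clients if y == 1])
non_responders = mean([x for x, y in test_clients if y == 0])

# Score the rest of the client base: clients closer to the responder profile
# come first (in practice the attributes would also be rescaled).
remaining = [[36, 4, 3], [60, 1, 16], [45, 2, 9]]
ranked = sorted(remaining,
                key=lambda c: dist2(c, responders) - dist2(c, non_responders))
print(ranked)   # clients most worth mailing first
```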

Where are neural networks going?


A great deal of research is going on in neural networks
worldwide.

This ranges from basic research into new and more efficient learning algorithms, to networks which can respond to temporally varying patterns (both ongoing at Stirling), to techniques for implementing neural networks directly in silicon. One chip is already commercially available, but it does not include adaptation. Edinburgh University have implemented a neural network chip and are working on the learning problem.

Production of a learning chip would allow the application of this technology to a whole range of problems where the price of a PC and software cannot be justified.

There is particular interest in sensory and sensing applications: nets which learn to interpret real-world sensors and learn about their environment.

New Application Areas:

Pen PCs
PCs where one can write on a tablet, and the writing will be recognised and translated into (ASCII) text.
Speech and Vision recognition systems
Not new, but Neural Networks are becoming increasingly a part of such systems. They are used as a system component, in conjunction with traditional computers.

By - GIRISH MAHADEVAN
24SCS131, CSE A

Pattern recognition

Machine learning

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets.
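
As a deliberately tiny illustration of inductive learning, the sketch below induces a single threshold rule from labelled examples; the feature values and labels are invented.

```python
# Induce a one-feature threshold rule ("decision stump") from labelled data.
# The feature values and labels are invented; real inductive learners work on
# far larger data sets and far richer rule languages.

samples = [(1.2, 0), (2.3, 0), (3.1, 0), (4.8, 1), (5.5, 1), (6.0, 1)]

def best_threshold(samples):
    """Try every midpoint between sorted feature values; keep the most accurate rule."""
    xs = sorted(x for x, _ in samples)
    best = None
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        acc = sum((x > t) == bool(y) for x, y in samples) / len(samples)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best

threshold, accuracy = best_threshold(samples)
print(f"rule: predict class 1 when x > {threshold} (training accuracy {accuracy:.0%})")
```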

The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related not only to data mining and statistics but also to theoretical computer science.

Machine learning has a wide spectrum of applications, including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion.

Pattern recognition is a sub-topic of machine learning. It can be defined as

"the act of taking in raw data and taking an action based on the category of the data".

Most research in pattern recognition is about methods for supervised learning and unsupervised learning.

Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. This is in contrast to pattern matching, where the pattern is rigidly specified.

A complete pattern recognition system consists of a sensor that gathers the observations to be classified or described; a feature extraction mechanism that computes numeric or symbolic information from the observations; and a classification or description scheme that does the actual job of classifying or describing observations, relying on the extracted features.
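
Those three stages can be mirrored in a very small sketch. The "sensor" here is just a stored signal and the classifier a nearest-mean rule; the signal, features and class names are all invented for brevity.

```python
# Minimal pipeline: sensor -> feature extraction -> classification.
# The stored signal, the features and the two classes are all invented.

def sense():
    """Stand-in for a sensor: return one raw observation (a short signal)."""
    return [0.1, 0.9, 1.1, 0.8, 0.2]

def extract_features(signal):
    """Compute numeric features from the raw observation."""
    mean = sum(signal) / len(signal)
    energy = sum(x * x for x in signal)
    return (mean, energy)

# Feature vectors that have already been labelled (i.e. a training set).
training = {"pulse":   [(0.6, 2.4), (0.7, 2.9)],
            "silence": [(0.05, 0.02), (0.1, 0.06)]}

def classify(features):
    """Nearest-mean rule over the labelled training features."""
    def centroid(vs):
        return tuple(sum(c) / len(vs) for c in zip(*vs))
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda label: dist2(features, centroid(training[label])))

print(classify(extract_features(sense())))   # -> pulse
```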

The classification or description scheme is usually based on the availability of a set of patterns that have already been classified or described. This set of patterns is termed the training set, and the resulting learning strategy is characterized as supervised learning. Learning can also be unsupervised, in the sense that the system is not given an a priori labeling of patterns; instead, it establishes the classes itself based on the statistical regularities of the patterns.
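
For the unsupervised case, a short clustering sketch shows how classes can emerge from the data alone; the points and the choice of two clusters are assumptions made for the example.

```python
# Unsupervised sketch: group unlabelled points into two clusters (k-means).
# The points and the choice of k = 2 are assumptions made for the example.

points = [(1.0, 1.1), (0.9, 1.3), (1.2, 0.8),     # one apparent group
          (5.0, 5.2), (5.3, 4.9), (4.8, 5.1)]     # another apparent group

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(group):
    return tuple(sum(c) / len(group) for c in zip(*group))

# Start from two arbitrary points as centres, then alternate assign and update.
centres = [points[0], points[-1]]
for _ in range(10):
    clusters = [[], []]
    for p in points:
        clusters[min((0, 1), key=lambda i: dist2(p, centres[i]))].append(p)
    centres = [centroid(c) for c in clusters]

print(centres)   # two centres, one per discovered class
```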

The classification or description scheme usually uses one of the following approaches: statistical (or decision-theoretic) or syntactic (or structural). Statistical pattern recognition is based on statistical characterisations of patterns, assuming that the patterns are generated by a probabilistic system. Syntactic (or structural) pattern recognition is based on the structural interrelationships of features. A wide range of algorithms can be applied for pattern recognition, from very simple Bayesian classifiers to much more powerful neural networks.
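
A "very simple Bayesian classifier" of the kind mentioned above can be written in a few lines. The sketch below models a single feature per class with a Gaussian whose priors, means and variances are assumed to have been estimated beforehand; all the numbers are invented.

```python
import math

# One-feature Gaussian Bayes classifier. The class names, priors, means and
# variances are invented and assumed to have been estimated from data already.
classes = {
    "small": (0.5, 2.0, 0.25),   # (prior, mean, variance)
    "large": (0.5, 6.0, 1.00),
}

def likelihood(x, mean, var):
    """Gaussian density of x under the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x):
    """Pick the class with the largest prior * likelihood (posterior up to scale)."""
    return max(classes, key=lambda c: classes[c][0] * likelihood(x, *classes[c][1:]))

print(classify(2.3), classify(5.1))   # -> small large
```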

Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as

• classifying galaxies by shape
• identifying fingerprints
• highlighting potential tumours on a mammogram
• handwriting recognition

Human expertise in these and many similar problems is being supplemented by computer-based procedures, especially neural networks. Pattern recognition is extremely widely used, often under the names of 'classification', 'diagnosis' or 'learning from examples', and the methods are often very successful.

Pattern recognition is studied in depth with methods drawn from engineering, statistics, machine learning and neural networks, covering all the modern branches of the subject together with case studies of applications. The relevant parts of statistical decision theory and computational learning theory are included, as well as methods such as feed-forward neural networks (multi-layer perceptrons), radial basis functions, learning vector quantization and Kohonen's self-organizing maps.

Pattern Recognition Research Area

Area Description
Pattern recognition is the research area that studies the operation and design of systems that recognize patterns in data. It encompasses subdisciplines such as discriminant analysis, feature extraction, error estimation and cluster analysis (together sometimes called statistical pattern recognition), as well as grammatical inference and parsing (sometimes called syntactical pattern recognition). Important application areas are image analysis, character recognition, speech analysis, man and machine diagnostics, person identification and industrial inspection.
Related Areas
In the following areas, closely related systems are studied or similar tools are developed.

• Artificial Intelligence (expert systems and machine learning)
• Neural Networks
• Vision
• Cognitive Sciences and Biological Perception
• Mathematical Statistics (hypothesis testing and
parameter estimation)
• Nonlinear Optimization
• Exploratory Data Analysis

By - GIRISH MAHADEVAN
24SCS131, CSE A
