Biologically Inspired Computing: Introduction

This is lecture one of `Biologically Inspired Computing’.

Contents:
Course structure, motivation for BIC, BIC vs classical computing, overview of BIC techniques
General and up-to-date information about this module:
Go to my home page: www.macs.hw.ac.uk/~dwcorne/
Find my Teaching Materials page, and go on from there.
Course Delivery

Week beginning | Monday 3:15 (EM306)         | Wednesday 11:15 (EM303)     | Thursday 4:15 (EM307)  | EVENTS
12th Jan       | DC overview of module       | DC EAs I                    | NT Neural Computation  |
19th Jan       | DC Evolutionary Computation | DC Evolutionary Computation | NT Neural Computation  | DC hands out coursework 1, worth 25% of module
26th Jan       | DC Swarm Intelligence       | DC Swarm Intelligence       | NT Neural Computation  |
2nd Feb        | DC Kohonen Networks         | DC Cellular Automata        | NT Neural Computation  | DC hands out coursework 2, worth 10% of module
9th Feb        | PF                          | PF                          | PF                     | (PF c/w TBA)
16th Feb       | PF                          | PF                          | PF / NT                | (PF c/w TBA)
23rd Feb       | PF                          | PF                          | PF                     | (PF c/w TBA)
2nd Mar        | PF                          |                             |                        | (PF c/w TBA)
9th Mar        | NT                          | PF                          | NT                     | Friday hand-in for coursework 2
16th Mar       | NT                          | PF                          | NT / PF                | Friday hand-in for coursework 1
23rd Mar       | NT                          | NT                          | NT                     | Revision Lecture
30th Mar       | DC Revision Lecture         | PF Revision Lecture         |                        |
Examinable materials (DC)
• All the slides (all online)
• A few additional papers and notes provided online

Exam + Coursework (whole module)
• Exam: 50%; DC c/w: 35%; PF c/w: 15%
• c/w 1: Programming/Expts assignment, 25%
• c/w 2: Question sheet, 10%
• DC = David Corne, will generally lecture about bio-inspired methods for optimisation, with a focus on evolutionary computation (aka genetic algorithms) – broadly, this is about how certain aspects of nature (evolution, swarm behaviour) lead to very effective optimisation and design methods.

• PF = Pier Frisco, will generally lecture about molecular computing – how computation is done within biological cells – and how that can be exploited, and how it inspires new ideas in computer science.

• NT = Nick Taylor, will generally lecture about neural computation – this is perhaps the most widely exploited bio-inspired technique, which underpins how we can build machines that learn from examples.
About the DC Coursework

Programming assignment: worth 25% of the module; released in week 2.

Quiz questions: there will be 10 questions, each worth 1 mark. The questions are at the end of my ppt lecture material, and sometimes at the end of additional (ppt) reading material. Hence they are released gradually.
About the Exam

Answer THREE questions from FOUR.
You might expect that DC/NT/PF contribute roughly 1/3 each to the exam.
Difference between BSc and MSc

The (DC) MSc coursework will be based on the BSc coursework, but a bit harder, with more to do.
This Lecture

Lecture 1:
Classical computation vs biological computation
Motivation for biologically inspired computation
Overview of some biologically inspired things

What to take from this lecture:
What `classical computing’ is, and what kinds of tasks it is naturally suited for.
What classical computing is not good at.
An appreciation of how computation and problem solving are manifest in biological systems.
An appreciation of the fact that many examples of computations done by biological systems are not yet matched by what we can do with computers.
An understanding of the motivation (consequent on the above) for studying how computation is done in nature.
A first basic knowledge of the main BIC methods currently in successful use.
Classical Computation vs Bio-Inspired Computation
• the fridge story
• How do you tell the difference between a dog and a cat?
• How do you tell the difference between a male and a female face?
• How do you design a perfect flying machine?
• How would we design the software for a robot that could make a cup of tea in your kitchen?
• What happens if you:
  – Cut off a salamander’s tail?
  – Cut off a section of a CPU?
Classical Computation vs Bio-Inspired Computation

Classical computing is good at:
Number-crunching
Thought-support (glorified pen-and-paper)
Rule-based reasoning
Constant repetition of well-defined actions

Classical computing is bad at:
Pattern recognition
Robustness to damage
Dealing with vague and incomplete information
Adapting and improving based on experience
Why don’t we have software that can do the following things well?
• Automatically locate a small outburst of violent behaviour in a football crowd
• Classify a plant species from a photograph of a leaf
• Make a cup of tea
Pattern Recognition and Optimisation

These two things tend to come up a lot when we think of what we would like to be able to do with software, but usually can’t do.
But these are things that seem to be done very well indeed in biology.
So it seems like a good idea to study how these things are done in biology – i.e. (usually) how computation is done by biological machines.
Basic notes on pattern recognition and optimisation

Pattern recognition is often called classification.

Formally, a classification problem is like this:
We have a set of things: S (e.g. images, videos, smells, vectors, …).
We have n possible classes, c1, c2, …, cn, and we know that everything in S should be labelled with precisely one of these classes.

In computational terms, the problem is:
Can we design a computational process that takes a thing s (from S) as its input, and always outputs the correct class label for s?
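To make this concrete: in code, a classifier is simply a function from things to class labels; the hard part is constructing it so that the label is usually correct. The following is a minimal illustrative sketch (not from the slides); the function name classify_leaf, the features and the threshold rule are all made-up placeholders.

```python
from typing import Callable, Sequence

# A classifier maps a thing s (here, a small feature vector) to one of n class labels.
Label = str
Classifier = Callable[[Sequence[float]], Label]

def classify_leaf(features: Sequence[float]) -> Label:
    """Hypothetical hand-written classifier: label a leaf from two measured features.

    features[0] = leaf length in cm, features[1] = width/length ratio.
    The threshold rule is invented purely for illustration.
    """
    length, ratio = features[0], features[1]
    if length > 10 and ratio < 0.3:
        return "willow"
    return "oak"

# Usage: apply the classifier to one element s of the set S.
leaf_classifier: Classifier = classify_leaf
print(leaf_classifier([12.0, 0.25]))  # -> "willow" under the made-up rule
```

The challenge the rest of the course addresses is that, for problems like those in the next slide, nobody can write such a rule by hand; the rule has to be learned or evolved.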
Classification: examples

   What S might be                                  What the classes might be
1  Images of peoples’ faces                         male, female
2  Smells (e.g. molecular spectra)                  illicit_drugs, ok or fresh meat, good meat, rotten meat
3  Utterances of “hello” (e.g. in wav files)        child, man, woman or authorised-person, unauthorised
4  Renditions of my signature                       genuine, fake
5  Images of artworks                               beautiful, good, reasonable, awful
6  Patient data – results of various blood tests    malignant, benign
7  Applications for loans                           good risk, medium risk, bad risk
8  Aircraft engine diagnostics                      safe, needs-maintenance, ground
9  Ground-penetrating radar images                  land-mine-here, safe
Some notes about those examples

The idea of these examples is to:
1. Remind you that pattern recognition is something you do easily, and all the time, and you (probably) do it much better than we can do with classical computation (e.g. 1, 2, 3, 5).
2. Remind (or inform) you that such complex pattern recognition problems are not yet done well by software (e.g. 1, 2, 3, 5).
3. Indicate that there are some very important problems that we would like to solve with software (9, 8, 6, 2, 7 are obvious, but of course we would like to do all nine and much more), which are classification problems, and note that these are just as hard as examples 1, 2, 3, 5.
4. So, hopefully we can learn how brains do 1, 2, 5, etc., so that we can build machines that find land mines, tell fake from genuine signatures, diagnose disease, and so on …
How brains seem to do pattern recognition – more in lectures 17–18

The business end of this is made of lots of these (neurons) joined in networks like this.
Much of our own “computations” are performed in/by this network.
The key idea in brain-inspired computing
The brain is a complex tangle of neurons, connected by synapses.

The key idea in brain-inspired computing
When neurons are active, they send signals to others.

The key idea in brain-inspired computing
A neuron with lots of `strong’ active inputs will become active.

The key idea in brain-inspired computing
And, when connected neurons are active at the same time, the link between them gets stronger.

The key idea in brain-inspired computing
So, suppose these neurons happen to be active when you see a fluffy animal with big eyes, small ears and a pointed face …
The key idea in brain-inspired computing
… and suppose your mother then says “Cat”, which excites this additional neuron.
The key idea in brain-inspired computing
Links will then strengthen between the active neurons.

So when you see a similar animal again, this neuron will probably automatically be activated, helping you classify it.

A slightly different group of neurons will respond to dogs, and sometimes both the “cat” and “dog” groups will be active, but one will be more active than the other …
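The strengthening rule described above is essentially Hebbian learning (“neurons that fire together wire together”). Below is a minimal illustrative Python sketch of that idea only – not the actual neural models NT will present – and the learning rate, neuron count and activity patterns are all made up.

```python
import numpy as np

# Minimal Hebbian-style sketch: links between neurons grow stronger when the
# neurons at both ends are active at the same time.
n_neurons = 6
weights = np.zeros((n_neurons, n_neurons))   # synaptic strengths
learning_rate = 0.1                          # made-up value

def hebbian_update(weights, activity, lr):
    """Strengthen links between co-active neurons (outer-product rule)."""
    return weights + lr * np.outer(activity, activity)

# Suppose neurons 0-3 fire when you see the fluffy animal,
# and neuron 5 fires when you hear the word "Cat".
activity = np.array([1, 1, 1, 1, 0, 1], dtype=float)
for _ in range(10):                          # repeated co-activation
    weights = hebbian_update(weights, activity, learning_rate)

# Later, seeing a similar animal activates neurons 0-3 only; the learned
# links then push activity into the "Cat" neuron (index 5).
seen = np.array([1, 1, 1, 0.5, 0, 0])
drive = weights @ seen
print("drive to 'Cat' neuron:", drive[5])
```

The point of the sketch is only to show the shape of the mechanism: no rules are written down; repeated exposure to examples changes the connection strengths, and those strengths then do the classifying.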
Notice this in particular

What happens if we damage a single neuron (remember, in reality there will be thousands involved in simple classification-style computations)?

Compare this with damaging a line of code.

In classical computing we provide rules; but biology seems to learn gradually from examples.
Basic Notes on Optimisation

We have 3 items as follows: (item 1: 20kg; item 2: 75kg; item 3: 60kg).
Suppose we want to find the subset of items with total weight closest to 100kg.

Which is the best? The eight possible subsets, written as bit strings (one bit per item):
000, 001, 010, 011, 100, 101, 110, 111

Well done, you just searched the space of possible subsets. You also found the optimal one. If the above set of subsets is called S, and the subsets themselves are s1, s2, s3, etc., you just optimised the function “closest_to_100kg(s)”; i.e. you found the s which minimises the function |weight(s) − 100|.
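To make that search concrete, here is a small illustrative Python sketch (not part of the original slides) that enumerates all eight bit strings and scores each subset by |weight(s) − 100|:

```python
from itertools import product

# Item weights in kg, as above: item 1 = 20, item 2 = 75, item 3 = 60.
weights = [20, 75, 60]

def total_weight(bits):
    """Total weight of the subset encoded by a bit string such as (1, 0, 1)."""
    return sum(w for w, b in zip(weights, bits) if b)

# Exhaustively search S: all 2^3 = 8 subsets.
best = min(product([0, 1], repeat=3), key=lambda s: abs(total_weight(s) - 100))
print(best, total_weight(best))  # (1, 1, 0): items 1 and 2, total 95kg, closest to 100kg
```

Exhaustive search is fine here because S has only 8 elements; the next slide explains why it stops being an option for realistic problems.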
Search and Optimisation

In general, optimisation means that you are trying to find the best solution you can (usually in a short time) to a given problem.

We always have a set S of all possible solutions: s1, s2, s3, …
S may be small (as just seen).
S may be very, very, very, very large (e.g. all possible timetables for a 500-exam/3-week diet) … in fact something like 10^30 is typical for real problems.
S may be infinitely large – e.g. all real numbers.
How Biology Optimises

We wish to design something – we want the best possible (or, at least, a very good) design.

The set S is the set of all possible designs. It is always much too large to search through one by one; nevertheless, we want to find good examples in S.

In nature, this problem seems to be solved wonderfully well, again and again and again, by evolution.

Nature has designed millions of extremely complex machines, each almost ideal for its task (assuming an environment that doesn’t go haywire), using evolution as the only mechanism.

Clearly, this is worth trying for solving problems in science and industry.
Quick overview of BIC techniques we will learn about

Evolutionary algorithms:
Use nature’s evolution mechanism to evolve solutions to all kinds of problems. E.g. to find a very aerodynamic wing design, we essentially simulate evolution of a population of wing designs. Good designs stay in the population and breed; poor designs die out. EAs are highly successful and come in many variants. There is also a lot to learn to understand how to apply them well to new problems. We will do quite a lot on EAs. EAs are all about optimisation; however, classification is also an optimisation problem, so EAs work there too … A minimal sketch of the basic EA loop follows below.
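To give a feel for the loop just described (population, selection of good designs, breeding, poor designs dying out), here is a minimal illustrative Python sketch of a generic evolutionary algorithm. The bit-string encoding, population size, mutation rate and toy fitness function are all assumptions chosen for illustration, not the course’s reference implementation.

```python
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(genome):
    """Toy fitness: count of 1-bits (stands in for e.g. a wing-design score)."""
    return sum(genome)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """One-point crossover: prefix from one parent, suffix from the other."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

# Initial random population of candidate designs.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the better half of the population ("good designs stay").
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Breeding: refill the population with mutated offspring of survivors.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness found:", max(fitness(g) for g in population))
```

Swapping in a real fitness function (e.g. a simulated aerodynamic score) and a suitable encoding is, broadly, what applying an EA to a new problem involves.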
A genetically optimized three-dimensional truss with improved frequency response.

An EA-optimized concert-hall design, which improves on human designs in terms of sound quality averaged over all listening points.
Swarm Intelligence

How do swarms of birds, fish, etc. manage to move so well as a unit? How do ants manage to find the best sources of food in their environment?
Answers to these questions have led to some very powerful new optimisation methods that are different from EAs. These include ant colony optimisation and particle swarm optimisation.
Also, only by studying how real swarms work are we able to simulate realistic swarming behaviour (e.g. as done in Jurassic Park, Finding Nemo, etc.).
Kohonen Networks

NT will teach you about neural computation, which is largely about how we can teach machines to do classification and pattern recognition – but there is a more fundamental type of neural-inspired method, which relates to making sense of the world around us without being trained or taught: this is what a Kohonen network does.
Cellular Automata

Cellular Automata (CA) are very simple computational systems that produce very complex behaviour, including `lifelike’ reproduction. CAs, as we will see, are also very useful for explaining/simulating biological pattern generation and other behaviours. A tiny illustrative example follows below.
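As an illustration of just how simple the rules can be, here is a short Python sketch of a one-dimensional elementary CA. The choice of “Rule 110”, the grid width and the number of steps are mine for illustration; the course lectures may use different examples.

```python
# Elementary 1-D cellular automaton: each cell looks only at itself and its two
# neighbours, yet the global pattern can become very complex (Rule 110 shown here).
RULE = 110
# Map each 3-cell neighbourhood (left, centre, right) to the cell's next state.
rule_table = {tuple(int(b) for b in f"{i:03b}"): (RULE >> i) & 1 for i in range(8)}

width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1          # start with a single live cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    # Update every cell from its local neighbourhood (wrapping at the edges).
    cells = [rule_table[(cells[(i - 1) % width], cells[i], cells[(i + 1) % width])]
             for i in range(width)]
```

Running it prints a triangle of increasingly intricate structure from a one-line rule table, which is the basic point of the CA slides: local simplicity, global complexity.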
Neural Computing

Pattern recognition using neural networks is the most widely used form of BIC in industry and science. We will learn about the most common and successful types of neural network.

This is Stanley, winner of the DARPA Grand Challenge – a great example of bio-inspired computing winning over all other entries, which were largely `classical’.
How Biological Computers Compute

With these … not with these … PF will tell you more.


Week 1 Self-Study & Quiz

Before we get into looking at Evolutionary Algorithms (as well as other methods that do optimisation), we need to understand certain things about optimisation, such as:

• When we need clever methods to do it, and when we don’t
• What alternatives there are to EAs – no point designing an EA for an optimisation problem if it can be solved far more simply

So some of the additional material and associated quiz questions this week are about optimisation problems in general, and some key pure computer-science things you need to know.

The next lecture will then introduce evolutionary algorithms.
