
Even the most enthusiastic home computer owners have little idea of the link their machines represent in the historical chain of computer technology. It is a chain that runs from the ancient abacus and Charles Babbage's Analytical Engine of the nineteenth century, through the Apples and Commodores of the present, all the way to the awe-inspiring fifth-generation computers of the future.
Consider this: the "home computers" of the 1990s will surely have
processing capabilities equivalent to those of the Cray-2, today's
most powerful computer.
What kind of computing power are we talking about? As far as
sheer processing speed is concerned, the present situation looks like
this: A typical home computer, with microprocessor switches turning
on and off at two to four million cycles per second and running a
program written in a fast computer language like C or FORTH, can
perform a few thousand arithmetic operations per second. The Cray-1, which costs about $10 million, performs between 160 and 200
million arithmetic operations per second. The Cray-2 handles over a
billion arithmetic operations per second. To put it another way, a
program that takes twelve minutes to run on an Apple will be
executed on a Cray-1 in about three one-hundredths (0.03) of a
second, and the same program will run on a Cray-2 in about six-thousandths (0.006) of a second.
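As a rough sanity check on those figures, here is a minimal sketch in C (purely illustrative; the run times are simply the ones quoted above) that turns the quoted times into speedup ratios:

    #include <stdio.h>

    int main(void) {
        /* Run times quoted in the text (illustrative figures only). */
        double apple_seconds = 12.0 * 60.0;  /* 12 minutes on a home micro */
        double cray1_seconds = 0.03;         /* same program on a Cray-1   */
        double cray2_seconds = 0.006;        /* same program on a Cray-2   */

        /* Speedup = old run time divided by new run time. */
        printf("Cray-1 speedup: %.0fx\n", apple_seconds / cray1_seconds);  /* 24000x  */
        printf("Cray-2 speedup: %.0fx\n", apple_seconds / cray2_seconds);  /* 120000x */
        return 0;
    }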
Indeed, microcomputers have some catching up to do. But wait.
The Apple Lisa and Macintosh, the Sage II and the Fortune 32:16 are
microcomputers that all have a processor known as the Motorola
68000, which boosts the computational power of these $3,000 to
$10,000 machines up to that of a superminicomputer costing
hundreds of thousands of dollars. The next generation of
microprocessors will be even faster.
Computer owners may well wonder what they're going to do with
all this processing power. The answer: artificial intelligence. A
sophisticated AI program would eliminate the need for users to write
programs, since they could communicate their orders to the
computer via ordinary English. Such a program, however, must do a
large number of symbolic calculations on a huge amount of data or "real-world knowledge." AI is the ultimate programming challenge, both for the programmer in terms of design and for the computer in terms of execution time.

Structured Intelligence
This brings us to the Japanese Fifth Generation computer project.
The Japanese have been working feverishly on a billion-dollar
project, with a target date of 1989, to design and build a computer
that is not only a hundred times faster than a Cray but contains AI
software as well. This software would be capable of simulating
experts in fields like medicine or geology, playing games like chess
or Go at a grandmaster level, analyzing documents for factual errors
as well as grammatical and spelling errors, and translating
documents from one language into another. It all sounds great, but
the Japanese are making a few blunders along the way. To
understand how, we should take a look at programming languages
in general and their relationship to AI.
Let's start by saying that computers are "universal Turing
machines," which is a way of saying that computers are universal
calculators. Any procedure (or algorithm, as it's called) that can be
conceived of can be calculated or performed by a computer. If you
believe that the human mind arises from the physical workings of
the brain, then, since a computer can theoretically simulate
anything in the physical world, you have automatically declared that
a computer can simulate human thinking processes. The act of
writing a program in the computer's own language (little 1s and 0s)
is quite time-consuming, so high-level languages were developed
that enable the programmer to instruct the machine by typing in
commands like PRINT 2 + 2 instead of 10011101, 00000010,
01010011, 00000010 or whatever. Although there are many
different programming languages designed with special attributes
for special jobs (FORTRAN and APL for science/engineering, COBOL
for business, BASIC for beginning programmers), any algorithm can
be written in any language. The list processing language (LISP)
developed by McCarthy in 1958 is considered by everyone to be the language for AI research, and yet, if the need arose, we could
look at a LISP program, figure out what the algorithm or procedure
is, then rewrite the program in a different language such as BASIC or
even COBOL. It would be a programmer's nightmare, of course, and
the new program would be much larger in a more "inefficient"
language, but it could be done.
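To give the flavor of such a translation: summing a list is roughly a one-liner in LISP (something like (apply '+ list)), while a language with no built-in list processing must spell everything out. The sketch below, in C and purely illustrative, builds a small linked list by hand and sums it, which is exactly the kind of clumsy but workable rewrite the paragraph above has in mind:

    #include <stdio.h>
    #include <stdlib.h>

    /* A hand-rolled "cons cell": the building block LISP gives you for free. */
    struct cell { int value; struct cell *next; };

    static struct cell *cons(int value, struct cell *next) {
        struct cell *c = malloc(sizeof *c);
        c->value = value;
        c->next  = next;
        return c;
    }

    /* Sum every element of the list -- the work of a LISP one-liner. */
    static int sum(struct cell *list) {
        int total = 0;
        for (; list != NULL; list = list->next)
            total += list->value;
        return total;
    }

    int main(void) {
        struct cell *list = cons(1, cons(2, cons(3, NULL)));  /* '(1 2 3) */
        printf("%d\n", sum(list));                            /* prints 6 */
        return 0;
    }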

Program Limits
We do not even need the full capabilities of a computer language
to express any algorithm in a program. This idea had its origin in the
Structure Theorem first presented in a classic mathematical paper
by C. Bohm and G. Jacopini with the ominous title "Flow Diagrams, Turing Machines, and Languages with Only Two Formation Rules."
This paper introduced not a new computer language, but a style of
programming called "structured programming" or, more technically,
"programming by stepwise refinement," that could be used with any
program.
To put it simply, Bohm and Jacopini discovered that all computer
languages, large and small, come equipped with the following basic
features:
1) Sequences of two or more operations (add A to B, then divide it
by C, then print the result).
2) Decisions or choices (IF A THEN B ELSE C).
3) Repetitions of an operation until a certain condition is true. One
of these is the Do-While loop (keep adding 1 to X While X is less than
10). The other is the Do-Until loop (keep adding 1 to X Until X equals
10).
The Structure Theorem mathematically proves that the
expression of any algorithm in any language (i.e., any possible
program, including one simulating human intelligence) can be
written using only combinations of the three basic programming
rules mentioned above!
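As a concrete illustration, here is a minimal sketch in C (the variables and the toy task are invented for the example) that uses nothing but those three rules, a sequence, a decision, and a repetition, yet expresses a complete little algorithm:

    #include <stdio.h>

    int main(void) {
        int x = 0;
        int total = 0;

        /* 3) Repetition: keep going While X is less than 10. */
        while (x < 10) {
            /* 1) Sequence: one operation after another. */
            x = x + 1;
            total = total + x;

            /* 2) Decision: IF A THEN B ELSE C. */
            if (x % 2 == 0)
                printf("%d is even, running total %d\n", x, total);
            else
                printf("%d is odd,  running total %d\n", x, total);
        }
        return 0;
    }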
At first glance, it looks as if tremendous restrictions are placed on
the programmer, and yet by employing this "structured
programming" method one actually shortens the time it takes to

write, "debug" or modify any given program. This is because


structured programming forces us to break a single complicated
problem into several simple subproblems, then breaking these in
turn into several more simple subsubproblems, until the
programmer has reduced the original, highly complex problem into a
large number of interlocking, very simple problems that are easily
coded as a program. This technique is known as reductionism,
decomposition or top-down design.
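Here is what that decomposition looks like in practice, sketched in C with an invented toy problem (computing and reporting a class's average grade); each function solves one of the simple subproblems, and the original problem shrinks to a short sequence of solved pieces:

    #include <stdio.h>

    /* Sub-subproblem: add up the grades. */
    static int sum_grades(const int *grades, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += grades[i];
        return total;
    }

    /* Subproblem: compute the average. */
    static double average(const int *grades, int n) {
        return (double)sum_grades(grades, n) / n;
    }

    /* Subproblem: report the result. */
    static void report(double avg) {
        printf("Class average: %.1f\n", avg);
    }

    /* The original problem, now just a sequence of solved pieces. */
    int main(void) {
        int grades[] = {72, 85, 90, 64, 79};
        report(average(grades, 5));
        return 0;
    }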

The Smart Set


What everyone seems to have forgotten is that artificial intelligence
at the "grass roots" level is just another programming problem-like
listing recipes or keeping track of one's stamp collection, only
several orders of magnitude greater. Of course, no one in the AI field
would dare suggest this in public. It all sounds too easy, as if writing
the ultimate, ultra-intelligent program were more a matter of
tenacity than of divinely inspired programming wizardry. And yet no
strange languages or high priests of programming are necessary.
The only special requirement is one of hardware, namely, a
computer with immense storage capacity and a processor that can
perform billions of calculations per second.
One final programming thought: the length of a program does not
depend on the complexity of the algorithm to be executed; rather, it
depends on the size of the computer language's vocabulary. Some
languages (e.g., LISP and FORTH) have an advantage over others in
that they are "extensible," enabling the programmer to add new
words with corresponding new functions. A whole program could be
as short as an ordinary English sentence if the language has been
immensely extended (presumably with structured techniques) to
include the English vocabulary, rules of grammar and semantics.
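C is not extensible in the FORTH or LISP sense, but the flavor of the idea can be sketched even there. In the toy example below (all the "words" are invented for the illustration), once the vocabulary has been defined, the program proper in main() shrinks to something close to an ordinary sentence:

    #include <stdio.h>

    /* Each definition adds a new "word" to the vocabulary. */
    static void greet(const char *name)   { printf("Hello, %s.\n", name); }
    static void thank(const char *name)   { printf("Thank you, %s.\n", name); }
    static void dismiss(const char *name) { printf("Goodbye, %s.\n", name); }

    /* With the vocabulary extended, the program reads almost like English. */
    int main(void) {
        greet("reader"); thank("reader"); dismiss("reader");
        return 0;
    }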
So, although any algorithm can be represented in any language,
extensible languages are better to work with and produce much
shorter, easier-to-read programs than do BASIC and COBOL, for
example. The Japanese, however, have ignored LISP and chosen a
rather strange language called PROLOG (PROgramming in LOGic) to run on their Fifth Generation computer. PROLOG forces the programmer to use pure logic, which means the computer must take
the facts it knows about and calculate all their possible logical
relationships, or inferences. If the computer knows a lot of facts, this
process can lead to an unfortunate situation called a "combinatorial
explosion" in which calculations take almost forever to complete. So,
unless they radically change the language they are using, the
Japanese may find that their much touted monster computer won't
work at all. In fact, a number of AI problems are so tremendously
complex (among them, playing a perfect game of chess) that it
would take any computer many centuries to solve them.
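The scale of the problem is easy to demonstrate. If a naive logic engine were to consider every possible ordering of n facts while searching for inferences, the search space would grow as n factorial; the short C sketch below (invented numbers, purely illustrative) shows how quickly that gets out of hand:

    #include <stdio.h>

    int main(void) {
        /* The number of orderings of n facts grows as n factorial. */
        double orderings = 1.0;
        for (int n = 1; n <= 20; n++) {
            orderings *= n;
            printf("%2d facts -> %.3e possible orderings\n", n, orderings);
        }
        /* At just 20 facts there are already about 2.4e18 orderings --  */
        /* a "combinatorial explosion" no processor speed can outrun.   */
        return 0;
    }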

Multi-Mentation
One answer to the problem of complex programming is to build the
computer with more than one processor, then break up the program
into pieces and assign each piece to a separate processor. With
many processors working on a problem simultaneously, or "in
parallel," the program in theory executes much faster.
In fact, fifty research projects in the United States are working on
"parallel processing" or, as it is also called, "distributed array
processing." These include:
Tom McWilliams' 16-processor S-1 computer at the Lawrence
Livermore National Laboratory, running at about two billion
arithmetic operations per second.
The Denelcor Company's mysterious HEP-2 computer, to be ready
in 1986, capable of twelve billion operations per second.
Salvatore Stolfo's 1,023-processor machine at Columbia University.
David Elliot Shaw's computer, also at Columbia, being developed
for the Defense Advanced Research Projects Agency (DARPA) and
projected to have 256,000 processors by 1987, a million by 1990.
Another solution is that it might be possible one day to build a
sixth-generation computer with a processor whose signals travel
faster than the speed of light. This would improve the processing
speed considerably, to say the least! Faster-than-light signals would
seem an impossibility, but there are three ways we might achieve them: tachyons, the Einstein-Podolsky-Rosen (EPR) effect, and the "advanced potentials" solution to the moving-charge equations derived from Maxwell's electromagnetic theory. Whew! I'd love to try
to explain it further, but I'd need another 8,600 words, the length of
my original paper on the subject. Besides, it's not time yet to run
over to ComputerLand and place your order: sixth-generation
computers probably won't appear for decades.
In contrast to developing advanced hardware, creating the
artificial intelligence software to go with it seems to be a wide-open
field. Who will be the first to write the ultimate AI program? With the
appearance of cheap supermicrocomputers over the next ten years,
it could just as well be a fifteen-year-old in Montana as a thousand
software engineers working together in Japan. Who knows? Perhaps
you'll give it a try yourself ...

In this period, computer technology advanced further. Parallel processing, which had until then been limited to vector processing and pipelining, now meant that hundreds of processors could all work on various parts of a single program. Systems like the Sequent Balance 8000 were introduced, which connected up to twenty processors to a single shared memory module.

This machine was roughly as capable as the DEC VAX-780 in the sense that it ran a general-purpose UNIX system and each processor worked on a different user's job. The Intel iPSC-I, or "Hypercube" as it was called, on the other hand, connected each processor to its own memory and used a network interface to connect the processors. With this distributed design, shared memory was no longer a bottleneck, and the largest iPSC-I was built with 128 processors. Towards the end of the fifth generation, another form of parallel processing was introduced: data-parallel, or SIMD, machines, in which all the processors operate under the instruction of a single control unit.
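Returning to the distributed-memory design mentioned above: in a machine of the iPSC sort, the pieces of a program cooperate by passing messages over the network rather than by reading shared memory. A minimal sketch of that style in C using MPI (a later, standard message-passing library, not the iPSC's actual interface; the numbers are invented):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which processor am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are there?   */

        /* Each processor computes its own piece using only local memory. */
        double local = (double)rank + 1.0;

        /* Results are combined by passing messages, not by sharing RAM. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result from %d processors: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }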

In this generation, semiconductor memories became the standard and their development was pursued vigorously. Other developments were the increasing use of single-user workstations and the widespread use of computer networks. Both wide area networks (WANs) and local area networks (LANs) developed at an incredible pace and led to a distributed computing environment. RISC technology, i.e., a particular technique for the internal organization of the CPU, together with the plunging cost of RAM, ushered in huge gains in the computational power of comparatively cheap servers and workstations. This generation also witnessed a sharp increase in both the quantitative and qualitative aspects of scientific visualization.
