Structured Intelligence
This brings us to the Japanese Fifth Generation computer project.
The Japanese have been working feverishly on a billion-dollar
project, with a target date of 1989, to design and build a computer
that is not only a hundred times faster than a Cray but contains AI
software as well. This software would be capable of simulating
experts in fields like medicine or geology, playing games like chess
or Go at a grandmaster level, analyzing documents for factual errors
as well as grammatical and spelling errors, and translating
documents from one language into another. It all sounds great, but
the Japanese are making a few blunders along the way. To
understand how, we should take a look at programming languages
in general and their relationship to AI.
Let's start by noting that computers are "universal Turing
machines," which is a way of saying that computers are universal
calculators. Any procedure (or algorithm, as it's called) that can be
conceived of can be calculated or performed by a computer. If you
believe that the human mind arises from the physical workings of
the brain, then, since a computer can theoretically simulate
anything in the physical world, you have automatically declared that
a computer can simulate human thinking processes. The act of
writing a program in the computer's own language (little 1s and 0s)
is quite time-consuming, so high-level languages were developed
that enable the programmer to instruct the machine by typing in
commands like PRINT 2 + 2 instead of 10011101, 00000010,
01010011, 00000010 or whatever. Although there are many
different programming languages designed with special attributes
for special jobs (FORTRAN and APL for science/engineering, COBOL
for business, BASIC for beginning programmers), any algorithm can
be written in any language. The list-processing language LISP,
developed by John McCarthy in 1958, is widely considered the
language for AI research, and yet, if the need arose, we could
look at a LISP program, figure out what the algorithm or procedure
is, then rewrite the program in a different language such as BASIC or
even COBOL. It would be a programmer's nightmare, of course, and
the new program would be much larger in a more "inefficient"
language, but it could be done.
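To make the translation idea concrete, here is a small sketch in C (a
language of our own choosing; BASIC or COBOL would serve as well) of
a factorial routine of the sort a LISP programmer writes recursively.
The routine and its names are ours, invented only for illustration.

    /* A hypothetical factorial routine, sketched here in C.
       In LISP the same algorithm might read:
       (DEFUN FACT (N) (IF (= N 0) 1 (* N (FACT (- N 1))))) */
    #include <stdio.h>

    long fact(long n)
    {
        if (n == 0)
            return 1;            /* base case: 0! is 1 */
        return n * fact(n - 1);  /* otherwise n! is n times (n-1)! */
    }

    int main(void)
    {
        printf("10! = %ld\n", fact(10));  /* prints 10! = 3628800 */
        return 0;
    }

The C version is wordier than the LISP one, just as the text suggests,
but the algorithm underneath is identical.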
Program Limits
We do not even need the full capabilities of a computer language
to express any algorithm in a program. This idea had its origin in the
Structure Theorem first presented in a classic mathematical paper
by C. Böhm and G. Jacopini with the ominous title "Flow Diagrams,
Turing Machines, and Languages with Only Two Formation Rules."
This paper introduced not a new computer language, but a style of
programming called "structured programming" or, more technically,
"programming by stepwise refinement," that could be used with any
program.
To put it simply, Bohm and Jacopini discovered that all computer
languages, large and small, come equipped with the following basic
features:
1) Sequences of two or more operations (add A to B, then divide
the sum by C, then print the result).
2) Decisions or choices (IF A THEN B ELSE C).
3) Repetitions of an operation until a certain condition is true. One
form is the Do-While loop (keep adding 1 to X While X is less than
10); the other is the Do-Until loop (keep adding 1 to X Until X equals
10).
The Structure Theorem mathematically proves that the
expression of any algorithm in any language (i.e., any possible
program, including one simulating human intelligence) can be
written using only combinations of the three basic programming
rules mentioned above!
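As a rough sketch of what the theorem permits, the little C program
below (a task and names of our own invention) does its work using
nothing but the three rules: statements in sequence, one decision,
and both kinds of repetition.

    /* A sketch using only the three basic rules. Statements run
       in sequence, one decision is made, and both loops appear. */
    #include <stdio.h>

    int main(void)
    {
        int x = 0;

        /* Do-While: keep adding 1 to x While x is less than 10 */
        while (x < 10)
            x = x + 1;

        /* Do-Until: keep adding 1 to x Until x equals 20
           (C spells "until it is true" as "while it is false") */
        do
            x = x + 1;
        while (x != 20);

        /* Decision: IF x equals 20 THEN ... ELSE ... */
        if (x == 20)
            printf("x reached 20\n");
        else
            printf("x went astray\n");

        return 0;
    }

Every line falls under one of the three rules, which, the Structure
Theorem assures us, is always enough.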
At first glance, it looks as if tremendous restrictions are placed on
the programmer, and yet by employing this "structured
programming" method one actually shortens the time it takes to
write and debug a program.
Multi-Mentation
One answer to the problem of complex programming is to build the
computer with more than one processor, then break up the program
into pieces and assign each piece to a separate processor. With
many processors working on a problem simultaneously, or "in
parallel," the program in theory executes much faster.
In fact, fifty research projects in the United States are working on
"parallel processing" or, as it is also called, "distributed array
processing." These include:
Tom McWilliams' 16-processor S-1 computer at the Lawrence
Livermore National Laboratory, running at about two billion
arithmetic operations per second.
The Denelcor Company's mysterious HEP-2 computer, to be ready
in 1986, capable of twelve billion operations per second.
Salvatore Stolfo's 1,023-processor machine at Columbia University.
David Elliot Shaw's computer, also at Columbia, being developed
for the Defense Advanced Research Projects Agency (DARPA) and
projected to have 256,000 processors by 1987, a million by 1990.
Another solution is that it might be possible one day to build a
sixth-generation computer with a processor whose signals travel
faster than the speed of light. This would improve the processing
speed considerably, to say the least! Faster-than-light signals would
seem an impossibility, but there are three ways we might achieve
such speeds.