
Programming Languages:

Past, Present, and Future


Sixteen Prominent Computer Scientists Assess Our Field
by Peter irott

From ENIAC to ENIAC-on-a-chip. From Hollerith cards to massive hard drives. From hand-coded binary to highly optimized compilers. As Dr. William Wulf says in this article, regarding the first 50 years of computer science technology, "It's been one hell of a ride!"

The respondents:

Dr. Al V. Aho - Chairman of the Computer Science Department at Columbia University

Dr. Fran Allen - IBM Fellow, Technical Advisor to the Research VP for Solutions

Dr. Jack Dennis - Professor Emeritus of Computer Science and Engineering, MIT; Chief Scientist, Acorn Networks, Inc.

Dr. Jeanne Ferrante - Professor of Computer Science, Chair of the Computer Science and Engineering Department at the University of California, San Diego

Dr. Adele Goldberg - Director, Neometron, Inc.

Dr. Ken Kennedy - Noah Harding Professor of Computer Science at Rice University; Director of the Center for Research on Parallel Computation, NSF Science and Technology Center

Dr. Barbara Liskov - Ford Professor of Software Science and Engineering, MIT

Dr. David B. MacQueen - Department Head, Software Principles Research at Bell Laboratories, part of Lucent Technologies

Dr. Jean Sammet - Programming Language Consultant, retired from IBM

Dr. Ravi Sethi - Research VP for Computing and Mathematical Sciences, Bell Laboratories, part of Lucent Technologies

Dr. Burton J. Smith - Chairman and Chief Scientist, Tera Computer Company

Dr. Guy Steele, Jr. - Distinguished Engineer at Sun Microsystems Laboratories

Dr. Bjarne Stroustrup - AT&T Labs Research

Dr. Andrew Tanenbaum - Professor of Computer Science, Department of Mathematics and Computer Science, Vrije Universiteit

Dr. Mark Wegman - IBM Watson Research Center Research Staff Member, Manager of Object and Composition Technologies

Dr. William Wulf - AT&T Professor of Engineering at the University of Virginia; President of the National Academy of Engineering

To help celebrate ACM's 50th anniversary, we interviewed sixteen of the leading lights in programming languages, and asked them questions about the past, present, and future of the field. Some of these experts have been involved in the development of the programming languages field from the beginning. They have all left their marks, and all have their personal visions of where we've been and where we're going next.

Regarding the questions we asked, Dr. Al Aho had this to say: "The answers you get to these questions are going to be heavily biased by the people you talk to. When I speak to people in the programming languages area, there is often a sharp dichotomy between the researchers and academics and the people who have to produce working artifacts, who have somewhat different perspectives on programming languages. Have you noticed this in your interviews? If you talk to the folks who have to produce working artifacts, do they have radically different views from the researchers or academics?"

Here is what sixteen computer scientists had to say. Their visions do not always agree, but we will leave it to you to decide how radically they vary. It is our hope that, regardless of your area of interest or expertise, you will find these interviews revealing and thought-provoking. And Happy Anniversary!

ACM SIGPLAN Notices, V. 32(1), January 1997

How did you first become interested in computing as your career choice?

Aho: Probably when I was at graduate school at Princeton; this was in the early Sixties. I was involved in computing as an undergraduate as well, but not really thinking of it as a career; I suppose that didn't happen until around 1970, or so. When I was an undergraduate I really didn't know what I wanted to do with my life, so I adopted a strategy of trying to stay in school as long as possible so I wouldn't have to make a choice. So after graduating from the University of Toronto I went to Princeton, and entered the computer science program, which was in the EE department at the time, and through that I became very interested in computer science. I've been involved in programming languages since 1963.

Allen:

I've been in the field since 1956, which


was prior to the emergence of computing as a science
or as an independent discipline. I became interested
when I took a course at the University of Michigan.
So, I've been in the field 40 years.
Dennis:

I was attracted to the Whirlwind com-

puter at MIT by friends, other undergraduates at MIT.


As a graduate student I became interested in operations research, and implemented an efficient program
for solving the Transportation Problem, a special
form of linear programming, on the Whirlwind computer. I have been involved in computing since 1953.
Ferrante:

I was actually interested in math; I

did an undergraduate degree in math, and then went


to MIT for graduate school in math, and it was
probably in my second year that I began to get interested in computer science. What I wound up doing
was staying in the math program, like a lot of people
who have been in this business for awhile, but I
switched to an advisor in the computer science department. So I wound up doing a theoretical computer
science thesis. I got my Ph.D. in 1974, and taught for
a few years at Tufts, but then decided I really wanted
to move into a more practical area, and got a job at
IBM's T.J. Watson Research in 1978, working in the
compiler area with Fran Allen and the group there.
That's really when I feel I switched into computer
science. Before that I was pretty theoretical and very
close to math still.

Goldberg:

I was a math major in college. I


was going to be a math teacher, but decided not to do
that because I would have to take public speaking
class-and I didn't want to do that. While I was looking around for what I could do-something that would
be interesting-I got a job as a clerk in an IBM office
in Chicago. They had program instruction manuals
for unit record equipment, and I taught myself how to
wire the equipment, and I thought, "Oh, I can do
this." So I decided to go to graduate school, because I
didn't really learn computers as an undergraduate.
That was in 1966, so it's been a long time.
Kennedy:
When I was an undergraduate at
Rice, I went to work programming for Shell Oil Corporation during the summers. At Rice, there was no
computer science program at that time and, in fact, I
really didn't realize it was possible to pursue an academic career in computer science. Instead, I was
studying mathematics and went to graduate school at
New York University to pursue that career. My undergraduate adviser had encouraged me to work with
Jack Schwartz, but when I got to NYU I was informed that he had gone into computer science. When
asked whether I wanted to follow him, I thought,
"Well, I like computers, so maybe I will." That's how
I ended up as one of the first Ph.D.'s in computer
science at NYU. Although my graduate work didn't start until 1967, I began using computers in the mid-Sixties at Rice and first went to work for Shell in
1965. As an undergraduate at Rice, my first exposure
to computers was an antiquated IBM 1620 that was
generally available for student use.

Liskov:

I became interested in computing as the

result of a job I got after I graduated from college in


1961.
MacQueen:

My undergraduate and graduate

work was in mathematics. My Ph.D. thesis was in


recursive function theory, which is a rather abstract
study of computation. After I had finished my Ph.D.,
I had a choice of whether to continue on in mathematics, or to switch over to studying computation in a
more concrete way and I decided I would prefer the
concrete approach to computation. And so I moved
into computer science. At that point I had an opportunity to join the AI department at the University of
Edinburgh, and that was a great opportunity to make


the transition from mathematical logic to theory of


computation in computer science.
I've been involved in computing essentially since
1975, which was when I joined the AI department at
Edinburgh.
Sammet:

I was a math major in college, and

really planned to teach mathematics, and for various


reasons, I never taught full time. And I saw a little bit
of work with punch cards during that time, which was
intriguing, but I never did anything with that. Then,
when I was at Sperry Gyroscope, working on analog
computers and doing mathematics, the Sperry engineers were building a digital computer to get some
experience in what a digital computer was and how to
build one. My immediate boss, who was an engineer,
came over to me one day and said, "How would you
like to be our programmer?" And I said, "What's a
programmer?" And he said, "I don't know, but I
know we need one!" So I looked into it a little bit. I
thought it might be fun. And indeed, it was fun!
I have been in this field since 1955.
Sethi:

As a freshman-a sixteen-year-old in India-I had the opportunity to take an introductory, out-of-hours course on computing, and loved it. The professor ran this course in order to get lab instructors for a course on computers he was teaching. So here we were, freshmen acting as lab assistants for students in their third year. I've been involved since 1963.

Smith:

I have been involved in computing since

1967, when I discovered that I enjoyed the kinds of


mathematics that are associated with computing.
Steele:

I had thought of it as a hobby from the

time I was about ten or 12 years old. I had a science


fair project on computing, and so on. I was fortunate
enough to go to a school, Boston Latin School, that
had an IBM 1130, which was fairly unusual for the
time; this was around 1968. So by the time I entered
college I already had about four years of computing
experience under my belt. I still regarded it as sort of
an interesting hobby to dabble in. I proceeded to major in pure mathematics in college, and when the going got kind of difficult in the pure math, I switched
over to applied math, which at Harvard College included the computer science curriculum. And I suppose it was at about that time that I began to think of
it in terms of a career. The other influence I will
mention is that in 1971-I suppose I would have been
in 11th grade-I had done a science fair project on


computing, and one of the judges at the science fair


was a member of ACM and gave me the necessary
materials to apply to be a student member. And that's
how I first got hooked up with ACM.
I wrote my first FORTRAN program in 1968, so I
suppose it's been 28 years.

Stroustrup:

I honestly don't know. I signed up for mathematics with computer science


when I entered university, without ever having seen a
computer. I think that the combination of science and
practicality that I perceived in computers attracted
me. Art for the sake of art never attracted me, and
neither did science for the sake of science.
Tanenbaum:

When I was a freshman at MIT, I accidentally discovered a PDP-1 in room 26-260. I wandered in and asked if I could use it. The answer was yes. I was given the complete reference manual (about 30 pages). I read it and I was hooked.

Wegman:

The first time I looked at programming I was working for my uncle, and he had a program that someone had written for his company, and he needed to have it flowcharted to figure out what it was doing. But I think I was intellectually interested long before that. I had read books on programming and stuff.
The job with my uncle was when I was in college, and that would have been between 1967 and 1971. When I wrote my first program was probably the summer between my sophomore and junior year. That would have been '69-ish.

Wulf: An absolute fluke! In 1960 I was a physics major at U of Illinois, and the only section of a complex analysis class that I wanted to take was closed. A friend said he was going to take this computer class, and would I like to take it with him; and, with some reluctance, I did, and I fell in love and never turned back. The guy who taught that class was Lloyd Fosdick, who's now at the University of Colorado. It was just one of those transformational moments in a person's life. I've been involved in computing since 1960; let's not do the arithmetic!

What was the first programming language you ever used? On which machine?

Aho: I learned IPL-V and SNOBOL4 as my first programming languages on an IBM 7090.

Allen: SOAP. IBM 650.

Dennis: The assembly language of Whirlwind.

Ferrante: FORTRAN on an IBM 1620, in the summer of 1964.

Goldberg: The very first language I used was MAD, the Michigan Algorithm Decoder, on a coupled 7090-7040, the big mainframe that the University of Michigan had.

Kennedy: I used FORTRAN on the IBM 1620, and it was a real struggle. Compilers were not very reliable, and the machine produced output on punched cards, which you had to print on an interpreting card reader. Using the machine was very awkward, but I remember the experience as one of the most exciting of my life. The first programming language I ever used in a job was FAP, the FORTRAN Assembly Program for the IBM 7094.

Liskov: FORTRAN on the 7090.

MacQueen: That would have been PL/I on an IBM 360.

Stroustrup: ALGOL60 on a Danish GIER computer. After that, I tried out quite a few languages and did serious work in assembler, microcode, ALGOL68, BCPL, and SIMULA67.

Tanenbaum: The PDP-1 macro assembler.

Wegman: I believe it was FORTRAN, for a physics class. Which computer? An ancient one; probably a PDP.

Wulf: First of all, my first machine was Illiac I, which not only had no programming languages, but no assembler; we used what we called "sexadecimal" at the time. The second machine was the IBM 650, and it had an assembler called SOAP. I don't recall the name of the first actual language I used; it had been designed at the University of Wisconsin, and was sort of BASIC-like in structure. My second language was FORTRAN; there was a translator called FOR TRANSIT: IT was the intermediate translator, a sort of pseudo-high-level language, and FOR TRANSIT translated FORTRAN to IT.

What was the first high-level language you ever used? On which machine?

Aho: I would put SNOBOL4 as my first high-level language.

Allen: FORTRAN. IBM 704.

Dennis: It's a hard question; Charles Adams put together several interpreters that ran on Whirlwind which provided libraries of floating-point operations to Whirlwind users. But at the same time you could have access to assembly-level code, so they might not be considered high level. Therefore, I would say FORTRAN, on the IBM 704 at the MIT Computation Center around 1956.

Ferrante: I really kind of consider my real first programming language LISP, and that's probably the MIT influence. I believe that at that point MIT had PDPs also; DEC machines.

Goldberg: That was MAD, the Michigan Algorithm Decoder, also on the IBM 7094.

Kennedy: FORTRAN on the 1620. And then, later, I did more work on a more usable FORTRAN system on the IBM 7094.

Liskov: FORTRAN on the 7090.

MacQueen: FORTRAN on the IBM 1620.

Sammet: FORTRAN; I began teaching it the year it came out. I was really only using it for educational purposes, but I remember very well; it was on the 704.

Sethi: A variant of FORTRAN on an IBM 1620.

Smith: FORTRAN, on an IBM 1130. I didn't exactly pick it up on the street corner, but I did pick it up in the school hallways. A buddy of mine had learned it a few weeks before I did and showed me a ten-line FORTRAN program, and I said, wow, this is cool, how do I find out more! And one of my math teachers loaned me a programmed learning tutorial about FORTRAN, which I absorbed in a weekend, and I was off and running.

Steele: FORTRAN on an IBM 1130.

Stroustrup: ALGOL60 on GIER.

Tanenbaum: I guess it was MAD on the IBM 7094. I took a computer course in my first year, but the guy teaching it was totally incomprehensible (unusual for MIT, where the faculty take teaching extremely seriously). The course was mostly about linear algebra. The last lecture was given by a guest lecturer, some new up-and-coming fellow named John McCarthy, and it was about some strange language called LISP that he had just invented. And it wasn't the slightest bit useful for linear algebra. But it seemed very elegant to me. Unfortunately, there was no programming in it. My next year I took 6.251, in which we got to program in MAD-the Michigan Algorithm Decoder, basically a variant of ALGOL 58. We were allowed 4 runs on the great 7094. It was an exciting time and a very good course. My TA was a young grad student named Jerry Saltzer.

Wegman: PL/I on an IBM 360.

Wulf: FORTRAN. I'm not sure you could call the other language I mentioned "high-level." It was certainly "higher-level"; it had statement labels and GOTO statements, and primitive arithmetic expressions, but it didn't have variables as we know them now.

What is the most enjoyable experience you have ever had on a computer? How did programming languages relate to this experience?

Aho: Probably creating the scripting language which has come to be known as AWK. I worked with Brian Kernighan and Peter Weinberger at Bell Labs putting together a little language that would allow people to do standard data processing tasks, budgeting tasks, and editorial tasks without having to write long programs; instead, they could use very short AWK scripts. I wanted this for my own personal use, and I was delighted that lots of other people found the language useful as well.

Allen: That's a tough one! I guess it was when I worked on two very early supercomputers called Stretch and Harvest. They were built in the mid-Sixties. And that was an extraordinary machine, and an extraordinary piece of work we were doing. And because of the state of the art, we really were able to invent and use many, many things that have become standard now.
Harvest was a one-of-a-kind machine that was being built for the National Security Agency, so it wasn't well-known at the time. Their work was code breaking, so we invented a language-we actually designed a language which was very high level-for describing character string manipulations and analysis. In fact, we invented some notions that really didn't show up until much later, but we didn't write about them because of the nature of the work.

Dennis: I suppose it was when I wrote a simulator for an extension of the TX-0 instruction set. I was responsible for the TX-0 machine from 1959 to 1963 or thereabouts. This was an experimental machine built by the MIT Lincoln Laboratories. The reason I say it was the most enjoyable is that the coding was easy, the results were good, and it was a useful program. The language I used for that was the macro assembler program which I wrote for the TX-0.

Ferrante: I've had so many experiences, sorting them out takes a while. (laughs) I suppose the primary experience is when you run a program and see results coming out the other end. Maybe more for me it's seeing code come out the other end, since I'm in the compiler area. Actually seeing code transformations taking place and generating good code and checking out its performance, that sort of thing, is probably the most enjoyable experience.
Goldberg:
This assumes that I feel that way
about computers. (laughs) There are two things I end
up doing on computers. One of them is developing
rehearsal world environments for learning, and the
other one is basically doing graphic arts. Both of
them I find enjoyable. I don't know if I'd measure
one above the other.
I got involved with the Smalltalk group because my
primary interests are in educational technology, and
I'm a strong advocate of simulation as a way to explore one's theories. And the Smalltalk group was
doing two things I care about: making computational
power affordable and accessible. Part of that accessibility is physical, the other part is virtual in the sense
of the kind of language in which you can express
simulation.
K.erlnedy:
I really enjoy building substantive
systems: large programming systems, large compilers,
things of that sort. At Rice, I have been deeply involved in the implementation of two very large systems--a programming environment for FORTRAN,
and an automatic vectorization and parallelization
system. The autoparallelization system was written in
PL/I, and the environment in C and C++. Building
those systems was time-consuming but very enjoyable. It would not have been possible for us to build
systems like these without using a high level programming language.

Liskov:

I enjoy computing, but I wouldn't say


I've had a most enjoyable experience.
MacQueen:
That's a tough one. I think the
most enjoyable experience is just developing working
software. The design and implementation and debugging of working programs is for me a great entertainment. A kind of game, and I find it very involving,
engrossing. I'm a hacker at heart.
Sammet:
Oh, I guess my most enjoyable experience was when some of my early programs ran.
The first programs weren't written in a programming
language. Programming languages didn't exist; I was
writing in assembler.

Sethi:
That's a tough one. I'm led to various application-level things that I've used, that I've been using personal computers for. One of the things I really enjoyed a lot was using Hypertalk on an Apple Macintosh. The neat thing about that was here was a language; it was very verbose, but it was a language nevertheless, and it allowed me to write all kinds of programs and deal not just with numbers, but with making things happen on the screen as well.
How did programming languages relate to this experience? Hypertalk is a language, and it is an object-oriented language. The main thing was just learning Hypertalk.
Smith:
Actually, my most enjoyable experience
was getting the first program to run on the HEP machine. The program was written in FORTRAN; it was
a six degrees of freedom missile simulation program.
The machine was an experimental machine, and
hadn't had any programs run on it. It was pretty exciting.

Steele: My mind reels.... It's hard to pick any


one that stands out. I'll pick one incident.... This
story points out the importance of computing societies and conferences and such. In 1969 the Spring
Joint Computer Conference was in Boston, and I, still
in the ninth grade, decided to attend, and mostly hung
out in the exhibit areas, and at that time the IBM
booth was showing off their APL language product,
and I just sort of hovered in the background watching
the IBM representatives demonstrate to real attendees, businessmen and so forth, about this newfangled programming language. And I stayed there
until the show closed down, and as they were trying
to shoo everyone else out of the exhibit hall, I walked
up to one of the IBM exhibitors and pointed to the
stack of paper that had accumulated behind the 2741
terminal, and said, "May I have that?" and she said,
"It's yours!" and I walked home with about an inch
and a half of paper under my arm, chock full of APL
programming examples, and spent the next week
studying them, and that's how I learned APL. The
next big step was learning assembly language, and
after that I encountered the LISP language, because I
had started hanging out at MIT. MIT had a very
strong high school studies program where high school
students could come on Saturday mornings. MIT students would teach courses, which was a good deal all
around: MIT students practicing teaching, and high
school students being exposed to advanced topics. I
took a course in group theory and I took several
courses in computing which gave me access to some
of the computers at MIT. Hanging around in that way
I began encountering people from the artificial intelligence lab at MIT, and LISP was very much in the


air. And so I learned LISP and set out to implement a


LISP interpreter of my own for the IBM 1130, which
was my machine of choice, or my machine of availability, anyway, although I still have a sweet spot in
my heart for that machine. And there's nothing for
learning a programming language like trying to do an
implementation of it. And so I got fairly solidly
grounded in LISP, and that experience led to a part
time job at MIT maintaining the MacLISP system,
and that part time job eventually paid my way through
college.

Stroustrup:

That's an odd question. To me


"most enjoyable experiences" almost per definition
have to involve people. However, my first serious
work with SIMULA must qualify. Being able to write
code in a language where the concepts could be expressed directly, rather than just through a bunch of
built-in constructs, was liberating; and so was the
SIMULA type system. The SIMULA type system is
extensible and helpful rather than an obstacle getting
in my way like more primitive and machine-oriented
type systems had done. Finding that the SIMULA
way of doing things, often called object-oriented programming, scaled nicely to larger problems was a
revelation to me.
The flip-side was that the SIMULA implementations
weren't up to what I needed, so I had to re-write the
whole system in BCPL to be able to afford running it
where I wanted to and when I wanted to. This is the
root of the design criteria that led to C++. You can
find the whole story in The Design and Evolution of

C++.

Tanenbaum:
I used to hang around the
PDP-1 all the time as an undergraduate. The PDP-1
had a 512 by 512 display and the ability to plot individual pixels. Before long somebody had written
Spacewar and we used to have tournaments. The
PDP-1 had an interrupt system, then called sequence
break mode, so it could display on the screen and
punch paper tape at the same time. One of the guys
wrote some code to punch out the Spacewar games as
they were played, so the great games could be reviewed later. One day somebody invited Isaac Asimov, a professor at Boston University, to come watch
all this. He was enormously impressed, saying this
was the closest to a real space war he'd ever seen. I
was a big Sci-Fi fan, so meeting Asimov was a great
treat.
Wegman:

Probably getting rid of the last bug.


It's either that or playing "Adventure;" I'm not sure
which. (laughs)


Wulf: I can tell you when I think I reached my


pinnacle of programming talent, which was when I
was programming for the IBM 650 and managed to
meet a challenge to write a drum-clear routine on
only two Hollerith cards--just because it was perversely clever code. The 650's primary memory was
a magnetic drum. And the trick was to clear the entire
drum, including the memory that the program itself
occupied, and do it with-well, you could get five instructions per card-so with ten instructions total;
which had to include reading in the cards themselves,
that sort of thing. As I said, it was a dastardly clever
program.
Most enjoyable; there are so many different dimensions to answering that question. In terms of personal
pride, it had to do with developing an operating system in the Seventies called Hydra, which ran on a 16-processor system that we built at Carnegie-Mellon. I
think that the most revealing moment for me was
watching Doug Engelbart demonstrate the Knowledge
Navigator back in the early Seventies, and really, all
of a sudden, recognizing that machines were not just
for computing numbers. And recently, I guess I've
had a similar sort of transforming experience watching a bunch of humanities scholars discover information technology, and seeing how it could be used in,
for example, history and English. I've had many
pleasurable moments. I don't know that I have a single one that I can name.

Do you have a favorite compiler? Why do you like it? Which compiler has been your least favorite and why?
Aho:
Probably AWK-it's an interpreter-because I
use it every day in running my life. Most of my computing life is controlled by short AWK scripts that
make my interaction with computers both personal
and pleasurable.
I'm not sure I have a least favorite. If I don't like a
compiler, I don't use it, so I haven't had enough time
to form an animosity towards any one particular compiler.
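Aho's fondness for AWK comes down to brevity: a data-processing job that would be a long program elsewhere is a one-line AWK script. As a rough illustration (the function name and sample data below are invented, not taken from the interview), here is the same kind of column-summing task written AWK-style in Python:

```python
# A typical AWK job - "sum column 2 of whitespace-separated records" -
# is a one-liner in AWK:  awk '{ total += $2 } END { print total }'
# The same task as a short Python script, AWK-style:
import io

def sum_column(lines, col=2):
    """Total a 1-indexed whitespace-separated column, AWK-style."""
    total = 0.0
    for line in lines:
        fields = line.split()          # AWK's $1, $2, ... field splitting
        if len(fields) >= col:
            total += float(fields[col - 1])
    return total

# Invented sample data standing in for a small budget file.
data = io.StringIO("rent 900\nfood 250.50\nbooks 42\n")
print(sum_column(data))  # 1192.5
```

The point is not the arithmetic but the economy: the script is the whole program, which is exactly the niche Aho describes.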

Allen:

Well, of course compilers have improved a


lot over the years.... My favorite compiler-and I'm
going to sound like an old fogey if I keep harking
back to those early days, but it was a great time-was


the first FORTRAN compiler. It was one of the great


accomplishments in computing. I've actually given
some talks on that, and a lot of people agree with me.
I wasn't associated with it; it was done by John
Backus. It established the paradigm for compiler construction, and organization that has persisted almost
to this day. The reason I like it is that it invented so
many things. And it set the standard and paradigm for
construction of compilers which persists to today.
It was an extraordinary piece of work; the focus of the
time was on being able to exploit the hardware, and
no one believed that you could write in a higher level
language and get performance. But they did it!
I think I'd rather not say which is my least favorite.
(laughs)
Dennis:
No, I don't have a favorite compiler. I've used a C compiler. I am currently using a VHDL compiler-VHDL is a hardware description language that carries over many characteristics from Ada, including Ada packages, and much of the type system and the syntax is derived from Ada-and I'm enjoying that compiler, but the language stinks. Let me say, however, that the language is very good for the purpose for which it was designed, namely modeling hardware, but I'm attempting to use it as a specification language, for which it's not so good. The other compiler I've used substantially is for the Sisal functional programming language. If I were to pick from among those three the one I like best, it would be the Sisal compiler. I would say the C compiler that I used was the one I like the least, although it was not bad.
Ferrante:
Well, I'd probably have to say, in
terms of a whole compiler, the PL.8 compiler that
was done at IBM-which wasn't done by me or the
group I was in, but was done by the people who did
what became the RS6000. I like it because it produces
really good code; it does a very good job on optimization. It does global optimization over the whole
program and really showed that that was an important
thing to do.
Which is my least favorite is a harder question. I'd
actually rephrase the question somewhat to give you
an answer. I've been very surprised at the gcc compilers-the GNU compilers-because they don't do any
global optimization, yet for many machines they do a
really excellent job. That's been surprising. I
wouldn't say they're my least favorite, but, in terms
of evaluating compilers, maybe they have given me
the most pause for thought. The kind of code generation they do is driven by a parameterized model of
what the machine looks like, but they only do very


local optimizations. And yet they produce really excellent code on many machines. It turns out that the
RS6000 compiler does do a better job on that particular machine, but on many machines the GNU
compiler does a better job, and that's just surprising!

Goldberg: No, I don't have a favorite. I just don't think of them that way. There are people who are really into compiler structure and interested in the various algorithms and optimizers, and I can see those people answering this question, but I can't.
Kennedy: No, I don't have a particular favorite. However, I enjoyed using the FORTRAN compilers on the IBM and Univac systems I used at Shell in the mid to late Sixties.

You might find this an amusing story: during the late Sixties I was working for Shell in the summers and attending graduate school at NYU during the academic year. In my first year of graduate school, I took a course on compilers that featured information about how the Univac compiler performed optimizations. This compiler included one of the first "value-numbering" optimizers, which meant that it maintained a table of available expressions during the optimization process. When I went back to work for Shell, I was asked by a user to look at a program on which the compiler halted with the error message, "Unresolvable ambiguity in source code, phase 4; compilation abandoned." There was no clue as to what line caused the problem or how to fix it, so the poor user was at a loss. I noticed that the program being compiled used a lot of subscript expressions. Based on my inside knowledge of the compiler, I speculated that the available expression table had overflowed. I happened to know these tables were flushed at each subroutine call involving components of these expressions, so I suggested he put in procedure calls to a separately compiled dummy at several places in the program, using some of the subscript indices as parameters. Although he was skeptical, he tried it and was astonished when the error message went away. After that the users at Shell came to think of me as some kind of wizard.

Since compilers are the subject of my work, I don't have any particular one that I really like. Among compiler implementation projects, however, I really admired the original FORTRAN I compiler effort, led by John Backus, which produced one of the great achievements in computer science.

Almost any C++ compiler is pretty bad, because the language is so complicated it's very difficult to get things right. But no, I don't have a least favorite, either.
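The available-expression table Kennedy describes is the core of value numbering, and the behavior he exploited can be shown in a few lines. The sketch below is an illustrative reconstruction (the class and method names are invented, not the Univac compiler's): each distinct (operator, operand, operand) triple gets one value number, repeated expressions are recognized by table lookup, and the table is flushed at calls whose side effects might invalidate it.

```python
# Toy value-numbering table in the spirit of the optimizer Kennedy
# describes (illustrative only, not the actual Univac implementation).
class ValueNumberTable:
    def __init__(self):
        self.table = {}    # (op, vn1, vn2) -> value number
        self.next_vn = 0

    def number(self, op, a=None, b=None):
        """Return the value number for op(a, b), creating one if unseen."""
        key = (op, a, b)
        if key not in self.table:          # expression not yet available
            self.table[key] = self.next_vn
            self.next_vn += 1
        return self.table[key]

    def flush(self):
        """Forget all available expressions, e.g. at a subroutine call
        whose side effects might change their operands."""
        self.table = {}

vn = ValueNumberTable()
i = vn.number("var:i")            # value numbers for the variables
j = vn.number("var:j")
e1 = vn.number("+", i, j)         # first occurrence of i + j
e2 = vn.number("+", i, j)         # recognized: same number, no recompute
assert e1 == e2
vn.flush()                        # what Kennedy forced with dummy calls
e3 = vn.number("+", i, j)
assert e3 != e1                   # after a flush, the expression is redone
```

Kennedy's dummy procedure calls forced exactly this kind of flush, shrinking the table before it could overflow.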

Liskov: I've developed a number of compilers myself. I like languages that have strong type-checking so that the compiler can catch lots of errors. But there's no particular compiler that comes to mind as a favorite.
MacQueen: Yes, I do. It's the Standard ML of New Jersey compiler, since I've spent a good deal of effort over the last few years implementing this compiler in conjunction with colleagues like Andrew Appel at Princeton, and various colleagues here at Bell Labs. This is, for the time being, my favorite compiler. It is a compiler for Standard ML, a modern polymorphically typed functional programming language which came out of research at Edinburgh University. I'm extremely biased, of course, because I had a hand in the language design, and in the design and implementation of the compiler and related systems.

It's hard to separate one's attitude toward compilers from one's attitude toward the programming language. I think it's more the programming language than the compiler that colors my opinion. I'm not keen on low-level languages like C, for instance; I find that C makes programming much more difficult. It provides much less support. So I guess my least favorite compiler would be any C or C++ compiler.
, ~ a ! 1 , 1 m t g t : I don't have a favorite compiler.
Sethi: No. It's more the language, and whether the compiler gets the job done. It's looking past the compiler, to what the experience is. Really, in language implementation there are compilers, and there are also interpreters. Some of the languages I've used, like Smalltalk, ML, LISP, and Scheme, were interpreted languages, with a different interaction than with a compiler. So it's not so much a compiler issue; it's more what the language implementation is, and whether the compiler allows you to make the most use of the language.

Smith: Yes, I do have a favorite; it's the compiler we're developing here for the Tera machine. It doesn't have a name. It's a very advanced compiler for FORTRAN, C, and C++ for the Tera MTA architecture. It automatically parallelizes programs and represents a high degree of sophistication in doing that.

My least favorite? That's a hard one. Oh, I know one I didn't like at all; that was a COBOL compiler for the CDC 6000 series. It just didn't work very well.
Steele: Yes.... Curiously enough, my favorite compiler is one I have never used. It's the Bliss-11 compiler. Bliss was the implementation language developed at Carnegie-Mellon University by Bill Wulf and his students and colleagues, and it was eventually used quite extensively for systems programming within DEC for a while. The reason I like it is because of a book, The Design of an Optimizing Compiler, which Elsevier published, by Wulf, Geschke, Weinstock, and a couple of other authors (Johnsson and Hobbs). This book came out in the mid-Seventies, and it was such a beautifully written book describing the design of this compiler and how the parts interacted. It was the first time that I had seen a compiler so clearly explained that included, for that time, fairly advanced optimization techniques: common subexpression elimination, peephole optimization, all kinds of stuff. And the book was written clearly enough that I felt, having read it once, that I could go away and duplicate the work myself. And in fact I did later use some of the register allocation techniques in a LISP compiler I wrote myself in 1982; I'm pleased to say that, based on my memory of having read that book, I proceeded to code it up, and it just worked, and did a good job. The book had a strong influence on me; it's too bad it's out of print. I consider it a classic.
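The peephole optimization Steele credits the book with explaining so clearly can be sketched briefly: slide a small window over the instruction stream and rewrite wasteful local patterns. This is a generic illustration over a toy instruction list, not Bliss-11's actual rules.

```python
# Minimal peephole optimizer: scan a two-instruction window and
# remove wasteful adjacent patterns. Generic sketch, not Bliss-11.
def peephole(code):
    out = []
    for instr in code:
        if out:
            prev = out[-1]
            # push x; pop x  ->  nothing (the push is immediately undone)
            if prev[0] == "push" and instr == ("pop", prev[1]):
                out.pop()
                continue
            # two loads into the same register: the first result is dead,
            # so keep only the second load
            if prev[0] == "load" and instr[0] == "load" and prev[1] == instr[1]:
                out.pop()
        out.append(instr)
    return out

code = [
    ("load", "r1", "a"),
    ("load", "r1", "b"),   # makes the previous load dead
    ("push", "r2"),
    ("pop", "r2"),         # cancels the push
    ("add", "r1", "r2"),
]
assert peephole(code) == [("load", "r1", "b"), ("add", "r1", "r2")]
```

Real peephole passes carry much larger pattern sets, but the window-and-rewrite structure is the same.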
None of them stand out as being really awful. They're tools. None of them has really upset me. At any stage in my career I've made a point of being familiar with five or six languages, and when I find the going too difficult using one particular language, one particular implementation, there are other choices.

Stroustrup: I guess my own C++ compiler, Cfront, qualifies on both counts. It was a lovely, flexible tool for experimentation; easily portable to a lot of machines; generated code that, for more than a decade, outpaced all competitors; and some parts stood up well to the wear and tear of language evolution and maintenance. Its very portability was a problem, though; it was never central enough to any platform supplier for first-rate tool support to emerge. Fortunately, I'm not a heavy user of debuggers, but in the end the lack of even basic tools became a burden. Also, with the standardization effort changing C++, Cfront's age and lack of a fully-funded support organization became too frustrating. Fortunately, we have better C++ compilers these days, but I'm still regularly amazed about details that still don't seem to match what Cfront did ten years ago.
Tanenbaum: No. I don't have a love relationship with any one compiler. The others would get jealous.
Wegman: No. None of them have everything fixed. There are features I like in some that don't exist in others, and in some sense I want the union of everything.
Wulf: Well, of course, the one that I wrote. I designed a language called Bliss that was originally designed for the PDP-10, but really hit its stride with the PDP-11. It became DEC's internal system implementation language; it played the role that C plays now. And it was intended for system implementation: operating systems, that sort of thing. This was in the era when macho programmers believed that compilers could not possibly produce code as good as they could write in assembly language. So, in fact, the object of the Bliss-11 compiler was to be a very highly optimizing compiler that was competitive with what humans could write. And we pretty much succeeded in that. So that Bliss-11 compiler has got to be my favorite. Modestly, I have to say it's still one of the better optimizing compilers around. We didn't have much theory in those days, so it was done in a rather more ad hoc way. It paid attention to a lot of details that sometimes get passed over.

My least favorite is the C compiler; it's a combination, really, of the compiler and the language, I guess. It's not a very robust or good piece of software, and the language was defined by the compiler instead of anything more careful; and I suppose it's simply annoying that it became so popular despite its drawbacks.

In terms of teaching and learning concepts in programming languages, what language do you think is the best choice and why?

Allen: I'm very enthusiastic about Java, not so much from a language point of view, but from the point of view of the intended use of this language. I think it's the language for our time. I see it as being the language for our networked world, where we are sharing code and collaborating, and still have a need for security. Let me put a little caveat on that: I realize that the security is somewhat questionable, but it definitely has the potential. It's a paradigm shift.

Aho: I'm not sure that the programming language, per se, is essential for teaching and learning concepts. I think every programmer needs to learn ANSI C, and probably C++, because they are the two most popular languages in the world. But I think you can learn programming language concepts and how to write effective programs using almost any language.

Dennis: That's a tough question. At MIT we use Scheme in our undergraduate teaching. I have always been in favor of strongly-typed languages, which Scheme is not; but I understand the value of teaching Scheme because it serves as a vehicle for describing, and introducing students to, a wide variety of programming constructs, including abstract data types. A new language has come on the scene, and one should not underestimate its importance. And that is Java. I would not be surprised to see Java become the language of choice for education, and a lot of other things in computer science.

Ferrante: That's difficult. Actually, we've been wrestling with that. What we're now using is C++, and I believe that's not the right way to do it. What I think is a better way is to start out with some sort of procedural language, maybe C, because that's so widely used, and then maybe go to a more object-oriented approach. And we've actually been thinking of maybe, at that point, going to Java. But I think starting out with an object-oriented language as an initial programming language is a mistake. I think we're doing our students a disservice. It's too much for them to grok all at once. They're being asked to do too much at one time: to both learn how to program and to grok all the concepts that are in programming languages. And I think they don't end up with a good enough understanding of why one uses those kinds of features.

Goldberg: I prefer a very clean language, not a hybrid one. I'm not interested in programming; I'm interested in system building. The languages I'm interested in are the ones in which you can easily declare the structure of the system you are building. As far as I'm concerned right now, any Smalltalk is preferred to what else is available. There are things I can tell you that are wrong with Smalltalk; I understand that. But right now, given what other languages provide, I still prefer it.
Kennedy: It depends on what you want to achieve. At Rice, we teach students introductory computing concepts in Scheme. Then we introduce them to object-oriented programming in two phases: first we teach them Java, which is a safe language and a little easier for them to deal with than C++; and finally we teach them C++.

C++ is a very powerful language, but it is also very difficult for beginning programmers to use well. As a first language it is not suitable for students, although we need to teach it eventually because it is the language that many students will see in practice after they graduate. Java is a better way to teach object-oriented programming, because it is safe and has a garbage collector. In Java, it is not possible to make the incredibly subtle errors that arise from accidental deallocation of storage that is still in use. As I said earlier, I have written several programs exceeding a hundred thousand lines of code, and storage management problems have been the most difficult to locate and eliminate. C++ is full of opportunities to introduce errors of that type.
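The hazard Kennedy describes, deallocating storage that is still in use, can be made concrete with a small simulation. The arena below is a hypothetical construction (written in Python only so the failure can be shown safely rather than as undefined behavior): freeing a slot that another reference still points at lets a later allocation silently overwrite it, which is exactly the class of C++ bug a garbage collector rules out.

```python
# A tiny manual-allocation arena mimicking C++-style storage
# management, to illustrate the "freed but still in use" hazard.
# Hypothetical construction for illustration only.
class Arena:
    def __init__(self, size):
        self.slots = [None] * size
        self.free_list = list(range(size))

    def alloc(self, value):
        slot = self.free_list.pop()      # reuse the most recently freed slot
        self.slots[slot] = value
        return slot                      # the "pointer"

    def free(self, slot):
        self.free_list.append(slot)      # nothing checks for live references!

    def read(self, slot):
        return self.slots[slot]

arena = Arena(4)
p = arena.alloc("order record")
q = p                       # a second "pointer" to the same storage
arena.free(p)               # freed while q still refers to it
r = arena.alloc("invoice")  # recycles the same slot
assert arena.read(q) == "invoice"   # q silently reads the wrong data
```

Under garbage collection, the storage could not be reclaimed while q still referenced it, which is why Kennedy says these errors cannot be made in Java.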
Liskov: Well, the language that I teach is still the language that I developed myself, called CLU, and I teach it in a course on program methodology for developing large programs. I use it in that course because it supports the concepts that I teach in the course in a very direct way.

CLU stands for "cluster," and a cluster is a construct in the language that allows you to define data abstractions. I find that when I teach the course in CLU, the students have an easier time coming to grips with the concepts than they do in some other language, where they often have other types of problems they have to overcome.
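The cluster idea Liskov describes, a named abstraction whose representation is hidden behind its operations, maps onto most modern languages. Here is a rough analogue (not CLU syntax or semantics) of a CLU-style cluster for an integer set:

```python
# Rough analogue of a CLU cluster: the representation (a list) is
# private to the abstraction, and clients use only the named
# operations. Illustrative sketch, not actual CLU.
class IntSet:
    def __init__(self):
        self._rep = []          # the hidden representation

    def insert(self, x):
        if x not in self._rep:
            self._rep.append(x)

    def delete(self, x):
        if x in self._rep:
            self._rep.remove(x)

    def member(self, x):
        return x in self._rep

    def size(self):
        return len(self._rep)

s = IntSet()
s.insert(3)
s.insert(3)                 # duplicates are absorbed by the abstraction
assert s.member(3) and s.size() == 1
s.delete(3)
assert not s.member(3)
```

Because clients never touch the representation, it can later be changed (say, to a sorted array or a hash table) without breaking them; that independence is the property such a course builds on.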
MacQueen: I think a language that allows one to reveal principles in a relatively uncluttered and non-idiosyncratic way is preferable. I think Scheme has been quite successful and is quite attractive as a language for teaching programming. Certain aspects of programming can be illustrated very nicely with Scheme. On the other hand, I think Scheme is rather one-sided, because it doesn't allow one to teach the principles of types and data abstraction very clearly at all. So it is excellent from one side, for control structure and for basic functional program construction, but it falls down on types and interfaces. So a lot of places use ML as a modern high-level language that has an excellent type system and excellent support for data abstraction and so on. Perhaps some combination of Scheme and ML is ideal.


Sammet: At this time, I would say Ada is the best language for teaching, because it has an enormous number of effective features. In order to write Ada programs, you almost have to write reasonably decent code. That is to say, it forces you to do some things that are proper, that other languages don't force you to do, and sometimes don't even allow you to do. Ada is also the best language for software engineering. And software engineering, of course, is the current buzzword for people who want to write programs that are efficient and correct. I'm not saying you can't write bad Ada programs; you can. But there are a lot of facilities in there that allow you to write better applications than you can in many other languages.
Sethi: It depends on what you want to teach. I'll give you a specific example: in the second edition of the programming languages book I have written, the early part of the book uses C++ and Pascal, and the reason is that C++ and Pascal are used so widely that it is good to be able to expand on people's experience. But at the same time, there are also functional languages, there is logic programming, there are a variety of languages, and it depends on what you want to teach. When studying type checking, for instance, functional languages have been very convenient. So really, it depends on what you want to teach and what level of people you are teaching.
Smith: I think I'd choose Scheme at the moment. The reason I'd choose it is that I think it's possible to explicate more concepts in programming languages in Scheme than in any other language, while keeping the language simple. Although the language isn't complex, its flexibility and richness are great.
Steele: Well, let's see.... I've got two different answers to that, and the way you answer depends on the qualifiers in the question. Speaking globally, I think the most important thing about a programming language is that the teacher doing the teaching be genuinely enthusiastic about it and its concepts. And I think that there are lessons to be learned from any language, if they are explained properly; if the lecturer does a good job of it.

If I were teaching myself, my favorite is still the Scheme language, simply because it allows you to explain a very large class of interesting concepts, many of which don't appear in other programming languages, with a minimum of mechanism and formality.

Stroustrup: This is a hard question. Who are we talking about teaching programming? Why? And for what? I don't see why the same language and teaching method should be the best for high school students, people aiming at becoming professional programmers, people wanting to use programming as a tool for scientific computation, etc.

First of all, I think we must keep in mind that the real task is to teach and learn programming, design, and system building techniques in general. A programming language is essential in this context; I do not think that we can learn programming in the abstract. We could just as well try to learn ice skating by a correspondence course. However, the programming language is still secondary to the programming. That, in my opinion, is often forgotten, and people teach programming language features instead of programming.

I guess training aimed at producing professional designers and programmers is the easiest. You can assume they will spend sufficient time and effort for real understanding; at least you ought to be able to. In this case a functional language (say ML or Haskell) plus a dynamically-typed language (say Smalltalk or some LISP) plus C++ would seem right. Leaving out C++ in favor of, say, Ada or Modula 3 would simply ensure that the student picked up a messy mix of C and C++ independently.

For professionals in other fields who want to learn enough to use programming in their main line of work, I suspect the best method is to teach whatever they will use. That would often mean FORTRAN, Visual BASIC, C, or C++. In all cases, I suspect the teaching would be heavily biased towards use of popular tools and libraries.

For most other kinds of students, I doubt that I have sufficient direct experience for my opinion to be of interest.

Tanenbaum: Probably C, but I wish Dennis had gotten the operator priorities right.

Wegman: I don't think I have a good answer. It's been sufficiently long since I've taught that I'm not comfortable answering that. Twenty years ago it would have been Pascal. I don't know what the answer is today.

Wulf: I'm of two minds on that. If everything else were equal, I would use Modula 3. In fact we've, holding our nose, decided to use C++. And the rationale behind that is that, although I believe it is healthy for a student to learn more than one programming language, making the transition early in their academic career is difficult and time-consuming. C++ is a clear commercial winner, so rather than have students learn a cleaner language, like Modula 3, we have bitten the bullet and decided that we would use C++. There's a pragmatic answer to your question, which is not which is the best, but which is the one we have decided to use for a whole variety of reasons, and that is C++. Which is the best? I would probably use Modula 3.

In your experience with industrial applications, which language have you found to be best and why?

Aho: We are now starting to get into issues of software engineering. And the choice of language is not nearly as important as the process used to create software. If you are interested in producing efficient and portable programs, you probably can't beat ANSI C. C++ also seems to be very high on the list in terms of ubiquity and efficiency. For some other kinds of tasks you may want to use more specialized scripting languages, or languages that have application toolkits appropriate for that domain. What the best language is depends a lot on what kind of software you are writing, and for whom you are writing it. In my commercial experience, I think it's ANSI C and C++, hands down.

Allen: I've always been enthusiastic about PL/I. It was a huge language, but it had many capabilities that were exceedingly useful in both a commercial and a scientific context.
Dennis: It's only recently that I've become involved in any industrial use, and the only language I've used has been VHDL. So I just don't have enough experience to give an answer.
Ferrante: Probably FORTRAN and PL/I, in my experience. But I probably have a pretty skewed experience. FORTRAN, because in terms of getting performance for applications you can really do a pretty good job of writing straightforward code, and there are very good optimizing compilers for it. PL/I is one of the languages we used for implementing some of the compilers I worked on at IBM, and it has the feature of having been created by committee. Hence it has every feature you can imagine in it, and so you just sort of carve out your own subset of it and use it the way you want to. It stops people fighting with each other over which language to use.

I say my experience at IBM was skewed because I was not really working on products; I was more working on research prototypes.

Sammet: To be very honest, I haven't done any programming in so long that I don't think I can answer that. A great deal of my activities have been dealing with the history of programming languages and trying to maintain knowledge of lots of programming languages.

Goldberg: Smalltalk, again.
Kennedy: I've worked with industrial-strength applications written in five different languages, and I know of some written in other languages. FORTRAN is used for scientific applications, such as solvers for differential equations, and for that purpose it is quite well suited. For systems programming, C and C++ are most commonly used, and they are very well suited because they give the programmer access to very powerful features for data abstraction and object-oriented programming, while also allowing a lot of low-level control of performance. I have written large systems in PL/I with good results, but it's clear to me now that C++ is more powerful. However, C++ programs have to be more carefully designed. Although the payoff can be great, object-oriented programs take more time to design. If someone wants to be an industrial programmer, he or she should learn, at a minimum, C++.

Liskov: I frankly think the languages they use in industry are pretty terrible. C and C++ in particular. They are widely supported, and they probably will last unless they are supplanted by Java. But that doesn't mean they are good.
MacQueen: I really feel that the key issue in industrial applications is scaling: whether a language supports scaling up to really large programs. And I think that the conventional languages that are most widely used, such as C and C++, do not scale well. They are rather fragile; they have limited support for modularity and for data abstraction; they lead to badly structured programs when you scale up to industrial-sized systems. In the future, I think that more modern languages, such as ML, which has strong support for modularity and abstraction, will be used more and more for industrial applications. But, unfortunately, the common state of the art is rather weak.

There's been an almost twenty-year stagnation in terms of the bedrock programming technology that is used in industrial applications. I think this should change.


Sethi: I work in an industrial research lab, and there are people in my organization who are writing a lot of industrial applications. I think the questions you are asking about which language is best have to be taken in context, because a language implementation, the coding you are doing, does not sit in isolation. Acceptance of a language depends on what other people know and how it fits into the systems that you have. In this organization a lot of C and its derivatives are used. There has been some use of functional languages, ML in particular, but those are the generally accepted general-purpose languages. A key thing to keep in mind is that there are hundreds of specialized languages that get used. For example, in my book all of the diagrams were drawn with a picture-drawing language, and it compiles into something. There are lots of specialized languages that our people have used, and we use the term "little languages" to describe them; they are sometimes called application-oriented languages. They allow you to conveniently and concisely specify tasks to be done, and then they compile into something, be it C or something else. So I think you have to keep in mind that a language doesn't mean just C or Pascal, or LISP, or Smalltalk, or whatever. There are all these specialized languages. I mentioned the text formatting because this was an unusual usage of languages. These applications began appearing around 1975.

Now, of course, with personal computers, the metaphor is changing: it's much more interactive and what-you-see-is-what-you-get, rather than language-based. What I'm thinking about is the fact that there usually is a host language that is used for our applications, and the host language is often C or some variant of it. And on top of it are many languages that might be used: some for specifying database consistency, some for specifying timing constraints or other things. So there are a variety of languages in which the job you want done gets specified, and then these can be compiled or translated into whatever the host language is.
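Sethi's "little languages" pattern, a concise task-specific notation translated into a host language, can be sketched in a few lines. The two-command drawing notation and the target calls below are invented for illustration; the point is only the shape of the translation.

```python
# A toy "little language" in Sethi's sense: each line of the notation
# names a task, and the compiler translates it into host-language
# statements. Notation and target calls are invented for illustration.
def compile_little(source):
    lines = []
    for raw in source.strip().splitlines():
        cmd, *args = raw.split()
        if cmd == "box":                       # box W H -> draw_box(W, H)
            w, h = args
            lines.append(f"draw_box({w}, {h})")
        elif cmd == "arrow":                   # arrow LABEL -> draw_arrow("LABEL")
            lines.append(f'draw_arrow("{args[0]}")')
        else:
            raise ValueError(f"unknown command: {cmd}")
    return "\n".join(lines)

program = """
box 40 20
arrow next
box 40 20
"""
host_code = compile_little(program)
assert host_code == 'draw_box(40, 20)\ndraw_arrow("next")\ndraw_box(40, 20)'
```

The generated host code can then be compiled or interpreted alongside the rest of the application, which is exactly the layering Sethi describes.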
~ l T l i t h : I think it depends a lot on the industrial
application; I don't think there's any one language
that is best, by any means.


Steele: That's too broad a category to narrow down to a particular language. I have been in industrial consulting situations where for some purposes I have recommended using LISP, in some cases I recommended FORTRAN, in some Pascal, and in some cases TECO, which was a programmable text editor of 20 years ago. There was a case where the problem was to convert a PDP-11 assembly language program that had been written for one assembler and needed to be compiled by another assembler whose syntax was different. It was a one-shot conversion job, and maintainability of the conversion routine itself wasn't an issue; it was getting the job done. And for that, writing a pile of macros for a programmable text editor was the way to go, even though as a programming language it was really awful. What is best depends on the industry, it depends on the problem at hand, the longevity of the program, the nature of the task, and all sorts of things.

Stroustrup: C++, followed by C, and COBOL where conditions demand it. C++ and C are best because of their flexibility, availability, efficiency, programming tool support, design support, and educational infrastructure.

There is much, much more that could be done in the education areas, though. Far too many educators are still stuck with a view of C++ as just a variant of C or just a variant of Smalltalk. That confuses programmers and designers and hurts their performance.

Tanenbaum: C.
Wegman: At this point C++ and Smalltalk are the best, because they exist on all platforms.
Wulf: Again, you're going to get a complicated answer to that question. I don't think language is a first-order issue in software quality. It's an issue, but I don't think it's the first-order issue. So again, I think there are pragmatic concerns, like the number of programmers available and the ability to interface with code written elsewhere, that drive me to the de facto standard: C/C++. In an abstract sense, I do not consider them the "best language"; I say what I said kind of holding my nose.

In doing your research, which language is your favorite and why?
Aho: If I am interested in writing efficient, portable programs that can be used throughout the world, ANSI C is my favorite choice. Perhaps even more important is the software development environment. I grew up in the organization that invented UNIX, so I think UNIX and ANSI C are still a pretty potent combination.
Allen: I'm not actively writing programs at this time. But if I were, I would definitely go to Java.
Dennis: The language of my choice is Sisal. I like it because it's type-secure, and because it expresses implicit parallelism. It's a functional programming language, which I'm a proponent of, and the available compilers are pretty good.
Ferrante: Now I'm more geared towards what my students want to use, and that seems to be C and C++. Their favorite is not FORTRAN, for sure! I think they gravitate towards those languages because they grew up with them. I actually do believe that you tend to really like, and think in, the language you first learned to program in. And for me that really means using LISP. LISP or a LISP derivative would be the language that I would choose. Some students would go along with that, but most students haven't had that experience, so they would choose C or C++, simply because that is what they are learning.

Goldberg: Smalltalk, again, of course. As I said, I'm interested in system building, and I like it when you can be very explicit in declaring the structures. Now, Smalltalk suffers from not being able to declare layers of access, in terms of protected vs. public vs. private, and I think that's a failing from an industrial point of view. But from the point of view of feeling that you can understand and maintain what you are doing (and when I talk about building a system, I'm interested in long-term maintenance, not something that's just going to execute and run), I still find a pure object language makes the most sense. Obviously Smalltalk isn't the final answer to that, but it's still the one I feel is best.

Kennedy: We use C++ in our work, and we use it because we want to get a lot of the advantages of abstraction and reusability that it provides, and also because it is an industry standard, and we like to try to build code that can be adapted for industry.
Liskov: We used to do all of our implementations in our own languages, so I've used CLU, and I've actually developed a couple of other languages. We now actually are doing our implementations in C++, and we've done work in C. So part of my reason for saying I don't like them very much is because of some of the problems we've run into: type-checking errors, places where we've made mistakes that would have been caught if we had been using a language with better facilities, but that we had to find out about at runtime.
MacQueen: Well, again, it would be ML, particularly Standard ML, since that's the language I have invested my research time in. Basically, the whole point is to try to make programming easier; I think that's the fundamental drive in designing programming languages. This has to be done within certain constraints. You have to preserve adequate efficiency and control over resources, and so on, to allow one to do practical programming. But again, the issues of expressiveness and scalability are paramount to mastering programming. There's one trend which I don't think is particularly helpful, and that is languages that provide certain kinds of magical mechanisms. Prolog is probably a good example, because of the special built-in search strategy associated with the Prolog model. That's a kind of magic that works very well under certain circumstances for certain problems, but it doesn't scale well, and it's not a very broad-competence language. I'm more interested in languages that are very general purpose, and not necessarily very magical.
Sammet: I haven't done that kind of research. The kind of research I have done is to try to maintain some knowledge, some files, of as many languages as I could. For example, I now have files on well over 1200 high level languages. And FORTRAN is only one of the 1200, including all dialects of FORTRAN. I'm not saying all of them are in use today; of course they're not. A great many of them are completely obsolete, and a number of others have had very small use. Some of them are just research languages that have been used at only one college or university.
Sethi: It depends on the application. I've used ML, I've used Smalltalk. There have been times I've done prototyping in one language and implemented it in another. There are languages I'd like to get to that I haven't used very much. There's a whole new crop of languages like Java. People here have been using TCL/TK for user interfaces, or Visual BASIC. There's a whole crop of languages to explore that I haven't gotten to yet.
Smith: I tend to use a variety of languages. I tend to use LISP (Scheme, really) and C++ in more or less equal amounts. And I enjoy using those languages.
Steele: I use a mixture of stuff. I think you're asking a carpenter which is his favorite tool, and if he could only have one, which would he choose. Right now I keep a LISP at hand, I use Java, I use C; if I had a SNOBOL around, ready and convenient, I would use it. Unfortunately it isn't much available any more. That's good for occasional quick text-processing hacks. I use the EMACS text editor, and occasionally write programs in its LISP-like language. The fact that it's LISP-like is less important than simply that the text editor be programmable.

Stroustrup: C++. It is my main tool and I have spent years getting it to the point where it serves me very well. This reflects my interest in larger systems, in systems that place demands on time and space, in portability, and in systems that cross traditional cultural barriers. An example of this type of system would be a financial system which needs to combine data processing, telecommunications, scientific computation, and graphics.
Tanenbaum: C. It's not perfect, but it is very flexible and efficient.
Wegman: Probably C++ and Smalltalk, and I'd add LISP. The reason is that Smalltalk and C++ exist on all platforms, pretty much, and they're well supported commercially. And the reason for LISP is simply because I know it.
Wulf: I've drifted off language research, per se, and I'm doing computer architecture at the moment. We write in C or C++ simply because of their wide acceptance. Neither is my favorite; they're almost my least favorite from an aesthetic or intellectual point of view. And I don't mind being quoted on that.
Probably the language I've most enjoyed in that regard is Griswold's language, Icon.


Given your experiences in the field, do you have a favorite programming language? Why is it your favorite?
Aho: I think I may have answered this already; since I created AWK and use it every day, it's my favorite.
Allen: No, I don't really think I do. I think that there are languages that have matched the application area and the skills of the developers that are going to use them. I think those are qualities of a language that are very important.
Dennis: Sisal, for the reasons I stated earlier.

Ferrante: I'm intrigued by Java like everybody else. I really haven't done much programming in it, so I can't say it's my favorite. I guess I'd really have to say LISP, or a LISP-like language. I like LISP's simplicity, and its structure. Everything is a function. But I think, really and truthfully, it has to do with the fact that LISP was my first language; I can give you other reasons, but I'm not really sure they are true. I think you learn to think, in some sense, in the first language you learn; so I think it really is an important choice for what language we use to introduce students to programming.

Goldberg: If I didn't say Smalltalk again, I'd sound pretty dumb, huh? That doesn't mean I'd choose Smalltalk for everything I'd do. I mean, there are issues of integration with other things going on where some things are easier than others; there are issues of real-time embeddedness and other things. Languages are a form of expression; it kind of depends on what you are doing. But for what I do, Smalltalk makes the most sense.
Kennedy: As I have already said, it really depends on what I am trying to do. If I were implementing a differential equation solver, I'd probably write it in some variant of FORTRAN, and if I were building a compiler, it would be in C++ because of the industrial factor.
I think Java is a very nice language and it's going to become even more prominent. It's safe, it's pretty simple, it's understandable, although the compilation model proposed by Sun, with an interpretive virtual machine, does not allow the kind of performance many people want. The language itself is quite powerful and could be compiled directly to the target machine to achieve high performance. I like it because it makes it easier to write programs in an object-oriented style that you can debug with reasonable confidence, and you don't have to worry about the errors of accidental deallocation that are common in C++. You can write programs in Java and debug them more quickly. It's missing some of the more powerful features of C++, but the combination of features and simplicity in Java strikes a good balance.

Liskov: I wouldn't say I have a favorite language. Of course I like the languages I've developed myself, but I have some hope for Java. It has a number of unfortunate features, but it's on the right track. And it's the first time I've seen a language, where there's some hope it might be widely used, where they actually did a lot of things right.
MacQueen: At the moment Standard ML is my favorite language, because it is arguably the best attempt so far to combine security and expressiveness, and for many tasks it makes programming far easier than conventional languages. In the future there's plenty of work to be done. One issue that needs to be looked at is: what is the relative role of ideas from functional languages and their associated type systems, and object-oriented languages and their associated type systems? Do these two paradigms integrate well? And what does each have to offer? That's one area of current research that I'm involved in.
Sammet: I have a very soft spot in my heart for COBOL, because I was on the original COBOL committee and I was one of the key people involved in developing that language. It's my favorite because I helped to create it.
Sethi: I don't think I have a favorite programming language. I used to, and then, when I began using more of them and got to really use and understand them, and got to doing my book, I realized that there are a lot of concepts that are in the different languages, and some of them carry over from one to another. There are going to be new languages. The recent use of Java (and locally there's a language called Limbo) are examples. Think of it this way: in the Seventies we spent, as a community, years debating the merits of the GOTO statement and structured programming. There's tremendous experience that we as a community have in the use of imperative languages like C and Pascal and, going beyond that, ALGOL and all; there's been this uninterrupted line of research. We haven't had anywhere near that level of scrutiny, or that level of experimentation, with languages that work across networks, that are concurrent, that allow you to work with multiple processes. So, just looking at how much we have done with one class, and have not explored in another, I think that's enough to conclude that over the next few years we are likely to have more languages explored. Java is an example of something that has had tremendous marketing done about it, so that there are people who know Java even if they haven't used it; but it's an example of using languages for different purposes than what they have been used for before.

Smith: No. I think programming languages need to become better suited to parallel computation, and I'm not happy with any of them in that respect.
Steele: Well, I think again I would probably have to say Scheme, simply because, having become thoroughly grounded in language theory, and in particular in the lambda calculus which underlies it, Scheme comes closest to letting me do everything that the lambda calculus lets me do, and most other programming languages are some kind of compromise with that generality.
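Steele's point can be made concrete: in the pure lambda calculus, even numbers and arithmetic are expressed with nothing but functions. A minimal sketch of Church numerals in Python (standing in for Scheme; the names ZERO, successor, add, and to_int are my own choices for illustration, not anything from the interview):

```python
# Church numerals: a number n is "apply a function f, n times".
ZERO = lambda f: lambda x: x              # apply f zero times

def successor(n):
    # one more application of f than n performs
    return lambda f: lambda x: f(n(f)(x))

def add(m, n):
    # apply f n times, then m more times
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # count the applications by using f = "add one"
    return n(lambda k: k + 1)(0)

THREE = successor(successor(successor(ZERO)))
print(to_int(add(THREE, THREE)))          # 6
```

Most languages compromise on this generality for efficiency's sake, which is exactly the trade-off Steele is describing.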

Stroustrup: That's hardly a fair question to ask me; if I say C++ it sounds like self-advertising or even narrow-mindedness. If I point to another language, that mention will be used as an argument against me, C++, and its use. Anyway, for most of the range of things I have to think about, C++ is the best. The exceptions tend to be either very specialized or be projects where the people and infrastructure are already in place.
I designed C++ because I felt I needed a tool like that, rather than because someone told me to design a programming language. I chose and refined the design criteria, made the design decisions, wrote the first compiler, and wrote the reference manuals. Up to the start of the standard effort, every word in the reference manual was mine (except for those I had inherited from Dennis Ritchie and C, of course). Naturally, I had a lot of help from friends and colleagues. There never was a C++ design project, and during the development of C++ nobody ever reported to me. However, dozens of people made major contributions to C++. You can find their names in "The Evolution of C++," and in the long paper on C++ I wrote for the ACM History of Programming Languages conference, HOPL2.

Later, when the formal standardization started, the circle of people I worked closely with widened, but I still feel that I guided the development of C++ effectively. I was the chair of the ANSI and ISO C++ standard committee's working group for extensions, and in that capacity I led the work on every proposal to extend C++, or to change it in any major way. No major feature I didn't like is in C++, and no major feature I felt necessary is missing. I led the work on extensions and wrote the analyses and final proposals that the committee voted on. Of the major features of C++, only namespaces and run-time type identification arose in the standards committee (the first proposal for RTTI was from Dmitry Lenkov), and even those had their roots in problems I had tried to solve, but couldn't, earlier. The rest of the major C++ features, including templates, exceptions, and multiple inheritance, were in place by the time the technical work of the committees started.
This doesn't mean that the committee didn't have a lot to do. It did, and we did a good job. C++ is now a much better language than when standardization started, and, of course, a better described language. In particular, I failed to produce a standard library for C++, though not for lack of trying, and the committee accepted one. There are library parts from several sources, including some that originated in the early work of my friends and me at AT&T Bell Labs, but the major contribution is the library of containers and algorithms that Alex Stepanov and colleagues designed and implemented. Its acceptance into the standard closed a major hole in C++.

Tanenbaum: C.
Wegman: No, I don't have a favorite. It's less the language than the environment. So, for example, there are things that Smalltalk does better than C++, but the resultant code is likely to be slower in certain applications. You wouldn't do scientific computation in Smalltalk because of the speed. And there are certain things you wouldn't do in C++ because, among other things, it's missing garbage collection and other rudiments of modern programming languages, and the compilation is slow. And so it's more an artifact of the environment (which is influenced by the language), but I believe in almost all cases one can get by with enough environmental support. I tend to use whatever language is most appropriate for the application.
Wulf: That would be Icon.

What do you consider the most significant contribution to programming languages to date?
Aho: If one takes the biggie, the concept of a high level programming language has to be one of the most significant contributions. This work dates back to the late 1950s, with FORTRAN, COBOL, and LISP.

Allen: Well, I do consider FORTRAN I the most significant. Now, I realize that that goes way, way back, but because of my perspective on the industry over the last 40 years I think it certainly is the case that FORTRAN I was the most significant.
Dennis: Well, there are two I'm thinking of. One is the whole set of ideas surrounding structured programming, which have been incorporated into language designs. And the other is the concept of the type system and the development, or evolution, of type-secure programming languages.
Ferrante: Again, I'm probably biased, but I'd have to say it's compilers. I mean, think about some of the first compilers that came out; for instance the first FORTRAN compiler that was put out by John Backus's group at IBM. It was just really amazing! It produced really excellent code. And I think that was just an amazing feat, to be able to do that with one of the very first compilers, where they were basically having to invent everything as they went along, in terms of doing such a translator.

Goldberg: The thing that I find important about programming languages, and especially in education, where I prefer a more declarative language, more of a scripting language, is that people with ideas, who want to explore those ideas, can actually construct that simulated world and then see if they understand what they think they understand. I'm very interested in non-computer professionals and their ability to explore their ideas through simulation. Programming languages have clearly made that possible. It's something you can do on a computer that you actually can't do without it. Too much of what we do on a computer we could do just as well without it. Maybe it's a matter of bookkeeping, maybe it's a matter of quantity of data or speed, but it isn't a special attribute that requires a computer.
To me, one of the significant contributions is the idea that a program is data and a program can be manipulated. Once we got past the hump of thinking there was a difference between data and programs, the whole world of being able to use different forms of expression, graphical and visual forms as well as textual ones, and have it be understood in a declarative language that can then be manipulated, was kind of a big, big change. The fact that programs are data: the LISP contribution that you can write a program that treats a program as data. It kind of comes out of the AI world and automatic programming, but it permits tool-building to the extreme. It permits you to not just do analysis of numeric information, but actually manipulate programs and change them with programs. And that's a whole new world that opened up.
Kennedy: It depends on how broadly you want to interpret contributions to programming languages. I am a compiler writer, so naturally I would consider efficient implementation to be a contribution to programming languages, because it makes it possible to support advanced features. The original FORTRAN I implementation effort really foresaw many of the key problems in the implementation of programming languages, particularly the need for optimization of storage management. They spent a lot of time on the register allocation optimization, which turned out to be critical to making the language machine independent. So, I really do believe that the FORTRAN I effort was a great contribution to programming languages, even though one can argue that FORTRAN, particularly the first FORTRAN incarnation, was not a great programming language. The FORTRAN I compiler effort anticipated most of the key issues in language implementation. In fact, Backus himself said he knew at the time that if the performance of programs written in the language was off by more than a factor of 2 from hand code, FORTRAN would never be widely used. Thus, I would say optimizing compiler technology, as invented by the FORTRAN I group, is the most significant contribution, as it has made high-level languages practical.

Liskov: I think the development of the idea of data abstraction is the most significant thing that has happened, after the initial fact of having a higher level language at all; which was, of course, a major step forward. The next thing was the invention of a new kind of abstraction mechanism as a new way of structuring programs. In a way that's an advance in program methodology, but it also had a major impact on language design. This was developed first in CLU, and at the same time people were working on Smalltalk. Smalltalk is based on an earlier language called SIMULA 67, which had features that could be used for data abstraction, although the ideas weren't clearly worked out at that time.

MacQueen: I think that the most progress has been made in the area of developing sophisticated modern type systems of various sorts; for functional languages, for object-oriented languages, and for more conventional languages such as Modula 3, as an example. Over the last ten to 15 years we've developed a much more sophisticated understanding of the foundations and pragmatics of programming language type systems. And type systems are extremely valuable from a practical point of view because they provide a built-in, useful form of specification that can actually be verified by compilers. And they are a great aid in detecting programming errors through type-checking, and also for describing the structure of programs, by providing a language for describing interfaces of modules, and so on. So they are a design tool as well as an error-detection tool.

Sammet: That's easy: FORTRAN. Because it was the first programming language that became widely used. It wasn't the first high level programming language, believe me; but it was the first one that became widely used. And the original FORTRAN compiler was very efficient, which meant that people were actually able to use it, and it could become reasonably competitive with assembly language. If the first FORTRAN compiler had been inefficient, I don't know what it would have taken to get us out of assembly language and into a high level language.

Sethi: Well, I think the most significant thing, thinking aloud for a moment, is that languages have evolved. It's not that languages began without any history. I think I'm led back to sort of the earliest contribution, where the break was made with machines. Initially with FORTRAN, and then immediately after with LISP. McCarthy's LISP interpreter is a marvel.

Smith: Boy, that's hard. I'd have to say, the invention of the procedure.

Steele: Well, there are three ideas competing in my mind at once, and I'm trying to sift through them. The first answer has to do with contributions to programming language theory, and I point particularly to the work of such people as [Alonzo] Church and [Haskell] Curry and Dana Scott: the whole business of lambda calculus, and related formalisms such as the combinatory calculus, to explain how naming works and how computational mechanisms work.
The second thing that leapt to mind was ALGOL 60. I don't remember who said this, I think it was Dijkstra; the quote was that ALGOL 60 was a tremendous improvement over its successors. And I think there's still a lot of truth to that.
And the third thing that sprang to my mind at the same time was the life of Alan Perlis, because he made many contributions to the understanding of programming languages.
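Liskov's data-abstraction idea is often introduced with CLU's classic integer-set example: clients see only the operations, never the representation. A minimal sketch in Python (the class and method names here are illustrative, not CLU's actual interface):

```python
# Data abstraction in miniature: clients use insert/remove/member/size;
# the list representation is hidden behind the interface, so it could be
# swapped for a hash table without touching any client code.
class IntSet:
    def __init__(self):
        self._elems = []              # hidden representation

    def insert(self, x):
        if x not in self._elems:      # maintain the "no duplicates" invariant
            self._elems.append(x)

    def remove(self, x):
        if x in self._elems:
            self._elems.remove(x)

    def member(self, x):
        return x in self._elems

    def size(self):
        return len(self._elems)

s = IntSet()
s.insert(3); s.insert(3); s.insert(7)
print(s.member(3), s.size())          # True 2
```

The point both Liskov and MacQueen make is that the interface, not the representation, is the unit of specification and checking.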

Stroustrup: I consider the most significant contribution to be the original invention of FORTRAN, closely followed by SIMULA. FORTRAN gave us the first example of being able to write programs in the same terms we use for discussing and describing the problem (for arithmetic only, though). SIMULA gave us the initial tools to allow us to program using arbitrary concepts represented as classes, class hierarchies, and processes in a program. C++ owes a vast, and acknowledged, debt to SIMULA.

Tanenbaum: The invention of C.

Wegman: Why don't I list a few: the original FORTRAN compiler; generically, the work on garbage collection; the work on optimizing compilers; the work, which is largely finished, of building parser generators. I also think the extension to objects, and browsers for objects, is important.
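One of the contributions Wegman lists, garbage collection, can be sketched in its simplest mark-and-sweep form. The heap encoding below is a toy of my own for illustration, not any real collector's layout:

```python
# Mark-and-sweep in miniature: mark everything reachable from the roots,
# then sweep away (drop) every object that was never marked.
def collect(heap, roots):
    # heap: {obj_id: [obj_ids it references]}; roots: iterable of obj_ids
    marked, stack = set(), list(roots)
    while stack:                      # mark phase: plain graph reachability
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap.get(obj, []))
    # sweep phase: keep only the marked objects
    return {o: refs for o, refs in heap.items() if o in marked}

heap = {"a": ["b"], "b": [], "c": ["c"], "d": ["a"]}
print(sorted(collect(heap, roots=["a"])))   # ['a', 'b']
```

Note that "c" survives no reference-count scheme would catch: it points to itself, yet mark-and-sweep correctly reclaims it because nothing reachable from the roots marks it.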

Wulf: That's not an easy question, because there have been so many important contributions. Certainly one is the introduction of what we now think of as the object concept. And I'd have to go back to a concept called actors. That, and the original SIMULA. SIMULA really introduced the object concept. We didn't think of it in that way in those days, but that's what it was.

What do you consider the most significant contribution to compiling?
Aho: I think theory. It's one of the few areas of computer science in which theory has had a profound impact; particularly language theory, algorithms, and logic. I think compilers are a triumph of engineering with a solid scientific foundation underneath them.


Allen: There have been a series of wonderful results. Significant ones have been: parsing, and that's a wonderful story there; register allocation; program optimization; analysis and optimization.... Those are all classically beautiful scientific and pragmatic developments.

Dennis: Well, there are several I might consider. One is the basing of compiler structure on a formal syntax. The introduction of BNF, and its use as a basis for constructing compilers, is very important. There's also the use of the abstract syntax tree as a basis for structuring internal program representations. There's the concept of separate compilation, which was developed in the FORTRAN programming system. More recently, there's such a variety of developments that it would be hard to pick out the most important. But from my earlier answers you would deduce that I regard the introduction of type checking and type inference as important developments. Type inference is the evolution of type checking, in such languages as ML, to the point where the programmer doesn't have to specify the type of everything; the compiler can deduce it.

Ferrante: I'm not sure I can say there's been any one. Certainly the kind of front-end technology we now have is typified by automatic tools to create components of the compiler; Lex and Flex and YACC and Bison, and so forth. That concept, and its fruition into real tools, is an amazing contribution. That's certainly one significant contribution.
In some sense the thorniest problems are the ones yet to be solved, the ones that involve the less well-understood aspects of compilers. There's still lots to be done. For instance, further automating the process of creating compilers. We really haven't gotten much further than the front end. There's been some work on automating back ends, but I think we need to automate the whole thing, and I think one way to do that is to use a parameterized model of the machine to guide the translation process; we're really not there yet. Most compilers are built with all that hand-coded into them. I believe the next step is, in some sense, metacompilers. That's one level up in terms of bootstrapping yourself: making tools to help you create compiler component tools. There's been a lot of work on this, but it hasn't really moved into a practical realm.

Kennedy: Early on we regularized the aspects of compiling that dealt with the front end. This was a significant set of achievements that made it possible to automatically generate front ends, freeing us to concentrate on optimizing the back end of compilers. Thus when I think of compilation, I think exclusively of the back end. Within that scope, the advance that has had the most performance impact, from my perspective, is optimization of memory accesses, particularly register allocation.
Liskov: Well, there you're really out of my area. If you look at the history of compiling in the Sixties and early Seventies, there was a lot of progress made on getting better parsers. But of course the emphasis now has switched over entirely to code generation, and I'm not sure how to say what is most significant. The first work was important, and what is going on now is also important. But I'm not a compiler expert.

MacQueen: There's a lot of work on, shall we say, conventional languages, such as FORTRAN and C, that has allowed modern compilers to take advantage of modern architectures: RISC architectures and various kinds of concurrency, such as the fine-grain concurrency allowed in superscalar and vectorized processors. And this technology has reached a high level of sophistication. In terms of the architectures the compilers have to address, there's still going to be a continuing trend toward concurrency that is going to be challenging. The exploitation of various forms of concurrency is probably the most notable recent accomplishment in the field of compilers.
Sammet: The FORTRAN compiler. Because it was efficient. And it was the first one that became widely used.
Sethi: The concept of the syntax-directed compiler. I believe it was Irons who developed the concept, the syntax-directed compiler for ALGOL; it would have to be the syntax-directed compiler.
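The idea Sethi credits to Irons is that each grammar production carries a semantic action, so the translation is computed as the parse proceeds. A minimal sketch in Python over a toy arithmetic grammar of my own (nothing like Irons' actual ALGOL compiler):

```python
# Syntax-directed translation in miniature, for the grammar
#   expr -> term (('+' | '-') term)*      term -> NUMBER
# Each production's semantic action computes a value during parsing.
import re

def tokenize(src):
    # split into number tokens and +/- operators
    return re.findall(r"\d+|[+\-]", src)

def parse_expr(tokens):
    value = parse_term(tokens)                 # action for the first term
    while tokens and tokens[0] in "+-":
        op = tokens.pop(0)
        rhs = parse_term(tokens)
        value = value + rhs if op == "+" else value - rhs   # action per operator
    return value

def parse_term(tokens):
    return int(tokens.pop(0))                  # action: NUMBER -> its value

print(parse_expr(tokenize("12+30-2")))         # 40
```

Swap the actions (emit instructions instead of computing values) and the same parse structure becomes a one-pass compiler, which is exactly the syntax-directed idea.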

Smith: I'd say dependence analysis.


Steele: It's hard for me to single out any one contribution. I can point out specific important influences on my own life; that has to do with the accident of the order in which I encountered things. The discipline of compiling has been built up over the years with a large number of important contributions: the theory of parsing, the theory of register allocation, the theory of data flow analysis. Much of it is still being worked out, and there's active research going on and being published in such ACM conferences as POPL and PLDI. That work is ongoing and it's exciting, but I'm hard pressed to point out any single contribution as being the outstanding one.
Stroustrup: I have no idea. I do consider the importance of parser theory vastly overrated. The design techniques supported by a language, and the computational model underlying a language, are both far more important than syntax.

Tanenbaum: That has to be the original FORTRAN compiler. It's like a dog that can talk. It isn't that he does it so well, but it is amazing that he can do it at all. At the time FORTRAN was invented, nobody believed it was possible to program in a high level language. FORTRAN changed all that.

Wegman: The original FORTRAN compiler and garbage collection. If the original work had not been done there, it's possible that it would not have been rediscovered. Once the FORTRAN compiler existed, people were bound to look at how to make programs run faster. And while the exact techniques known today might not have existed, people would have done peephole optimization or God knows what; some of it would have been obvious. These are significant in that they changed the course of history.
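The peephole optimization Wegman mentions slides a small window over the instruction stream and rewrites locally wasteful patterns. A minimal sketch in Python, over a three-field instruction format made up for illustration:

```python
# Peephole optimization in miniature: scan a linear instruction list and
# replace two wasteful patterns (a reload right after a store, and adding
# zero). Instructions are tuples in an invented format, e.g. ("ADD", "r1", 0).
def peephole(code):
    out, i = [], 0
    while i < len(code):
        # pattern 1: STORE x immediately followed by LOAD x -> drop the reload
        if (i + 1 < len(code)
                and code[i][0] == "STORE" and code[i + 1][0] == "LOAD"
                and code[i][1] == code[i + 1][1]):
            out.append(code[i])
            i += 2
        # pattern 2: ADD r, 0 is a no-op -> drop it
        elif code[i][0] == "ADD" and code[i][2] == 0:
            i += 1
        else:
            out.append(code[i])
            i += 1
    return out

prog = [("STORE", "x"), ("LOAD", "x"), ("ADD", "r1", 0), ("MUL", "r1", 2)]
print(peephole(prog))   # [('STORE', 'x'), ('MUL', 'r1', 2)]
```

Wegman's point holds even at this scale: once compiled code exists at all, patterns like these are obvious enough that someone was bound to remove them.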

Wulf: I'm going to answer both what and who in one answer, but go around the horn on it. There's a quote from a well-known mathematician, which I will paraphrase: "Great research results not from moments of great insight, but from moments of great courage." When Ken Kennedy and Dave Cook simultaneously decided that they would do their research on optimizing compilers using FORTRAN, when FORTRAN was very much a language that was out of favor with the computer science community, they made one of the most important decisions that was ever made in the compiler field. And that led to a set of optimization strategies that we would never have evolved if we had stuck to our Pascal and ALGOL-derivative languages, because we just wouldn't have seen the important ways that languages are used for scientific computation.


What do you see as the hardest problem(s) in the area of programming languages? Is this the most important problem, or is there one more important?
Aho: Again, I would generalize this to the construction of software. Here we look at the software life cycle: two of the problems in the software life cycle are deciding, right up front, how long it will take and how much it will cost to produce a software system. This may not be a programming language issue, per se, but these are certainly two of the most important questions a customer of a software system will ask.
Writing reliable systems is very hard. I like to put this in the context of a quality plan for the product, on the theory that you can't retrofit quality to a product after the fact. You have to design it in from the start. One very important parameter of the quality plan is reliability, robustness, and correctness.

Allen: I feel very strongly that we have not progressed very far with programming languages as far as making them easy to use. We have some wonderful tools now, and certainly everything online, GUI interfaces, things like that. But in the core of writing programs, designing large systems, I think we have not made the progress we should have.
Developing software applications, particularly large software applications, continues to be a major and very hard problem. And I have always believed that the language in which these are expressed is very key. And I think we in the language community have fallen dismally short in that area.

~enni~:

Two major problems: one is finding an


accommodation between object-oriented languages
and concurrency. Object-oriented languages have
evolved running on serial computers-sequential computers-and the introduction of concurrency messes up
the semantics of object-oriented languages. There is
great tension in the language design field as a result.
This relates pretty closely to the second problem,
which is the accommodation between functional programming and the imperative style of programming.
Because functional programming is clearly superior
in terms of programmer productivity, and has the advantages of implicit parallelism, my belief is that
functional programming has to eventually win out for

Programming Languages, Past Present and Future

"general purpose" computing. These are the most


important problems. Of course there is also the problem of efficiently compiling for parallel processors,
not so much efficiency of operation of the compiler,
but efficiency of the target code it produces.
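Dennis's two points can be illustrated with a short sketch (illustrative only, not from the interview). The first half simulates, deterministically, the lost update that makes concurrency "mess up" a mutable object's semantics; the second shows that a pure computation has no such ordering hazard:

```python
# 1) A mutable object whose invariant breaks under interleaving.
#    Two "threads" interleave a read-modify-write, simulated
#    step by step so the lost update is reproducible.

class Counter:
    def __init__(self):
        self.value = 0

c = Counter()
read_a = c.value          # thread A reads 0
read_b = c.value          # thread B reads 0 before A writes
c.value = read_a + 1      # A writes 1
c.value = read_b + 1      # B writes 1 -- A's update is lost
print(c.value)            # 1, not the intended 2

# 2) A pure computation has no such ordering hazard: the pieces
#    can be evaluated in any order (or in parallel) safely.
def square(x):
    return x * x

total = sum(map(square, range(4)))  # 0 + 1 + 4 + 9
print(total)                        # 14
```

With real threads the first failure is intermittent rather than deterministic, which is exactly why it is so hard to reason about; the pure version yields the same result under any evaluation order.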

Goldberg: One of them is that it's too easy to have a programming language end up being a garbage pail
for being able to do everything. I think a real hard
thing to do is to devise a programming language with
a very focused purpose. One that is extremely expressive and easy to learn for the purpose for which it was
designed. We have a tendency, as computer scientists
and language designers, to want to be recognized for
solving all the world's problems, and to be measured
that way. So what you see happen is that really nice
languages that do their job get garbaged up by having
everything else thrown into them. And that's too bad.
That's one of the important problems. Another problem is how, when you're expressing your software,
you can actually create that software in a
long-term reliable way. What is it about the programming language itself; what are the constructs,
and how do you express the system architecture that
will give you some assurance that you've created the
right system in a reliable way? I suppose, profoundly,
that's the hard problem.

Kennedy: The hard problem from my perspective is providing reliable high performance in an era when
machine architecture is changing rapidly. There's a
whole class of problems that arise because people are
building new machines with architectures that provide
opportunities for very high performance, but also
present real challenges to achieving that. The users
need reliable machine independent performance in
some sort of high-level programming language.
I believe providing a programming language and
compiler system that allows one to achieve reliable
high performance across a broad variety of these new
architecture machines is a very, very hard problem,


perhaps the hardest one in compiling and also probably in programming languages. But the problem is not
only hard, it is also important. If we can achieve this,
we can make it possible for people to write programs
that map to a wide variety of machines. We had that
with FORTRAN a long time ago, and to some extent
with C. But with the advent of parallel machines featuring deep and varied memory hierarchies, people
are writing programs that are quite machine-specific.
We need to get away from that, if we are to see more progress in the development of programming.
Liskov: Well, language design is very difficult. I
think very few people do it well. Another problem is
getting new languages adopted. Goodness and durability don't coincide. People go for the language that
is well supported; where they can feel that it will be
available on all machines. And that's been an important factor in deciding which language to use, and in
which languages have become popular. I think that's
a major impediment to other languages getting
adopted. The other impediment, of course, is just
familiarity. People get set in their ways. They're used
to using particular languages; they may be a little
reluctant to change. Of course they may have a huge
code base to support as well, so they have to worry
about whether the new code will be compatible with
the old code.
MacQueen: Probably the hardest problem, over
all, is how to support very large-scale programs;
scaling up to very large systems. But that's a fairly
vague problem to attack. More precisely, how do we
deal with concurrency; both local concurrency and
distributed concurrency? We don't really have good
language mechanisms that tame concurrency and distribution. So I think this is the hard problem, and a very
relevant problem for programming language research.
Sethi: Well, I believe that usability is the main
thing for a programming language, and what's happening is that we keep applying computers to do new
things, and our user interfaces become more and more
complex. So I think some of the challenges lie in how
languages combine well with not only the computational tasks that we have really come to understand,
but also integrate nicely with the user interface.
I think we keep changing what we compute with. If
we go back into the Fifties, computation was with
numbers. After numeric computing came an emphasis
on symbolic computing. Now we have multiple devices that we use; we use mice and keyboards, and
present things graphically and visually, so how we
interact with computers keeps changing, and as we

S i x t e e n P r o m i n e n t C o m p u t e r S c i e n t i s t s A s s e s s Our Field

change how we interact, languages need to keep


changing too. So the challenges have to do with
making it easy to accomplish the kinds of things that
we are doing. Some of this emphasis, of course, has
been on programs that are on a scale that individuals
can manage. When you start to deal with programs
that are very, very large, so that hundreds of individuals are involved, then there is a whole different class
of problems that comes in. So maybe what I might
abstract from this thinking aloud is that one of the
hardest things-one of the things that we have not
been able to do with software-is to get the same kind
of productivity increases that people have been able
to get with hardware. In hardware there's Moore's
Law, where every 18 months or so the speed of computing, or how fast the processor is, doubles. With
software, software is still a cottage industry, in effect,
and we have not been able to get that sort of productivity increase, and I think programming languages
have a role to play there.
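A quick back-of-the-envelope makes Sethi's comparison concrete (the 18-month doubling figure is his; the arithmetic below is ours):

```python
# Doubling every 18 months compounds dramatically over a decade.
months = 10 * 12
doublings = months / 18      # about 6.67 doublings in ten years
factor = 2 ** doublings
print(round(factor))         # 102
```

Software productivity has seen nothing remotely like a hundredfold gain per decade, which is the gap Sethi is pointing at.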

Tanenbaum: Getting everybody to agree on one language. We still haven't solved that.
Wegman: I think that there are two or three things that are going to be very, very hard. One is making programs be more flexible so that they can be used in
a wider and wider variety of contexts. You see, for
example, work on design patterns and object-oriented
programming pushing in that direction. The second
thing is that there's a lot of work to be done on languages for special domains, builders for special domains. So, for example, a GUI builder raises the level
of language within the context of a particular kind of
programming task. We're looking at what one does to
make writing enterprise programs easier. And third, I
think that combining information from multiple
sources that have an impact on the program, sort of an
amalgam of the first two ideas, is important also. I
think that the last idea is a little bit hard to follow,
but consider, for example, the kinds of things that expert systems are attempting to do, where you've got information from a variety of sources about how the system ought to behave. Dealing with the inconsistencies and overriding of rules, in the same sense that
we currently do overriding of methods, is a hard
problem. And there's also a somewhat simpler problem but still an important one, and that is to get good
performance out of the solutions to the above problems. I think that all of the above are important. And
they're hard.

Smith: The hardest problem is defining a language for general purpose, machine independent, parallel processing. This is the most important problem.

Steele: Getting people to believe that it's a bad idea to think that one programming language is going to solve everybody's problem. I think that's the hardest and most important problem. Once you've
stopped trying to design a language that solves everyone's problem, you're free to design something that is
good for a restricted range of stuff. I think the biggest stumbling block that I've seen for most programming language designers is the pressure to be all things to all people.

Wulf: I don't have an answer for that question.

Stroustrup: There are two problems that I think are the hardest. The first is details: I think we can handle any one problem, but in the complex systems we must deal with today there are more details that must be gotten right than any one individual can handle; and apparently more than we know how to deal with in a rational and economical way. We can produce very high reliability-the phone system is an example-but in most areas people seem unwilling to pay the price for quality. Instead they want things cheap, glossy, with a million useless features, and now. They end up paying for the repairs, support, and replacements. The second problem is commercialism; a few million dollars' worth of marketing can sweep away thousands of man-years of technical work.

What is your vision of the programming languages field, ten years from now?

Aho: Software engineering right now is a very immature discipline. I think software engineering will start to mature. I think programming languages will be thought of in the context of the software engineering life cycle.

Allen: I don't know what it will be. But what I hope it will be is that the components of the applications we develop will be inter-operable and composable. So I think that the object-oriented goals are wonderful. The ability to do that is going to be key to our ability to share and reuse code, which I think is very important. I'm not sure that we have gotten our language systems quite strong enough yet to fully be able to realize these goals: I know we haven't! And I'm hoping these goals will be realized in a very natural and easy way. What we see today is that programmers, highly skilled programmers, are exceedingly valuable. I would hope that we will move to a point when highly skilled application solvers, people who can solve the applications, will be able to work closer to satisfying needs, rather than spending so much time actually on the details of development.

Kennedy: We've always had the notion that some day we would get away from dealing with the detailed problems on the back end, and be able to concentrate on providing programming languages of much higher level. Now that goal seems within reach. It is the case that we're still dealing with very difficult problems having to do with changing machine architectures, like parallel machines and distributed computing systems, but I think we've made enough progress so that we could begin trying to build programming languages that are much more user oriented. And I think that a lot of programming is going to be done in scripting and graphical languages in which people write programs that involve not just elementary operations, but the invocation of black box programs with legacy code. People will write programs with scripts-that's happening today on personal computers. I think we'll see a lot more of it. And again, a key issue will be the performance of the resulting code. We've got a lot of languages at the current level that are very good, but we need to provide people with the ability to bring together existing programs to do computations with high performance. So scripting and graphical interfaces to programming languages will be very important, and those interfaces will allow you to invoke programs written in many different languages, without sacrificing high efficiency.

Dennis: One way to consider that question is to ask, "What was it like ten years ago? What has happened?" The most significant developments in the last ten years include the rising importance of object-oriented ideas and type systems, and the interest in parallel computing. First of all, there is the resistance of application and commercial programming to change. A high importance is attached to keeping old programs working, and being able to port old programs to new platforms, so it's hard to see how the practical side of the field will move in a direction that will resolve the fundamental issues I have mentioned. I would expect to see object-oriented programming continue in importance; it's got a lot of momentum. It will be interesting to see how the Java language progresses. I think there's a good chance that Java, or an evolution of the Java language, could become the major language within ten years. Given the resistance of the practical field to change, I would not expect the problems that I have mentioned will be resolved in ten years. But I would expect that the power of functional programming for a large variety of applications on parallel computers and multiprocessor computers will have been demonstrated in research institutions and universities in ten years' time.

Ferrante: I think there will be much more emphasis on tools as components; components that can work together and be integrated together. So for instance, there won't necessarily be the notion of a compiler, but various functions that the compiler used to perform will be available themselves as tools, and will be able to be integrated with other tools. And there will be a blurring of the compile-time and run-time distinctions; there will be more of an interaction between compile-time and run-time; that's one thing I see in programming language technology in my crystal ball.

Goldberg: I'm really interested in people being able to boss the computer around; I've always been interested in that. And the whole notion that the way you describe what you want to happen has to be a multimedia event. If you stop thinking about a programming language as text, as tokens, but instead as multimedia, ten years is a reasonable time to expect some results there. What the hell is the purpose of a programming language if not a means to boss the computer around? It's certainly not so that the computer can boss us around; which, by the way, is what some people think it is....

Liskov: I don't think I have a vision of what it's


going to be like, but I guess I kind of expect more of
the same. Maybe Java will in fact succeed. And in
that case, maybe what you will see is that the industry
will have switched from mostly writing code in C and
C++ to writing code in Java. I think that would be a
major step forward. The real thing people want are
advances in programming methodology. A programming language is just a tool, and a programming
methodology controls how you use the tool. Software
is still much too expensive to develop. It's too bad
that people use inadequate tools, and Java would be a
better tool; but even more important would be new
ways to think about how to do software development.


MacQueen: I think the magical aura of object-oriented programming will have faded by then, just as
the sort of magical aura of structured programming
faded by the 1980's. Object-oriented programming
will have been boiled down to a certain number of
ideas and techniques that can be mixed with other
ideas from functional programming, and various advances in types and program structure, and so on. So I
think the design repertoire will be more eclectic, and
there will be less focus on magical panaceas. There
will probably be more progress in concurrency and
parallelism, and we will regard this as more routine in
day-to-day applications programming. But I don't
think it will ever become extremely simple. I think
concurrency, logically speaking, is an ingredient that
adds difficulty, rather than subtracts difficulty, from
the problem of developing software.

Sammet: I think that the emphasis in the computer field has shifted from programming languages. I
don't think they are anywhere near as important as
they were even ten years ago, let alone 20 or 30 years
ago. The reason is that so much work is done on PCs
with canned applications, and while there certainly
continues to be research done on programming languages, I just think they are no longer the central
pivot in the computing field that they were for many
years. You no longer have to be a programmer to sit
down and use a computer.

Sethi: You know, we have been so wrong in the past. In the Eighties, for instance, we kept asking,
"What will the programming languages of the Nineties be?" And programming languages in the Nineties
are relatively similar to the programming languages
of the Eighties; it's been fairly evolutionary. I think
part of that is because of how hard it is for people to
learn and adjust to new languages. My guess is that in
ten years we may be using some of the same concepts
we have now, even if the languages are being used for
tasks that are not recognizable. I think the applications are likely to change much, much faster than the
languages themselves.

Smith: I think we'll see more programming languages than we have now; the number of programming languages in existence will grow. I think we will
probably see more type-safe, conservative languages
being used; e.g., Java vs. C++. That's the kind of
language I'm talking about.
Steele: I think there will be three or four programming languages that are in widespread use, and people will continue to argue over which is the best, to


no avail. I still think that's like arguing over whether a saw is better than a hammer.

Stroustrup: I'm not sure I have one. What will be in use then must exist in experimental form in
some lab today and will probably be incorporated
into new variants of today's popular languages.
I hope for languages that are easier to use, have more
powerful, more flexible, and safer type systems, rely
more on generators for "boiler-plate" code, and are at
least as efficient in run-time and space as today's languages. I suspect that what many will see will not be
those languages themselves, but applications relying
on them. More than today, there will be a distinction
between professional programmers and people who
assemble applications from parts. My hope is that the
professional programmers will be more professional
than is the case today, and acknowledged as such.
The way forward is through better languages, tools,
and techniques, rather than through more mediocre
and supposedly cheaper interchangeable coders, slogging on with outdated tools and techniques.

Tanenbaum: C++++.
Wegman: I don't know the answer to that. I think
there's been a little bit of maturing, or alternatively
you could call it hardening of the arteries in the programming community. And I haven't seen as many
calls for radical new direction, although I'd like to see
them. So I'm not sure that I want to say that my vision is that we will stop making progress, although I'm a little afraid of that. But I think if you look at
what I said were the hard problems-I think progress
in that direction would be extremely valuable.
Wulf: You're not going to like this answer; I hope it
disappears. The number of new ideas being introduced per unit time seems to me to have dropped
dramatically. And the relative impact of those ideas
has dropped dramatically. One of the worst things
that you can do in any discipline is to keep working
the problem past the time when it is solved, or solved
well enough. I could be completely wrong; someone
could come along tomorrow and demonstrate some
new idea that makes this sound stupid. But having a
large number of people making small incremental
advances that, by and large, don't get adopted, is not
a very wise use of the community's time.


What is the next programming languages paradigm to come?

Ferrante: Well, I guess I can give you the answer,
"I don't know what it is but it will be called
'Something FORTRAN.'"

Aho: Hopefully good software engineering; that which can be supported by effective programming
languages. But in the software engineering life cycle,
programming language issues aren't necessarily the
ones which dominate. There are all too many examples of major systems that have failed to work, or
don't work as intended. I know of many hundred-million-dollar systems that were created and then had
to be written off because the developer was unable to
get the system to work. This is a tragic waste of
money. There are also hundreds of examples in which
software behaved very badly, inconveniencing people, and in some extreme examples, killing people.
There are a number of examples; the air traffic control system has had a number of large failures from
the point of view of systems being commissioned and
not being delivered; there are examples in the insurance industry; we've seen some very visible examples
from the telecommunications industry in the early
Nineties. Fortunately we haven't seen these kinds of
stories recently in that arena. There's the story of the
Denver airport not being able to open up on time because they couldn't get the baggage handling system
working properly. These stories are in the newspapers
with alarming frequency. So these are very significant
problems in the development of software.

Allen: It is certainly going to be a network-enabling language. I don't quite know what exactly that will be.
Dennis: Who knows! I think at present we have
enough conceptual material in hand, and understand it
well enough, to do an extremely good job of programming language design. The problem in the field
is getting the act together and making the right combination of concepts available and in use in practice.
But you should never predict the absence of breakthroughs. I don't have one in mind, but let me give an
example: There's a lot of hype nowadays about the
increasing power of speech recognition capabilities,
so that one might expect that in the future programmers would program by talking to a machine. I would
not expect that to be imminently feasible. However, I
would not be surprised at machines being driven by
high level commands through voice driven applications.


Goldberg: Obviously, the very next thing is going to be intelligent agents. That's where everybody's pushing. It's the idea that a programming language is
going to be a composite of intelligent parts; intelligent in the sense that it knows how to do things, and
you just need to give it advice. If you think about a
system as a set of interacting parts-each of the parts
made up of behaviors that can be fulfilled, in more of
the classic sense that we understand it today-if you
can think of each part then as having a life: if your
agent has learning capabilities, I can then compose
my system out of those kind of agents in the same
way that I would put together a company out of people who are agents for the various roles and responsibilities that I need in the company.

Kennedy: I do not consider myself an expert on programming languages. However, I am an expert in compiling languages, so I try to restrict myself to that
subject. I think compilation is going to be hard, no
matter what kind of programming language you're
dealing with, because we will continue to have architectures that are very complicated. The compiler is
there to implement programming languages that can
provide the user with a level of abstraction that's appropriate, and do so with acceptable efficiency. So
compilers are supposed to make programming languages possible, and make it possible for them to add
and support, with efficiency, reasonably advanced
features.
With that in mind, I speculate that in the near future it
will be possible to efficiently implement programming languages that are much closer to the domain of
discourse of the user. An example of such a language
is MATLAB, which supports matrix calculation. Another example is a language that supports the solution
of differential equations of different kinds, using
various methods such as finite difference, finite element, or multigrid. I expect to see more programming
languages that are domain specific and can use domain knowledge to help the program run efficiently.
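As a hedged sketch of what "closer to the domain of discourse" might look like (finite differences are Kennedy's example; this particular code is ours, not his), here is one explicit update step for the 1D heat equation u_t = u_xx, written at the level the scientist thinks at:

```python
def heat_step(u, alpha=0.25):
    """One explicit finite-difference update of u_t = u_xx.

    Each interior point moves toward the average of its neighbors;
    with alpha = 0.25 the update is a convex combination, so values
    stay bounded. Endpoints are held fixed (boundary conditions).
    """
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
    return new

u = [0.0, 0.0, 1.0, 0.0, 0.0]   # a spike of heat in the middle
for _ in range(50):
    u = heat_step(u)
print(round(u[2], 3))            # the spike has diffused and decayed
```

Because every new value depends only on the old array, a domain-aware compiler is free to run the interior loop in parallel; that is precisely the kind of domain knowledge Kennedy describes exploiting.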
Liskov: If we could see it, we'd know what it is.... I'm not sure that this is a language question, but a
question about what the applications of the future will
be like. And I would hope that they are built on better
platforms than we have today; more robust.


MacQueen: I don't really believe in major paradigm shifts, because they tend to overemphasize some
idea to the point of diminishing returns. I think the
object-oriented field is an example of overemphasizing a set of ideas uncritically. So I think we
will probably see continued evolution and refinement
of the various ideas we have now, and, as I said, more
of a mixing of concurrency and distribution into our
programming. And, at the extreme, there are things
like mobile computing and programming the World
Wide Web that will cause a lot of excitement and
pose a lot of new challenges.
Sethi: I think how we deal with concurrent and
distributed computing is where there are possible
paradigm shifts.
Smith: It's hard to say. I can imagine three or four, but it's hard to say which ones will catch on and which won't. I'll just say it will be parallel languages
for parallel computers; we don't have any of those
yet.
Steele: I'm going to turn that around slightly, and
suggest that perhaps an important programming area
in the future is going to be the programming of biological mechanisms. For example, I envision being
able to buy a fruit designer that runs on my Macintosh, and I sit there with this little CAD program and
I design something that's yellow with pink polka-dots
that tastes half way between a strawberry and a pineapple, and then it does some rules checking, consistency checking to make sure I haven't designed
something that's poisonous, and I ship it out, and six
to eight weeks later, a packet of seeds arrives in the
mail. So that is a kind of programming. That's sort of
a concept for a very high level design package. Now
that will be supported underneath by language and
programming mechanisms for dealing with the details
of how you get something to grow. I suggest that's an
area where the techniques and ideas of programming
languages might be applicable. How's that for a crazy
idea?
We are doing similar things now with the equivalent
of assembly language or octal. I think we will develop
high level abstractions for designing biological
mechanisms. And that the means of expressing those
abstractions will look much like a programming language.
Stroustrup: What paradigms do we have? Procedural, modular, abstract data types, object-oriented
programming, constraints-based programming, rule-based programming, functional programming, generic programming. There are probably a few more. I don't
think it is new paradigms we need. What would be
much more useful would be proper understanding of
the paradigms we already have, the techniques for
using them effectively, better tools for supporting
them, better understanding of where the different
paradigms are best applied, and how and when to mix
them. People always seem to hurry along looking for
novelty and revolutions. My reading is that there are
essentially no successful revolutions and that real
progress comes from steady-even sometimes fastevolution. We are drowning in hype about "new
paradigms," to the point where it is harming the programmers who actually writing and testing the programs we rely on for our very lives.
Tanenbaum: Well, we haven't gotten to natural
language programming yet. That is a paradigm whose
time is yet to come. COBOL doesn't count.
Wegman: I think it's builders for specialized languages for special applications, if there's to be a new programming languages paradigm. Some of my recent work, although not published, says that rule engines for specialized domains will become important.
Wulf: I have been wrong in the past, but if there is
an important one, my guess is that it is related to the
work that Gregor Kiczales is doing at Xerox PARC.
I'm not sure what he's calling it; he started out calling
it "meta-object protocol." The issue is permitting
certain aspects of the implementation of an object to
be made visible to its user. The traditional object-oriented approach is that you define an interface, and
how the implementation is done should be irrelevant.
But it may be the case that you can change certain
aspects of the implementation, preserving the interface, and thereby tailor the performance of the object
for its intended use. It may be a performance issue, it
might be a security issue, it might be a roll-back and
recovery issue; but the point is that opening up that
bit of the implementation probably increases the degree of reusability of the object, and that's a very
positive thing.
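Wulf's description can be sketched in a few lines (the class and names here are invented for illustration; this is not Kiczales's actual meta-object protocol): the client-facing interface stays fixed, while one aspect of the implementation, the storage strategy, is deliberately opened up so a user can tailor performance without touching calling code:

```python
class Table:
    """Maps keys to values; the put/get interface never changes."""

    def __init__(self, strategy="hash"):
        # The "opened" implementation aspect: choice of storage.
        if strategy == "hash":                 # general-purpose dict
            self._data = {}
            self._get = self._data.get
            self._put = self._data.__setitem__
        elif strategy == "assoc":              # tiny tables: list of pairs
            self._pairs = []
            self._get = self._assoc_get
            self._put = self._assoc_put
        else:
            raise ValueError(strategy)

    def _assoc_get(self, key, default=None):
        for k, v in self._pairs:
            if k == key:
                return v
        return default

    def _assoc_put(self, key, value):
        for i, (k, _) in enumerate(self._pairs):
            if k == key:
                self._pairs[i] = (key, value)  # replace in place
                return
        self._pairs.append((key, value))

    # The stable interface clients program against:
    def put(self, key, value):
        self._put(key, value)

    def get(self, key):
        return self._get(key)

# Both variants behave identically through the interface:
for s in ("hash", "assoc"):
    t = Table(strategy=s)
    t.put("x", 1)
    t.put("x", 2)
    print(s, t.get("x"))    # each strategy reports 2
```

Calling code never changes; only the strategy choice does, which is the increased reusability Wulf points to.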


What will the top 3 programming languages be ten years from now? (Are these already existing programming languages or, if not, give their characteristics?)
Aho: I presume by top, you mean most widely used. An interesting question I often ask people is: take a
look at the amount of software required to support the
world's telecommunications infrastructure today. You
can make a telephone call from New York City to San
Francisco, or to London, England, or Sydney, Australia; there are many companies and organizations involved in making that telephone call go through. Take
all of the systems that they use in running their businesses, don't count duplicate systems more than once,
and add up all of the lines of code in those systems.
How many lines of code do you think have been
written to support the world's telecommunications
industry today? I did a back of the envelope on this,
and estimated that there are on the order of ten billion
lines of code involved. What most people don't realize is the staggering amount of software that is used
in society to run the world today.
The last studies that I saw of the top programming
languages showed that C and C++ had, for the first
time, displaced COBOL as the top languages in the
development of new code. COBOL still dominated in
maintenance code, and at that time FORTRAN was a
distant third. I haven't seen the figures more recently.
Maybe Visual BASIC has displaced FORTRAN as
the third most popular language. These systems which
have already been built will continue to run for a long
period of time. I have a feeling that it's going to be a
long time before C, C++, COBOL and Visual BASIC
are going to be displaced from their positions.
Now, Java represents an interesting new approach to
software development. In the old-style arena, you
tended to buy a software system. It may have more
functionality than you would ever use, but you'd buy
the system. With the Internet and the Web, you might
be able to rent only the functionality that you need,
and if this catches on with consumers, the world of
Java might be very promising indeed. But I think it's
much too early to say yet whether Java will become a
major language or not. It's certainly caught people's
attention, and there's an enormous amount of interest
in Java at the moment. I'd certainly watch Java, but at
this point it is very difficult to predict how significant


Java will be in the world of software at large. You know, we're talking about many tens of billions of
lines of code, and the code that's written in Java is
only a minuscule fraction of the world's software at
the moment.
Whenever I have talked to an Englishman or a
Frenchman and asked them, "What is your favorite language?" their answers are very much biased by the
culture they grew up in. In the same way, if you go
out and talk to C programmers vs. Ada programmers
vs. the scientific programming community, you get
different perspectives on software production. Then if
you talk with people who are concerned with software
reliability or the research issues in programming languages, they are somewhat ahead of what is being
done in practice, but it's very difficult to change the
language of the dominant culture. English is far from
the perfect natural language, but I don't think that in
our lifetimes, or in the foreseeable future, the US will
use a different natural language for its dominant language. Once a language gets entrenched, it is very
difficult to change it. Some of my colleagues have
become very enthusiastic about Java because of the
capabilities that it provides. But when you look at the
broad sweep of software, Java plays a rather small niche role in software production as a whole.
I was speaking to John Gage at Sun about Java, and I
asked him this question: "Are you going to get Sun to
implement its payroll system and its accounts receivable system in Java?" And he looked at me and said,
"Those people are in a different world!" Unfortunately, it's the real world. And there you have all
these legacy systems and legacy software problems.
And although people are talking about using Java to
create wrappers and CORBA-like interfaces into
these legacy systems, it's going to be a long time before those systems get replaced. I think we're going to
have legacy software and legacy languages as long as
we live. And then you also have to worry about the
economics of software systems and the development
of software. If you want to buy a system from me, you
are certainly going to want to know how much are
you going to have to pay me, and how long will it
take before you get your product. And of course, you
want it to work. These sorts of issues transcend any
single programming language. This is why I think it is
important to look at programming languages in the
context of the programming life cycle. Does it really make systems cheaper, or reduce the time to market, to implement them in Java or any other language, given that they have to interface with existing legacy systems?

Allen: I'll be very bold and say that one of them will not be C++. COBOL will definitely be a player. It's not a language that the computer science community in this area has focused on, but the fact of the matter is, or was a few years ago, that there are more COBOL programmers than in any other language. I don't think that's been surpassed by C++ yet. There are certainly more lines of code [in COBOL].

Dennis: If by "top" you mean the most important, government use of Ada is very important. It may not be one of the languages having the most users, but it's backed by the government and used for important applications. So, OK, here's my bid: Ada, FORTRAN, and Java. And that's not lines of code, mind you, that's relative importance. If it's lines of code, you'll see C being a very important language. You can't get rid of all those lines of code. I gather there exists a lot of code written in COBOL, but I think COBOL will have diminishing importance.

Ferrante: I have a feeling that Java will be around. And I think there will be much more emphasis on information processing and information retrieval systems, which will change our notion of languages in terms of the everyday person being able to use computers. I think there will be a notion of language that is much more like natural language that people will use when they interact with computers. There will still be, in some sense, underlying languages as we know them, but I think we're going to build better user interfaces that will require languages that are more like the way people want to use computers; and as that becomes more and more common, languages will change so that maybe they'll be more visual, and maybe they'll have other aspects to them that we don't use right now.

Goldberg: They haven't been invented. They will have the characteristics I described in my last answer. It's a multimedia experience in terms of how you express. And what you express is going to be a composition of the intelligent agents. But such a language doesn't exist now. Ten years isn't that far off. But it's far enough: we've basically been rotating new concepts every twenty years or so, and it's true that some of this had better be in research right now, or it won't happen in ten years. But I think some of this is in research right now.

Kennedy: It depends on what you mean by "existing." I think the top three languages ten years from now will all be derivatives of existing languages. For teaching there will still be languages like Scheme. In terms of usage in industry, there will be an object-oriented programming language of the C family, and my guess is that eventually Java will become preeminent. And there will be a derivative of FORTRAN that is specifically designed to do large-scale scientific calculations. Finally, I believe that some sort of scripting language might eventually move into the top three. In a sense, all of these exist today, but they will continue to evolve over the next decade. We have C, C++, Java, and FORTRAN, but they will be very different ten years from now. For example, FORTRAN will have a lot of the features that are in more modern languages, like object orientation. It already supports pointer-based structures and data abstraction now. Scripting languages also exist, but there hasn't been a consensus on a standard scripting language. I believe that will happen in the next decade, driven by the PC world.

Liskov: Probably COBOL, although I think that COBOL will gradually die. FORTRAN will probably continue due to the way that it has moved into the parallel codes area. And maybe Java will actually make it. With Java, I'm thinking not of how many lines of code exist, but of what new lines of code will be written in.
MacQueen: It depends on what audience you are
talking about. It's likely that the broad community of
software developers will remain in the C family of
languages; there will probably still be very many developers using C and C++. Java will begin to have a
very high profile in that community, as sort of a second-generation, simplified, object-oriented member
of the C family. My personal hope is that languages
like ML and that type will gain more adherents. But
for the time being, use of these languages will still be
mainly within the field of symbolic computing. It's
hard to anticipate any totally unforeseen language
designs. Although, there's always the possibility that
someone will come up with either a brilliant and successful design, or a not-so-brilliant, yet still successful design. There are circumstances outside merit and
quality that play a role in the practical adoption and
evolution of programming languages. It's often a matter of
marketing and being in the right place at the right
time, rather than technical competence.
Sammet: Well, I think that two of them will be
FORTRAN and COBOL, simply because there is so
much existing code. So I'll stop at two.


Sethi: You don't see much FORTRAN; LISP is


declining .... In the last ten years LISP has declined
much more than I would have predicted. It is really
hard for me to say what the specific languages will be
ten years from now.
Smith: I'd say the languages that will be in widest
use will be COBOL, C, and FORTRAN-in that order-which is a sad statement. I'd like them to be a
parallel language I haven't seen, another parallel language I haven't seen, and Java-in that order.

Steele:

One of them will be called FORTRAN


something, one of them will have C in its name, and
one of them will be called Java something or other,
but I have no idea what they will look like. They
don't exist, but brand names will be hijacked or
propagated or carried on. I can't guess what they will
look like. I couldn't have guessed ten years ago that
Java or C++ or Visual BASIC would look the way
they do.
Stroustrup:
They will be existing languages, or
languages that are close derivatives of existing languages. Three is a hard number because it is too low.
There will be a language derived from COBOL, one
derived from Pascal (probably Ada), a language derived from C++, a language derived from
FORTRAN, a language with close integration with
user interfaces and databases, aimed at application
builders with minimal interest in programming
(probably a Pascal, Visual BASIC, or C++ derivative). If I really had to pick just three I guess I'd have
to plunge for OO-COBOL, C++, and VB++.
Tanenbaum: Probably some variant of C/C++,
some variant of Java, and some AI language.
Wegman: I would guess that ten years from now
the most important programming language will still
be COBOL, the next would be C++. Then Smalltalk
or Java; it's not clear which. It will clearly be an existing language. How quickly Java will climb the
chain is not clear. How quickly COBOL will decline
is not clear. The answer to this question depends on
your definition of important. If you count the lines of
COBOL written today, my guess is that there are more
lines of COBOL written today than any other language. If your definition of important is how much
money is spent on it, and how much time is put into
it, I don't know for sure, but my guess is that COBOL
is still ahead.

It's kind of frightening, because I remember 20 years ago I didn't think COBOL would last that long, and I
was clearly wrong, so now I'm kind of jaundiced.... And given that I predicted long ago that it would disappear, I'm a little loath to make that prediction again.

With respect to programming languages, what advice would you give somebody about to enter our field as:

a) an undergraduate student
Aho: From the perspective of developing a student's
career, and giving him the background to make him a
developed professional in his field, I would strongly
recommend learning software engineering and learning how programming languages can support the
software engineering process. Also I would strongly
recommend that students learn more than one language.
Allen: First of all, they need to understand a variety
of these languages. Secondly, I think they need to
appreciate how these languages are used. I don't think
we understand well enough the environment in which
the people who are using them every day exist. And
they should become skilled in a couple of very different languages.

Dennis:

Learn as much as you can about every-

thing. Get a broad education.


Ferrante:
I would say, learn the basics. If they're
going to be in the field and going to go off and get a
job, they need to be able to understand the concepts
and be able to solve problems using programming
languages. And they have to have a good understanding of systems, that will remain flexible as they
go on.

Goldberg: I

have a very strong feeling that it's


important that people in, not just undergraduate, but
graduate education as well, understand that what they
have to learn is not how to program, but how to build
systems. And they have to have a profound understanding about complexity and managing complexity;
understanding what's inherently complex, understanding what's unnecessarily complex so that they're
really measuring their design capability. Students


really have to get a handle on that. The rest of the


things they study follow straightforwardly from that. For
the undergraduate, on top of that, it's very important
that they understand that a computer is a tool and it
ought to be a pretty natural part of just about every
subject area they're involved in, in such a way that
they can choose not to use it. Right now, people kind
of artificially use computers in some things.
Kennedy: I would encourage them to understand the general principles underlying programming languages. In my career I've used five different programming languages, and I wouldn't have been able
to do that if I hadn't understood basic principles. In
addition, since every program will be in some sense
the implementation of a programming language,
every student should know a lot about how to implement programming languages, because user interfaces
will require it. I don't know that there's more advice
to give them, because they will be taught what their
university believes are representative programming
languages. Every student wanting to go into the industry today will need to know C++ and probably
Java. And as I said, eventually Java will replace C++
or there will be some sort of fusion.

Liskov: The advice I give undergraduates is: the


language is not as important as the concepts. The
most important thing is to learn to think clearly about
programs; how to design them, how to implement
them effectively, how to reason about them. And a
programming language is something you can always
pick up later. So I really disagree with an educational
philosophy that teaches particular languages so they
can go out and get jobs.
MacQueen: For undergraduates, I would advise
trying to learn the principles: trying to not get too
caught up in the technicalities and the nitty-gritty
complexities of particular languages or environments
or systems. And not to concentrate on the ephemeral
mastery of today's technology, but to try to understand the lasting and fundamental principles that will
continue to be applicable to programming languages
and the process of programming over the next several
decades.

Sammet: I'll deal with both the undergraduate and the graduate at the same time. I think the key language for them is Ada, because it is a very large language, and has a great many features and paradigms of which a few show up in other languages. So my advice to both of them would be that that is where they ought to concentrate their efforts and be sure they understand Ada. If a person knows Ada very well, they are in a better position to pick up and use and study other languages.

Sethi: I believe that although there are hundreds of languages, the concepts that they are based on are relatively unchanging. And so getting a firm grounding in what the basis of a language is and how they are put together is very helpful, because if languages are going to change, then people need to be able to adapt, and it helps to adapt if you know the basics, and don't end up becoming a religious fanatic for any one language. (laughs)

Smith: What I'd suggest to an undergraduate student, if they're interested in this field, is to expose themselves to a variety of languages, and try to learn as many as they can.

Steele: If I give them advice, I'll be telling them how to come out the way I did, and I don't think we need more clones of me; we need people with different ideas. I will recommend the same thing I would recommend to any other student, and that is to experiment; try a broad range of experiences; try a bunch of stuff, study economics, study government, study natural languages, study the history of Japanese civilization. You never can tell what will be relevant. And the trouble with broadcasting advice is you get a lot of people aligned, and what we really need is a whole bunch of people to try a bunch of things.

Stroustrup: Learn at least three programming languages well. That means completing at least a month-long project in each. Preferably these projects would be aimed at producing a working system rather than simply pleasing a professor's notion of aesthetics. Wherever possible, use part of your summer to work on these projects; it can help finance the rest of the year. Also, get into the habit of describing what a program does, and why, in plain English. It doesn't matter what language a program is written in if the description of its purpose and use is incomprehensible.

Tanenbaum: Be sure to spend lots of time programming.

Wegman: They should learn to write their programs so that they are maintainable and modifiable over time. I think there are ways to do that in COBOL and the more advanced languages. They should learn from the more advanced languages the techniques that they are going to apply.

Wulf: Learn as many different languages as you can because they represent different paradigms of computing, and your vocabulary of problem solving approaches will be enriched by doing that.

With respect to programming languages, what advice would you give somebody about to enter our field as:

b) a graduate student
Aho:

If this is a Master's student, I would give the


same type of advice I would give the undergraduate
student. If this is a Ph.D. student, then it depends on
the research area he is in, and then my advice would
be the same as that for the researcher; that I hope
there would be some potential von Neumann who
would look at this issue of software engineering, and
particularly this issue of software reliability-I have a
talk whose title is, "How Reliable Can We Make
Software", and we're nowhere near having the same
kind of understanding of constructing reliable software as we have of constructing reliable hardware, or
of conducting reliable communications over a noisy
channel. So I think there are very difficult, but very
important research issues associated with the production of robust software.
Allen: I guess I would say the same sort of thing I'd
say to an undergraduate.
Dennis: Use the language which is most appropriate for your research objectives, which could be a
wide variety of languages.
Ferrante: I would say, question everything. Try to
get out of the usual mode of languages as we know
them, and try to think about the future.
Goldberg: For graduate students, the dichotomies
between programming, tool building, and system
building-I think they need to understand that better
than they do today. And kind of mixed in with that,
and I'd probably start this at the undergraduate level,
people really shouldn't think of themselves as learning and doing in isolation. They have to have a better
handle on what it means to take on responsibility as a
member of the team.

45

Kennedy:

I think they need to understand things


on a broad scale, and they need to know a lot about a
wide variety of programming languages. But I think
they also need to understand that if they're going into
the implementation area, they have to work with users
who are committed to a certain language; there are a
lot of people who've sneered at FORTRAN, for instance, but there are a large number of users for
whom it's an ideal match, and they care about performance. The design of new languages is often not
the solution to a user's problem. They often have a
different problem, which is getting the right facilities
available in the language they want to use.
Liskov: I would probably discourage them from
entering the field. It's not clear that the work done in
the research area has that much of a practical impact.
You might be better off thinking about other things to
work on. If you're going to develop programming
languages though, it's good to have a very solid grasp
of what they are good for. So an awful lot of programming language research is features in search of a
need.
MacQueen: I think my advice for undergraduates
applies here also, but if graduate students are specializing in the programming languages area, perforce
they must concentrate on fundamentals and innovation rather than just pursuing and learning the ephemera of current day technology.
Sethi: I think there I would say that it's very important to look at, if you are designing a language,
who are you designing it for; what are the applications; how will the language be used. I'm thinking of
some of the applications I've seen here at Bell Labs.
There's been a tremendous amount of excitement
generated by some of the visual interfaces that people
have been using. And easy-to-use interfaces have
expanded the use of computers and have given increased ability-have given increased power to a
whole lot of people. So I guess for graduate students I
would say, "Watch out that you are solving the problems of tomorrow rather than working on the hard
problems that remain that may not have a whole lot of
influence on the field."

Smith: If a graduate student is broadly educated in


a few areas of application of these languages, I think
they're better off. So besides studying the parts of
computer science or computer engineering that are
relevant to his or her field, he should also endeavor to
understand what these languages are used for in a few
other fields. So the graduate student should prepare


himself or herself to do that by learning about partial


differential equations, or databases, or whatever.

Steele: Now we are getting more focused. They have already decided to enter the field in some sense, so the best advice there is: Study the past. "Those that don't study history are condemned to repeat it." Study the great designs of previous programming languages; ALGOL 60 is an example. Study SIMULA 67, study COBOL. Also, go to conferences, get involved with ACM and IEEE and other organizations that can provide you with the knowledge you need to know. The important thing is to soak up information. Soak up knowledge about the things that have been done in the past, and what went right, and what went wrong with them.

Stroustrup: Often a graduate student knows more about less than an undergraduate. Take a chance or two to look up from your professor's current interest and the latest fad, and look at something completely different. Try helping a biologist, an architect, a historian, or an accountant. Anyone but a computer scientist. These people will usually not know the latest fad in programming techniques, but they have real problems that just might be a good testing ground for your latest idea or-even better-a source of ideas that comes from outside the closed world of computer internals and academic discourse about it.

Tanenbaum: Be sure to spend lots of time thinking, not programming.

Wegman: Do something dramatic and new. I think at the moment that people are just following the direction that was set long ago.

Wulf: I would, first of all, repeat the advice I gave to the undergraduate, and add to that that the problems to be solved are large. Sometimes programming languages may be a way of codifying how to solve those larger problems, but don't get trapped into thinking that programming languages, by themselves, independently, are a research problem. They're only a means to an end.

With respect to programming languages, what advice would you give somebody about to enter our field as:

c) an industrial employee

Aho: I would suggest that they gain knowledge of the best current practices in the software life cycle and encourage their company to migrate to those practices. Even that, I suspect, would have a significant improvement on the quality of their software-if these best current practices were uniformly used within their company. I suspect it would also reduce the cost of their software development.

Allen: I would say they would have to become very skilled in one or two of the widely used languages and the environment in which they exist. I'm really not quite sure about that. It depends on what the person's goals are, of course.

Dennis: It depends upon the context of your work.

Ferrante: Think about how to make tools that will really help as the underlying architecture in language changes.

Goldberg: If our undergraduates and graduates understood what I said to them, I think they'd make better industrial employees. The real trick for them is understanding customer value, and understanding that when you build a system, you build a system to do what your customer needs. So there has to be better traceability, so the function of the system you build traces back to the requirements of the customer.

Kennedy: Keep

reading and keep learning, because there will be new programming languages
which will supplant old ones, and some of these will
be more powerful.

Liskov: You have to rise above your tools. So you


may be forced to write programs in inadequate tools,
but if you discipline yourself, and think about your
programming methodology and your software engineering practices, you can make do.
MacQueen:

For practitioners, I think the main

advice is to develop a more critical attitude toward


programming languages and programming technology. There's a lot of marketing hype that one has to


be able to see through if one is going to try to evaluate where one can really achieve payoff of increased
quality and productivity, and leverage one's tools. I
think that for many practitioners their fundamental
tools have stagnated for two decades. So they should
be looking for some kind of breakthrough to a significantly higher level of productivity, delivered by more
modern programming tools, and more modern programming languages in particular.

Sethi:

Well, what we've been doing in an indus-

trial setting, we've been talking a lot with our colleagues that are doing program development and exposing them to how programming languages and
compilers can be used as tools to make the job easier.
So it's not just what programming language are you
using, but for the domain you are dealing with, are
there specific languages that could be designed or
tailored that would dramatically improve productivity?

Smith:

This is an industrial employee who is engaged in the development of compilers, or interpreters, or new languages or whatever? Presumably this
person is educated and able to do some of this, although if they're about to enter the field, they don't
have much experience. There's one piece of advice
everyone should have, I guess, and that is: language
translation in any form is a compromise between the
efficiency of translation and the efficiency of the executed code. So perhaps they should keep that in mind
as they start to develop such things.

Steele:

Buy my books? (laughs) Seriously, I think

the same advice (as a graduate student) although in an


industrial situation it is necessary to be focused even
more narrowly on the specific task at hand. And as
one goes on in life one tends to trade in generality for
specialty. And again, education is key. I find education an important theme throughout my life. True, I
loved being a student, but I find myself constantly, in
the course of my own work for companies, as well as
when I was a professor earlier in my career, that the
key thing was constantly to be looking up new information to find out what other people are doing. Go to
conferences; read. And now I add to that, search the
web. Sometimes I get into AltaVista, pick a keyword
that's related to whatever it is I ' m doing, and do a big
search and see what's out there. Spend some time
skimming.

Stroustrup: That depends.

My basic inclination
is to recommend learning new things, poking into

47

areas not directly related to the work in hand, and


keeping up non-corporate, non-technical contacts.
Unfortunately, in the short run that can, in many
places, be hazardous to someone's survival of the
next re-org, or their standing at the next pay raise, or
the next promotion. On the other hand, it can be more
fun; and on average it works better in the long run
than a firm focus on the current project. However, an
industrial employee is not as free as a grad student to
pursue his or her inclinations without negative effect.
A mortgage or a couple of kids focuses the mind wonderfully on short-term goals-often at the expense of long-term goals.

Tanenbaum:

Be sure to spend lots of time de-

signing.
Wegman:

Take what you have learned and adapt it

to the new situations that you find.

Wulf: I don't have an answer different from the undergraduate answer.

With respect to programming languages, what advice would you give somebody about to enter our field as:

d) a researcher
A h o : Same as graduate student.
A l l e n : I would say this: developing new languages
is notoriously hard, especially to make them popular;
to get them widely used. And people get discouraged
about doing language research in part because of that.
But on the other hand we have recently had some
success with Java, and have had some other recent
successes. The needs are so great right now that we
really should work very hard in these areas. But it has
to be done from a needs point of view, from a "How
is this language going to be used" point of view.
That's really why Java has become so popular. I believe. I know it isn't popular with everybody. But we
are coming to a time for computing on the network,
and this is a language that is probably going to help
us do it. If it passes the test; hopefully it will. A language for our time.
Oelqlqi~: It depends on your research project. If you
are doing research in programming languages, the

Sixteen P r o m i n e n t C o m p u t e r S c i e n t i s t s A s s e s s Our Field

important areas are the open problems that I mentioned earlier. A researcher in programming languages has to know the whole field; so you must be
acquainted with object-oriented languages and functional programming languages; you have to know
their type theory, and so on. If one is not in programming language research, per se, then it depends on the
area of research or the particular project you are involved in.

For researchers I would put the challenge


"How would you use language to dramatically improve productivity?"

F e t P a l a t e : About the same advice as the graduate


student. I ' d also say, think about some of the questions like the question you asked. "How will things
look in the future?" Things like that.

Goldberg: Now we're into a very strong opinion of mine. Research is about the right to fail. In computing especially, too many researchers are not permitted to fail; they actually have to create something that will be patentable and become a company. And that, I think, has stifled advances in computer science for a very long time. What we ought to say is, "Fail early, fail often," right? There are actually some research groups that use some variation of that as a motto, and I think it's smart. Researchers don't create products. That's a mistake that people make. If people are doing research because the research is supposed to create products, I think there's some confusion on skills, goals, attitudes, the whole bit. When I was doing research, I didn't view myself as creating a product. When I actually started getting into product mode, I knew that I had stopped doing research and had started doing advanced development. Then that fell into the category of industrial developer.

Kennedy: Since my area is programming language implementation, I will restrict my comments to that. The important issues in programming language implementation have to do with providing adequate efficiency for very high-level abstraction on a broad variety of machine configurations, and there's plenty of work to be done in that arena.

Liskov: Make sure you are solving a real problem and not just playing games.

Sammet: That's fairly easy. The answer is that you can't give that person advice, because it depends too much on the research that that person is doing. The nature of the research dictates the programming language. That's why we have so many!

Sethi:

Smith: As far as a researcher is concerned, I think every researcher in the field ought to consider the opportunities involved in making parallel programming easier. I don't think we've paid enough attention to that in the field.

Steele: Well, I'm going to break the rule that I stated previously. I think I explained what I did. Wanting to become a researcher, I spent a lot of time in the library and read every back issue of Communications of the ACM, and made sure I read the abstract of every article, if not the article. Now that was a lot easier to do 25 years ago! I also read every article that Martin Gardner had written in Scientific American, which, again, was a lot easier back then, because he hadn't finished writing them all. It's hard to give advice that is going to be helpful to every person out there. Learn a lot; explore what interests you. Don't turn up your nose at any particular subject area because you're sure it's irrelevant. The only thing I'm sure of is that things may be more or less relevant, but it has hardly ever turned out that something is totally irrelevant. And I'm constantly amazed at what subject areas I draw upon these days in my research that I had learned 20 years ago and never thought would help me out now, but I had studied them then just for fun.
Stroustrup: Same advice as to a graduate student. Maybe even more strongly. However, the pressure of a family or the pressure to succeed, as measured by external forces-publish or perish, get tenure or leave town-can be quite deadening. It is easy to recommend taking a risk, but that's hard advice to take. Also, do try to write papers that are comprehensible to more than a small group of professors and graduate students.
Tanenbaum: Be sure to write lots of papers.

Wegman: Do something interesting and new, but understand the implications of getting what you have done into real practice. Understand the tactical plans to get your ideas to make a real impact on society.

Wulf: I would say to any researcher, be sure the problems you are addressing are problems somebody cares about the solution to; somebody other than people in your own niche.


Do you have any comments you would like to add regarding the questions we have just gone over, or your insight into the programming languages field's past, present or future?
Aho: I found these questions somewhat interesting from the perspective of someone who teaches software engineering. If one studies a natural language,
such as English or French, one can study the syntax
and semantics of the language, but on the other hand
the more interesting arena is the literature and practice of that language in the context of society. And I
think that maybe what we need to do in the programming languages arena is to look at the practice of using a language in the context of the software life cycle. And there I think we will get a different perspective on the role of programming languages in the development of software.
One of the questions that I might have put on this list
is, "Where is the literature of great programs? And
what language are those programs written in?" Don
Knuth and I have talked about this for a long period
of time. When I last asked Don what he felt his most
significant paper was, he picked "Literate Programming." And if you've ever had to talk about programs, or software in general, you realize what a difficult task this is. Knuth had developed the TeX typesetting system and wanted to be able to document the software, the systems, and so on. For many years he has been talking about how you write about algorithms; how you write about programs. He wanted to develop some
tools with which one could talk about systems, and
treat these systems as case studies, things that students and other professionals would study for their
intrinsic merit. So he spent a great deal of effort on
creating quality literature about the TeX system. And
this is the rubric of literate programming. How do you
write and how do you produce software that can then
serve as a paragon of clarity or style, or whatever?
Software to be emulated. And I don't think there has
been enough stress on producing model programs that
can then be studied for a variety of reasons; used for
case studies of very well designed systems.
The problem with many real life systems is that if
they are successful, they evolve; and even if at one
time they were clean and elegant in their architecture,

49

through the passage of time they become more and more garish; that original clarity and elegance of the
architecture gets lost. That's why systems become
unmaintainable, if this is continued for a very long
period of time. So these are some of the very important pragmatic issues that can come up in the evolution of real-world systems. We haven't found good
ways of maintaining that conceptual integrity or elegance that systems once might have had, assuming
that they did have them in the beginning. And many
people who talk about software or programming usually focus on relatively small program fragments,
whereas if you are talking about a hundred-thousand
or million line system, that's a completely different
experience, and what we'd like to be able to do is see
the architecture of the town, as well as the architecture of the buildings, and there often isn't a graceful
transition in going from the system level to the module level.
When I was at Bellcore, I visited many of the nation's leading software vendors, and I asked them one question: "What's your company's Software Quality Plan?" And that sort of immediately
separated the companies into three groups. One group
of companies said, "Software what?" Another group
said, "Oh, we started testing our programs last week."
And a much smaller group said, "Great question! We
started working this problem a decade ago, and this is
where we're at in the process." And it really showed
in terms of the quality of the products they put on the
marketplace; the impact of having a good intelligent
quality plan.
The answers you get to these questions are going to
be very heavily biased by the people you talk to. But
when I speak to people in the programming languages
area, there is often a sharp dichotomy between the
practicing programmers, who have to build a product
that works, and the researchers and academics who
have somewhat different perspectives on programming languages. Have you noticed this in your interviews? If you talk to the folks who have to produce working artifacts, do they have radically different views from the researchers or academics?
Allen: One other thing I'd like to mention, to emphasize, is that we really have to understand, deeply,
what the issues are in language and compiler development. In particular, applications need to be maintained; very often programs have very very long lifetimes. Debugging and testing is an absolutely critical
issue; more money is spent on that than on program
development, across the industry. And somehow I
feel that we in compilers and languages have focused too much inwardly on how we improve what we

have without understanding the context, and the major software problems that are surrounding our work.
What I ' m saying is that we may not be solving the
right problems.
Dennis: Thanks for giving me the opportunity to stand on my soapbox and say a few things, particularly about the problem of encompassing concurrency
and supporting parallel computers as being the main
problem area in programming languages.
I would like to point out that there is an important
computer science problem that is larger in scope than
programming languages itself: making a good accommodation between computer architecture and
programming languages. At this time, programming
languages are still generally used only to describe
what goes on in the main memory of the computer;
programming languages do not provide support for
making use of operating system facilities, other than
library calls. The operating system facilities make up
an enormous part of what a contemporary
application is up to. File system use, concurrency
primitives, message passing, synchronization, protection concepts-these are all utilized through the operating system and are not normally represented in programming languages. The related problem in computer architecture is the fact that the addressing
schemes built into the hardware are not universal; this
makes it very difficult for a compiler or programming
language designer to address these issues and expand
the scope of programming languages to what it should
be. A programming language should encompass all
the facilities that a programmer needs to express an
application, and at the present time they do not do
this, because they do not cover the facilities provided
by the operating system. A major revolution is
needed, but I don't see how the industry will move in
the needed direction. The revolution concerns
changing the addressing schemes of machines to a
more universal kind of addressing of data, an addressing scheme that will permit the sharing of arbitrary data between users and between application
programs.
Ferrante: It's really been an amazing 50 years,
and what seems to be happening now is that the rate
of change is just accelerating so quickly. I think that's
going to have a big effect on the field, too. It's going
to really make a change in ways that we can't envision right now, perhaps because the emphasis on
change and integrating is going to have an effect on
business as usual. We've gotten to a point where the
change is so rapid that it's got to be a primary consideration for everybody.


Goldberg: The thing that has been absolutely true about programming languages is that they've allowed us to use computers for a wider variety of application areas and allowed us to manipulate graphics, and literally be able to see more, which is really exciting. We've really changed the landscape of how people
work and live, and how they are taken care of in the
health services, as well as legal and other areas. And
our technology today is going to change the whole
educational area; and our people need to understand
that, no matter where they're at. Whether they are in
school, or whether they're in industry, or in research;
they have a very large responsibility to think through
what they build and what it means to that changing
landscape, and to be a little more purposeful, or at
least more thoughtful about it.

Kennedy: I would re-emphasize something I already said. Compilers are a way to make programming languages effective and usable. There's long
been a sort of golden dream of programming languages to provide a very high-level interface to really
increase productivity for the individual programmer.
I don't think we should give up on achieving that
goal. It's going to come because of progress in the
architecture of fast machines and all we have learned
about how to analyze and improve efficiency for programs as a whole.
In the near term, two things may happen: we're going
to be able to implement very high level languages,
even domain-specific languages, and properly designed programming languages may make the individual user of personal computers into a potent programmer. Then we will really begin to see increases
in software productivity.
MacQueen: I think programming languages is a
fascinating field, because it's the first time that human
beings have set out to consciously create artificial
languages, with the exception of phonetics for a few
languages like Esperanto. The creation of artificial
languages is an exceedingly interesting pursuit. And
the development of programming language is still in
its infancy, although we have a long heritage from
mathematics, in the formation of notations and formal
ways of description and reasoning, and so on. But we
have a lot of progress to look forward to; a lot of new
understanding to achieve. I think that's the real prospect of the field.
Sammet: Yes. There is another point that historically I consider important, because a great many of my computer science friends and colleagues don't know about this. Over the past 40 years or so, roughly speaking, over half of the programming languages
that have been created have been created for very,
very narrow applications. For example, not only are
there languages for civil engineering, but there are
languages for subsets of civil engineering. There is a
language called COGO which is for coordinate geometry. There is a language called ROADS that is
literally a language for helping people design roads.
There are languages for statistics; there are languages
for pharmacology students. Most of the computer
science people are not aware of these languages, and
when they become aware of them, they tend to turn
their noses up. These are very narrow languages created by people who are interested in a narrow field,
and want a language to help them. And, for a great
many years while I was tracking this-I'm not now, but I have no reason to think that it has changed-these represented over half the languages that were in
use at any one time.
Very narrow languages, generally speaking, will not
be taught in computer science courses, and that's reasonable. But it would be nice if students were exposed to the existence of these types of narrow languages. I think that a student is best served by learning a half dozen programming languages. Even the
differences in LISP and Prolog, both of which are
effective for artificial intelligence. Or the varying
kinds of object-oriented languages. I think that a person really ought to view programming language
studies as a sort of buffet; take a little of everything
so that they get exposed to a great many languages.
The chances are that these will not be the very narrow
kinds that I am talking about, because they will only
be useful to the people working in those disciplines.
Unless a person is going to be a Civil Engineer, for
example, he may never see these types of languages.
But the existence of these narrow languages is natural.
There are several reasons why there are so many programming languages, but there are two major ones.
The first major reason is functionality. That is to say,
the language is best suited for certain types of applications. I wouldn't want to write a payroll program in
LISP, which is a list oriented language, suitable for
artificial intelligence. Nor would the artificial intelligence people want to write their programs in
COBOL. So there's a different functionality that is
needed for the different application areas. And the
broader the application area, the broader and more
powerful the language has to be. That's why Ada is of
necessity so large and powerful, because it had to
cope with all of the different applications that exist in
the Department of Defense.

The second major reason that there are so many programming languages is personal preference and style.
Programming languages have very different styles
depending on the opinions and views of the people
who are developing them, or causing them to be developed. And therefore, when you get down to these
very narrow application areas, these specialized languages exist.
Let me give you an interesting example that might
help you understand why there are these specialized
languages: for many years I gave talks on programming languages. And I always gave examples of a
couple of these very narrow languages. One of the
examples I used was COGO. After one of my lectures, somebody came up to me and said that COGO
was very useful to him because he was a civil engineer, and the notation in COGO was just the type of
notation that Civil Engineers use in their written discourse. This was surprising to me, because I had no
idea that the notation in COGO was based on Civil
Engineer's written discourse or oral discussions, because I had never talked to a Civil Engineer or participated in their discussions. But that is why there are
so many of these languages in these specialized application areas, because they use notation and words and
phrases and approaches that are relevant to the application area. Languages like COBOL, and FORTRAN
and LISP are dealing with very much broader areas.
Even though there are substantive differences in these
broad languages, and they have different capabilities,
nevertheless they are intended for much wider and
broader applications.
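Sammet's COGO anecdote shows how small a domain-specific notation can be while still matching a practitioner's working vocabulary. As a rough sketch only (the command names and semantics below are invented for illustration; they are not actual COGO syntax), a coordinate-geometry mini-language can be interpreted in a few dozen lines:

```python
import math

# A toy interpreter for a COGO-like coordinate-geometry notation.
# The commands STORE, LOCATE, and DIST are hypothetical, chosen to
# echo the flavor of surveying discourse, not real COGO keywords.

points = {}

def run(script):
    for line in script.strip().splitlines():
        cmd, *args = line.split()
        if cmd == "STORE":            # STORE name x y
            name, x, y = args
            points[name] = (float(x), float(y))
        elif cmd == "LOCATE":         # LOCATE new from bearing_deg dist
            new, frm, bearing, dist = args
            x, y = points[frm]
            rad = math.radians(float(bearing))
            # Surveying bearings are measured clockwise from north.
            points[new] = (x + float(dist) * math.sin(rad),
                           y + float(dist) * math.cos(rad))
        elif cmd == "DIST":           # DIST a b -> print the distance
            (x1, y1), (x2, y2) = points[args[0]], points[args[1]]
            print(f"{args[0]}-{args[1]}: {math.hypot(x2 - x1, y2 - y1):.2f}")

run("""
STORE A 0 0
LOCATE B A 90 100
DIST A B
""")
```

The point of such a notation is exactly the one Sammet makes: a civil engineer can read "locate B from A on a bearing of 90 degrees at distance 100" directly, without translating into a general-purpose language's idioms.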
Smith: I think we suffer a little bit from a macho
sub-community that thinks that programming in the
large is just as easy as programming in the small, and
we don't need programming languages that are more
safe than what we have. I think that's wrong. We have
a major software development problem simply because the computer architects and the compiler writers have done what was expedient rather than doing
what will allow and enhance the ability to engineer
large systems, i.e., large collections of code. We need
to pay more attention to the things in our programming languages and their implementations that enhance our ability to write large-scale software systems
and know that they work.
Steele: We've progressed far enough in programming language design that we have a number of very good tools at our disposal now. And it's really hard to get a new programming language accepted purely on its technical merit as a programming language, because now it's really hard to be enough better than

what we've got now to be worth people's trouble to switch. So I've observed over a period of time, five,
ten, 15 years, most new programming languages have
been accepted because of some additional force that
has made them attractive. People have picked up on
Java because it is not just a new programming language but provides additional facilities to help solve a
particular problem at hand having to do with network
programming; the necessary security procedures. C
became widespread not because it was superior to
FORTRAN or PL/I or COBOL, but because it was
the vehicle for a relatively portable operating system
called UNIX. That was really important to people.
Ada succeeded because it had a certain amount of
political and social pressure behind it. So I think that
a switch to new programming languages requires the
appearance of a new application area that makes it
worthwhile to do a reset and start over.
Second, once you've done that reset, it's probably
important to start designing a language to be initially
small, and then grow your language as you grow your
user community, because it's too hard to swallow a
big language all at once. So, for example, I don't
think that Java could have caught on if it had been as
comprehensive-read "as complicated"-as PL/I or
Ada. Now in fact, Java has ended up looking a lot
like a subset of Ada. It's not that Ada had bad ideas,
it's just that it had a lot of other stuff too. And Java
may grow a lot over the course of time, which adds a
lot, because its user community can grow along with
it, both in numbers and educationally speaking.
Eventually we will reach the time when Java-either
the language itself or its surrounding libraries, programming environment, conventions, social culture
and so forth-have grown so large that it will be difficult for a new young person to approach it and be
able to grasp all of it and be effective with it. And
there will probably come another revolution where
another small language comes along and says "This
Java thing has become big and overgrown, and no
one can deal with it, and why don't we start over
again and do something small and simple and beautiful." And it will be slightly different in some way
that... I don't know what will be important in ten or
15, or 20 years from now when this happens. I guess
the point of this ramble is that I now believe in programming language design as a process that is cooperative with the evolution and education of a group of
users, rather than the idea that you will design this
fixed beautiful diamond that will stay the same for all
time and will be the right thing for everyone to use.
Stroustrup: Even for an interview of a language designer for a journal dedicated to programming languages, these questions are too focused on programming languages and language-implementation techniques. What really matters is programming and understanding of problem areas. The language used to
express the ideas is secondary. Wherever possible try
to look at the tasks faced by programmers delivering
systems to non-programmers. This is the area where
the hardest problems are, so this is where the most
interesting solutions are likely to come from.
Tanenbaum: The golden rule of programming language design is (or should be): KISS-Keep It Simple, Stupid.

Wulf: It's been one hell of a ride! Being involved in computer science through this period has been an extraordinary experience. And I expect it's going to
continue to be an extraordinary experience. The impact that information technology is having on society
is simply amazing. So it's fun being involved in this
field. I can get all worked up on that topic!


A Brief Look at Our Participants
Alfred V. Aho became professor and chair of the Computer Science Department at Columbia University in 1995. From 1991 to 1995 he was General Manager of the Information Sciences and Technologies Research Laboratory at Bellcore in Morristown, New Jersey. The work of this laboratory was
directed at advancing the national information networking infrastructure. From 1987 to 1991 he was
Director of the Computing Science Research Center
at AT&T Bell Laboratories, Murray Hill, New Jersey.
Inventions of this center include the UNIX operating
system and the C and C++ programming languages.
Dr. Aho received a B.A.Sc. in Engineering Physics
from the University of Toronto and a Ph.D. in Electrical Engineering (Computer Science) from Princeton University. Upon graduating from Princeton, Dr.
Aho joined Bell Laboratories in 1967 as a Member of
Technical Staff in the Computing Techniques Research Department, and in 1980, was appointed Head
of the Computing Principles Research Department.
He has also been an adjunct professor of Computer
Science at Stanford University and at the Stevens
Institute of Technology.
Dr. Aho's personal research is centered on multimedia information systems, database systems and query
languages, programming languages and their compilers, algorithms and the theory of computing. He has
published more than sixty technical papers in these
areas and ten textbooks that are widely used worldwide in computer science research and education. He
is a coinventor of the AWK programming language
and other UNIX system tools.
Dr. Aho has received numerous awards including
Fellow of the American Association of the Advancement of Science, Fellow of the ACM, Fellow of Bell
Laboratories, and Fellow of the IEEE. He has received honorary doctorates from the University of
Helsinki and the University of Waterloo for his contributions to computer science research. He has been
a Distinguished Lecturer at many of the world's
leading universities.
Dr. Aho is active on a number of national and international advisory boards and committees. He has
served as Chairman of the Advisory Committee for
the Computer and Information Sciences and Engineering Directorate of the National Science Foundation. He has also been Chairman of ACM's Special Interest Group on Automata and Computability Theory and a member of the Computer Science and Telecommunications Board of the National Research
Council.

Fran Allen is an IBM Fellow at IBM's T.J. Watson Research Laboratory and was the 1995 President of the IBM Academy of Technology. Ms. Allen
specializes in compilers, compiler optimization, programming languages, and parallelism. Her early compiler work culminated in algorithms and technologies
that are the basis for the theory of program optimization and are widely used throughout the industry.
More recently she has led compiler and environment
projects resulting in committed products for analyzing and transforming programs, particularly for parallel systems.
Fran is a member of the National Academy of Engineering, a fellow of the IEEE, the Association for
Computing Machinery (ACM) and the American
Academy of Arts and Sciences. She is on the Computer Science and Telecommunications Board, the
Computing Research Association (CRA) board, and
NSF's CISE Advisory Board. She holds an honorary
D.Sc. from the Univ. of Alberta.
Jack B. Dennis has been an active faculty member in the MIT Department of Electrical Engineering and Computer Science since 1958, and is now
Professor Emeritus of Computer Science and Engineering.
In 1963 Prof. Dennis formed the Computation Structures Group (CSG) in the MIT Laboratory for Computer Science and led the formulation of the dataflow
model of computation that has influenced computer
architecture projects throughout the world.
The CSG, in cooperation with the Lawrence Livermore National Laboratory (LLNL) created the Val
programming language in 1978. Val is the forerunner
of Sisal, the functional programming language that
has had the greatest influence on scientific computing.
In addition, the group has contributed in theoretical
areas ranging from the theory of Petri nets to the semantics of programming languages and formal operational models for computer systems.

Professor Dennis has supervised more than 25 doctoral research students, has published several books and numerous technical papers, and provides consulting services to the computer industry.

In 1969 Prof. Dennis was elected Fellow of the IEEE for his contributions to the paging and segmentation schemes used to solve memory management problems in multi-user computer systems. In 1984 Prof. Dennis was presented the ACM/IEEE Eckert-Mauchly award for his development of and contributions to dataflow computer architecture. He was inducted as a Fellow of the Association for Computing Machinery in 1994.

Jeanne Ferrante received her Ph.D. in mathematics from MIT in 1974. She was a Research Staff Member at IBM T.J. Watson Research Center from 1978 to 1994, and currently is Professor of Computer Science at the University of California, San Diego. Her work has included the development of intermediate representations for optimizing and parallelizing compilers, most notably the Program Dependence Graph and Static Single Assignment form. Her interests also include optimizing for parallelism and memory hierarchy, and her current work focuses on automating hierarchical tiling.

Adele Goldberg is a founder of Neometron, Inc., a Texas-based company working towards Intranet support for self-managed teams. Previously, she served as Chairman of the Board and a founder of ParcPlace-Digitalk, Inc. until April, 1996. ParcPlace created application development environments based on object-oriented technology and sold to corporate programmers. Prior to the creation of ParcPlace, Adele received a Ph.D. in Information Science from the University of Chicago and spent 14 years as researcher and laboratory manager at Xerox Palo Alto Research Center. From 1984-1986, Adele served as president of the ACM. Solely and with others, Adele wrote the definitive books on the Smalltalk-80 programming system and has authored numerous papers on project management and analysis methodology using object-oriented technology. Dr. Goldberg edited The History of Personal Workstations, published jointly by the ACM and Addison-Wesley in 1988 as part of the ACM Press Book Series on the History of Computing, which she organized, and co-edited Visual Object-Oriented Programming with Margaret Burnett and Ted Lewis. In 1995, a new book on software engineering appeared, entitled Succeeding With Objects: Decision Frameworks for Project Management, written with Kenneth S. Rubin.

She was the recipient of the ACM Systems Software Award in 1987 along with Dan Ingalls and Alan Kay, and of PC Magazine's 1990 Lifetime Achievement Award for her significant contributions to the personal computer industry; she is a Fellow of the ACM, and was honored in 1995 with the Reed College Howard Vollum Award for contributions to science and technology. She is currently a member of the scientific advisory board of the German National Research Centers, is a director of the San Francisco Exploratorium, and is a member of the Board of Directors of two private technology companies.

Ken Kennedy received a B.A. in mathematics from Rice University in 1967. He pursued graduate studies at New York University, earning an
M.S. in mathematics and a Ph.D. in computer science.
He joined Rice University in 1971 as a member of the faculty of the Mathematical Sciences Department, rising to the rank of professor in 1980. He
founded the Rice Computer Science Department in
1984 and served as its chair until 1988. He was
named the Noah Harding Professor of Computer Science in 1985.
Throughout his career, Ken Kennedy has conducted
research on the optimization of code compiled from
high level languages, especially FORTRAN. From
1970 to 1978, he worked on methods for global data
flow analysis in support of code optimizations, contributing widely-used approaches for live analysis and
reduction in strength (with Cocke and Allen). He also
invented the "node listing" iterative method for data
flow analysis, the only iterative approach to achieve
near-linear worst-case running times.
In 1978 and 1979, he spent a year at IBM Research
where he began work on automatic vectorization.
This research led to the development of PFC, one of
the earliest and most successful automatic vectorization systems for FORTRAN.
In the early 1980's Kennedy began to extend his
methods for vectorization to automatic parallelization. He discovered that the key impediment to compiler parallelization was the limitation of global
analysis techniques to single procedures. To overcome this limitation, he began work on the Rn programming environment, which was designed to support whole-program analysis in an environment that
permitted separate compilation. The work on Rn led
to the development of a number of new algorithms for
interprocedural data flow analysis, including a linear-time algorithm for flow-insensitive problems, which was developed with Keith Cooper.
In 1988, Kennedy led a group of researchers from
Caltech, Rice, Argonne National Laboratory, and Los
Alamos National Laboratory in a proposal to the National Science Foundation to create the Center for
Research on Parallel Computation (CRPC), one of the
first NSF Science and Technology Centers. He has
served as director of the CRPC since its inception in
1989.
In 1990, Kennedy was elected to the National Academy of Engineering. Currently, he is a Fellow of the
American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery.
Barbara Liskov majored in mathematics at the University of California, Berkeley. Unable to find an interesting job in the mathematics field, she
took a job as a programmer and got her introduction
to computer science in 1961.
She went on to do her graduate work at Stanford University, and was a member of the first group of students to take the computer science qualifying examination. She did thesis work on artificial intelligence
with John McCarthy. Her Ph.D. thesis was on a program to play chess endgames.
After graduate school, Dr. Liskov went back to work
at Mitre Corporation, where she had worked before
going to graduate school. She worked at Mitre for
four years, during that time switching her area of interest from AI to systems.
Dr. Liskov left Mitre to join the faculty at MIT,
where she is currently the Ford Professor of Software
Science and Engineering. Her teaching and research
interests include programming languages, programming methodology, distributed computing, and parallel computing. She is a member of the ACM, the
IEEE, the National Academy of Engineering, and a
fellow of the American Academy of Arts and Sciences.

David MacQueen is head of the Software Principles Department in the Computing Sciences Research Center of Bell Labs, Lucent Technologies. He joined Bell Labs in 1981 after five years as a research fellow at the University of Edinburgh and a year at the USC Information Sciences Institute. His research concerns the design, semantics, and implementation of functional programming languages, particularly their type and module systems. He was involved in the design and implementation of the Hope and Standard ML programming languages, and is one of the principal authors of the Standard ML of New Jersey programming system. He has been active in the programming language research community and is general chair of POPL 98. He received his Ph.D. in Mathematics from MIT in 1972.

Jean Sammet received her BA from Mount Holyoke College, and then attended the University of Illinois, where she received her MA in mathematics. She received an honorary Sc.D. from Mount Holyoke in 1978. Dr. Sammet has been active in computing since 1955, when she was the first group leader for programmers in the engineering organization of the Sperry Gyroscope Company. In 1956 and 1957 she taught some of the earliest computer classes ever given for academic credit, at Adelphi College on Long Island. During her second year of teaching, she used the just-released FORTRAN language in her classes.
Dr. Sammet joined Sylvania Electric Products in 1958, as Section Head for programming of MOBIDIC, a large computer of that time. She was a key member of the COBOL committee from 1959 until she joined IBM in 1961.
At IBM, Dr. Sammet was charged with organizing and managing the Boston Advanced Programming Department. One of the key efforts of that department was the development of FORMAC, the first programming language for symbolic mathematics.
Dr. Sammet has given many lectures and written numerous articles on symbolic computation, programming languages, and the history of both. In 1969 she published the highly regarded book Programming Languages: History and Fundamentals.
Dr. Sammet has been active in professional societies, serving at various times as ACM SIGPLAN Chair, ACM Vice President, and ACM President. She has been Editor-in-Chief of ACM Computing Reviews and the ACM Guide to Computing Literature, and was the Program Chair for both the first and second ACM History of Programming Languages Conferences.
Some of Dr. Sammet's honors and awards include membership in the National Academy of Engineering, the ACM Distinguished Service Award, and the Augusta Ada Lovelace Award from the Association for Women in Computing.


Ravi Sethi is Research Vice President for Computing and Mathematical Sciences at Bell Labs, Lucent Technologies. He joined Bell Labs in 1976.
Ravi got hooked on computers and programming as a freshman at the Indian Institute of Technology, Kanpur, in 1963. He received his B.Tech. in Mechanical Engineering in 1968 and his Ph.D. from Princeton University in 1973.
His technical contributions to compilers and programming languages earned him election as an Association for Computing Machinery Fellow in 1996 and a Bell Labs Distinguished Technical Staff award in 1984. He is a co-author, with Al Aho and Jeff Ullman, of the textbook Compilers: Principles, Techniques, and Tools (1986), known as the "dragon" book because its cover shows a dragon representing the complexity of compiler construction. He is the
author of the "teddy-bear" textbook, Programming
Languages: Concepts and Constructs (1989); a substantially revised second edition was published in
1996. He has written over fifty papers on compilers,
code generation, algorithm analysis, and the potential
applications of the semantics of programming languages.
Ravi was on the faculty of the Computer Science Department at The Pennsylvania State University from
1972 to 1976. He was a Professor at the University of
Arizona from 1979 to 1980. While at Bell Labs, he
taught courses in 1983 at Princeton University, which
led to the book on compilers, and in 1986 at Rutgers,
which led to the book on programming languages. He
serves on advisory committees for the University of
Puerto Rico, Penn State, and Princeton University.
Ravi is a member of ACM and has been program
chair and conference chair for conferences organized
by SIGPLAN and SIGACT. He has served on Executive Committees for both SIGs.

Burton J. Smith is the Chairman and Chief Scientist of Tera Computer Company. Smith is a recognized authority on high-performance computer architecture and programming languages for parallel computers, and is the principal architect of the MTA system. Prior to co-founding Tera, Smith was a Fellow of the Supercomputing Research Center, a division of the Institute for Defense Analyses. He was honored in 1990 with the Eckert-Mauchly Award, given jointly by the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery, and was elected a Fellow of both organizations in 1994. Smith received his SM, EE, and Sc.D. degrees from MIT.

Guy L. Steele, Jr., received a degree in applied mathematics from Harvard College, and went on to receive his Master's and Ph.D. from MIT. He has taught computer science at Carnegie-Mellon University, and has been a member of the technical staff at Tartan Laboratories in Pittsburgh, Pennsylvania, and a Senior Scientist at Thinking Machines Corporation. He is currently a Distinguished Engineer at Sun Microsystems Laboratories.
Dr. Steele has written several books and numerous papers on programming languages, including books on LISP and C, and The Hacker's Dictionary. Among other awards, he has received the ACM's Grace Murray Hopper Award, and led a team that received an honorable mention for the 1990 Gordon Bell Prize for achieving the fastest speed to that date for a production application.
He has served on accredited standards committees for C, FORTRAN, and LISP, chairing the committee on LISP, and was on the IEEE committee that produced the IEEE standard for Scheme. He has also served on, and chaired, several ACM awards committees.

Bjarne Stroustrup did his undergraduate work at the University of Aarhus, Denmark, and received his Ph.D. in Computer Science from Cambridge University, England, in 1979. Dr. Stroustrup is the designer and original implementer of C++, and is the author of The C++ Programming Language and The Design and Evolution of C++. His research interests include distributed systems, operating systems, simulation, design, and programming.
Dr. Stroustrup is a Department Head at AT&T Labs Research and an AT&T Bell Laboratories Fellow. He is also an ACM Fellow, and a past recipient of the ACM Grace Murray Hopper Award.

Andrew S. Tanenbaum attended college at MIT, where he received his undergraduate degree, and went on to receive his Ph.D. from the University of California at Berkeley. He is currently Professor of Computer Science at the Vrije Universiteit in the Netherlands, teaching courses on computer architecture, networks, and operating systems. He is also the Dean of the Advanced School for Computing and Imaging, and a member of the governing board of the Computer Science Department. Dr. Tanenbaum is also active in research at VU.
Dr. Tanenbaum has written many books in the computer science field, including a 486-page MINIX manual describing the operating system he wrote himself.
Besides MINIX, Dr. Tanenbaum has also written a compiler-writing system called the Amsterdam Compiler Kit (ACK), which has been used at hundreds of universities and companies all over the world to produce compilers for half a dozen languages and over a dozen machines. He is also in charge of the Amoeba project, which has produced a microkernel-based distributed operating system that is freely available to universities via the Web.
Among other honors, Dr. Tanenbaum is a Fellow of the ACM, a Senior Member of the IEEE, and a member of the Royal Dutch Academy of Arts and Sciences. He also won the 1994 ACM Karl V. Karlstrom "Outstanding Educator Award," and the 1997 ACM SIGCSE "Outstanding Contributions to Computer Science Education Award."

Mark Wegman received his doctorate from Berkeley, and has worked for IBM since 1975. He has been elected a member of IBM's Academy of Science and Technology and a Fellow of the ACM.
Dr. Wegman has worked on mathematical algorithms, but has principally focused on how to make programmers more productive. The developments he invented or co-created with others include one of the first integrated programming environments; several of the defining papers on Static Single Assignment form and the compiler optimizations it enables; universal hash functions; a widely used data compression algorithm; and a linear algorithm for unification.
Currently Dr. Wegman is working on building programming environments for designing enterprise applications.

William Wulf received a BS and an MS from the University of Illinois. In 1968 he was awarded the first Ph.D. in Computer Science from the University of Virginia. He then joined Carnegie-Mellon University as Assistant Professor of Computer Science, becoming Associate Professor in 1973 and Professor in 1975. In 1981 he left Carnegie-Mellon to found and chair Tartan Laboratories, where he remained until 1988, when he became Assistant Director of the National Science Foundation. In 1990 he returned to the University of Virginia as AT&T Professor. He has directed over 25 Ph.D. theses at Carnegie-Mellon and Virginia.
Dr. Wulf is a member of the National Academy of Engineering, and a fellow of ACM, IEEE, AAAS, and the American Academy of Arts and Sciences. He is the author or co-author of three books and over 40 papers. Professor Wulf is on leave for the 1996-97 academic year to serve as President of the National Academy of Engineering.
Dr. Wulf's research interests revolve around the hardware/software interface, and thus span programming systems and computer architecture. Some of his earlier research activities include: the design of Bliss, a systems-implementation language adopted by DEC; the Bliss/11 compiler, an early and effective optimizing compiler; the architectural design of the DEC PDP-11, a highly successful minicomputer; the design and construction of C.mmp, a 16-processor multiprocessor; the design and construction of Hydra, one of the first operating systems to explore capability-based protection; the development of PQCC, a technology for the automatic construction of optimizing compilers; and the design of WM, a novel pipelined processor that, for comparable gate counts and area, achieves four to six times the performance of contemporary designs.
Professor Wulf's recent research has focused on the design of scalable high-performance memory systems, computer security, and hardware/software co-design.

The author, Peter Trott, is a Senior Technical Writer for nCUBE, manufacturer of the MediaCUBE family of scalable media servers.
