
Technological vision of the 21st century

Time magazine

In this article, Time magazine presents 83 visions for the 21st century. The visions are
classified into five categories: Science (17), Coexistence (13), Technology (17), Work &
Globalization (16), and Health & Environment (20). We recommend that the teacher
distribute the study of each of the visions among the students for presentation in class. The
full text is available to students at www.time.com/v21. Below, the teacher will find the main
question of each 21st century vision together with a general answer; the full text of the
corresponding article follows each question:

I. Science
1. Will we travel to the stars?
The answer is: not during the 21st century. The trip to the stars will probably take some 500
years, though a spacecraft able to travel around our own solar system is technically possible
much sooner.

2. Will we clone a dinosaur?
The answer is also NO. Science may advance enough to clone a mammoth frozen for several
thousand years, but not enough to clone a creature more than 65 million years old.

3. Will a killer asteroid hit the Earth?
Eventually YES, but there seems to be no cause for alarm: scientists are already scanning the
skies and preparing the Earth's defense.

4. Will the mind figure out how the brain works?
We have absolutely no idea, and how our brain operates will remain unknown for a long
time; its limits are unknown, or perhaps do not exist. We can only hope that while scientists
advance in their understanding of the brain, it does not grow tired, pack everything up and
leave us in absolute ignorance.

5. Will we keep evolving?
It seems that humans will keep looking much as we do today, assuming we also stay clear of
nuclear or biological disasters.

6. Will we travel in time?
Einstein proved that in theory we can move forward in time by traveling at near the speed of
light, but moving backward in time is rather more complicated. And while these facts
frustrate scientists, they also comfort them: would you really want other eras to be within
reach? Anyone who has seen the movie Back to the Future knows the answer.

7. Will we live on Mars?
Not for now. Scientists insist, however, that the water and oxygen needed to survive on the
Red Planet can be obtained from the Martian atmosphere.

8. Will we meet extraterrestrials?
Probably not in the form Steven Spielberg proposes, but there are chances of contact with
extraterrestrial life. An encounter could happen anywhere in the universe, and some are
convinced that one morning they will receive a telephone call from an extraterrestrial being.

9. Will someone build a perpetual motion machine?
There will always be someone who keeps trying. Despite the theoretical impossibility,
inventors will keep seeking to build a machine that moves forever without the influence of
an external force.

10. Can we save California?
Seismologists predict that within the next 30 years a deadly earthquake will strike the San
Francisco Bay Area. That is why scientists are now studying how to relieve the pressure on
the San Andreas Fault; one alternative seems to be pumping in water to provoke small
movements and avoid a gigantic one.

11. Will we have a final theory of everything?
Physicists already have formulas that describe the forces in our world. Scientists, however,
envision the day when a theory will be so clear that it lets all of us understand the physical
properties of the universe; then we will be able to explain how everything began.

12. Will we discover another universe?
Scientists keep exploring and assert that there are realities separate from our world, but don't
rush to buy a ticket for the next space trip: they are just as quick to explain that all those
realities are virtually impossible for Earthlings to comprehend.

13. Will we discover how life began?
Don't expect an affirmative answer for the time being. Scientists hope to discover a planet
resembling the early Earth and to observe it in order to explain how it all happened.

14. Will we control the weather?
In a certain sense we are already doing so: millions of tons of emissions every year are
causing the world to warm by between 2 and 7 degrees Fahrenheit by the end of the 21st
century. The consequences will be catastrophic, as hurricanes and rains will strike with
greater severity.

15. Will anyone run a three-minute mile?
If genetic engineers manage to transfer the cheetah's muscular traits to humans - and it
seems they will - we will see athletes running the mile in under three minutes.

16. Will we discover how the universe will end?
The universe will outlast humanity, and although scientists will venture some theories, they
themselves will remain undecided, so they will leave the matter there.

17. Will anything be left to discover?
The answer is YES: many surprises await humanity as it sees turned into reality things that,
as in the past, it had imagined only as science fiction.

Will We Travel to the Stars?
Interplanetary travel is child's play compared with interstellar travel. Nonetheless, we could
make the journey - just not anytime soon
By FREEMAN DYSON

If you ask whether we will travel to the stars, the answer is probably yes, but it will take a
long time. Maybe 500 years. If you ask whether any human being will travel to the stars
within the 21st century, the answer is certainly no. The difference between traveling to the
nearest star and traveling around our own solar system is about the same as the difference
between swimming across the Atlantic and swimming across the Potomac. To get across
the Atlantic you need to have a boat or an aircraft. To get to the nearest star you need to
have a spacecraft that we have no hope of building within 100 years.

To scoot around the solar system and return within a few years, you need a spacecraft that
will cruise at 100 miles a second. At that speed you will get to Mars in 10 days, to Pluto in
16 months. We can imagine a spacecraft carrying a big area of thin film to collect solar
energy, with an ion-jet engine to produce thrust powerful enough to boost a spacecraft to a
speed of 100 miles a second. It is also possible to build a nuclear-powered jet to do the
same job, if the political objections to nuclear spacecraft can be overcome. The quantity of
energy available from sunlight or from a nuclear reactor is large enough to take us on trips
around our solar system, if we decide to spend the money to do it. We may or may not
decide to build a 100-mile-a-second spacecraft within 100 years, but we know that it is
technically possible. The cost of developing a 100-mile-a-second spacecraft would be large
but not absurd.
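
As a quick sanity check on those figures, here is a minimal sketch; the 90-million-mile and
4.2-billion-mile distances are representative values assumed for illustration, since the real
distances vary widely with orbital position:

```python
# Travel times at a constant cruise of 100 miles per second,
# ignoring acceleration phases and curved trajectories.
CRUISE_MILES_PER_SEC = 100
SECONDS_PER_DAY = 86_400

def days_to(distance_miles: float) -> float:
    """Days of straight-line travel at cruise speed."""
    return distance_miles / CRUISE_MILES_PER_SEC / SECONDS_PER_DAY

print(f"Mars, ~90 million miles: {days_to(90e6):.0f} days")           # ~10 days
print(f"Pluto, ~4.2 billion miles: {days_to(4.2e9) / 30:.0f} months") # ~16 months
```

Both results land close to Dyson's 10 days and 16 months.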

On the other hand, the nearest star is about 10,000 times as far away as Pluto. A trip to the
stars within a human lifetime requires a spacecraft that cruises at more than 10,000 miles a
second and accelerates to this speed within 10 years. The engine would have to deliver
about a megawatt of power for every pound of weight of the ship. There is no way an
engine that small and that powerful could keep itself cool. Even if the fuel is something
exotic like antimatter, carrying far more energy than sunlight or uranium, the problem of
cooling the engine remains insuperable. Travel to the stars within this century, using any
kind of engine we know how to build, is not going to happen.
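
That megawatt-per-pound figure can be checked at the order-of-magnitude level from kinetic
energy alone; the sketch below is a deliberate underestimate, since it ignores propellant mass
and waste heat, both of which push the real requirement upward:

```python
# Minimum power per pound to reach 10,000 miles/s in 10 years,
# counting only the ship's own kinetic energy (the classical formula
# is adequate at ~5% of light speed).
v = 10_000 * 1609.34              # 10,000 miles/s in m/s
ke_per_kg = 0.5 * v**2            # joules per kilogram of ship
ke_per_lb = ke_per_kg * 0.4536    # joules per pound of ship
ten_years = 10 * 365.25 * 86_400  # acceleration time, in seconds
print(f"{ke_per_lb / ten_years / 1e6:.2f} MW per pound")  # ~0.19 MW
```

Kinetic energy alone demands roughly a fifth of a megawatt per pound; realistic engine
losses plausibly bring the total up to the megawatt Dyson cites.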

How about unmanned space probes going to the stars? Unmanned probes can be much
smaller and lighter than manned spaceships. That means the total power required for a
probe to reach the stars is much less. But the unmanned probe still needs an engine
delivering one megawatt per pound. The problem of cooling the engine remains the same,
whether the ship is manned or unmanned, and the conclusion is the same. Unmanned
probes are not going to reach the stars within this century.

Robert Forward, an engineer who used to work for Hughes Aircraft and now works
independently, has designed a space probe that might reach the stars, not within this century
but a little later. It avoids the problem of cooling the engine by not having an engine. It is a
sailing ship, not a steamship. He calls it Starwisp. It is a fishnet made of very fine wires and
weighing less than an ounce. The net acts as a sail and is driven by the pressure of radio
waves generated by a huge radio transmitter. The transmitter stays put, with its radio beam
pointing in the direction we wish to explore, and the sail travels along the beam, picking up
momentum from the radio waves. This scheme works beautifully in theory, but there are
some practical difficulties to be overcome. The transmitter has to be gigantic and must
focus the energy of the beam on the fishnet as it accelerates. The fishnet must absorb only a
tiny fraction of the radio waves to avoid being vaporized. The probe must carry instruments
to collect information and transmit signals back to earth, and those instruments must weigh
less than an ounce. There are enough problems here to keep engineers busy for several
centuries, but one day a ship like this will fly.

Ultimately, one can imagine scaling up the Starwisp by a factor of 1 million, so that the
fishnet is big enough to carry human passengers to the stars. The radio transmitter to drive
it would use far more power than all the power stations on earth now generate. Some day
we may have this much power to spare for voyages of exploration, but not soon. Perhaps
around the middle of the third millennium.

Will We Clone a Dinosaur?
If you use DNA taken from its myriad winged descendants, the idea is not as farfetched as it
first appears
By MATT RIDLEY

The short answer is no. The slightly longer answer is definitely not. The Jurassic Park idea -
amber, insects and bits of frog DNA - would not work in a million years, and it was by far
the most ingenious suggestion yet made for how to find dinosaur genes. Cloning a mammoth -
flash-frozen for several thousand years - might just prove feasible one day. But dinosaurs,
65 million years old? No way.

It is only when you ask the question the third time that you begin to see a glimpse of an
affirmative answer. Start with three premises. First, dinosaurs did not die out; indeed there
are roughly twice as many species of their descendants still here on Earth as there are
mammals, but we call them birds. Second, DNA is turning out to be a great deal more
"conserved" than anybody ever imagined. So-called Hox genes that lay down the body plan
in an embryo are so similar in people and fruit flies that they can be used interchangeably,
yet the last common ancestor of people and fruit flies lived about 600 million years ago.

Third, and most exciting, geneticists are finding many "pseudogenes" in human and animal
DNA - copies of old, discarded genes. It's a bit like finding the manual for a typewriter bound
into the back of the manual for your latest word-processing software. There may be a lot of
interesting obsolete instructions hidden in our genes.

Put these three premises together and the implication is clear: the dino genes are still out
there. So throw your mind forward a few decades, and try out the following screenplay. A
bunch of bioinformatics nerds in Silicon Valley, looking for an eye-catching project to
showcase the latest IPO, decide to try to recreate the genome of a dinosaur. They bring
together a few complete bird genomes - complete DNA texts from the cells of different birds -
and start mapping the shared features. The result is a sort of prototype genome for a basic
bird.
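
As a toy illustration of that prototype-genome step, here is a minimal sketch that calls a
consensus base at each position of some already-aligned sequences. The fragments are
invented for illustration, and real comparative genomics is enormously harder; this only
shows the shape of the "shared features" idea:

```python
# Call the most common base at each aligned position ("consensus").
from collections import Counter

aligned = ["ACGTTGCA", "ACGTAGCA", "ACCTTGCA"]  # hypothetical fragments

consensus = "".join(
    Counter(column).most_common(1)[0][0]  # most frequent base per column
    for column in zip(*aligned)
)
print(consensus)  # ACGTTGCA
```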

Then they start fiddling with it: turning on old pseudogenes; knocking out the genes for
feathers and putting back in the genes for scaly skin; tweaking the genes for the skull so
that teeth appear instead of a beak; shrinking the wings, keel and wishbone (ostrich genes
would be helpful here); massively increasing size and sturdiness of the body; and so on.
Pretty soon they have the recipe for a big, featherless, wingless, toothy-jawed monster that
looks a little like a cross between a dodo and a tiger.

They might not have to fix that many genes - just a few hundred mainly developmental
ones. The genes for the immune system, for memory mechanisms and the like would all be
standard for a vertebrate. To fine-tune the creature, they could go fishing in other bird
genomes, or perhaps import a few ideas from lizards and turtles.

Remember, at this stage nothing has left the computer; all they have is a DNA recipe. But by
the end of this century, if not sooner, biotechnology will have reached the point where it
can take just about any DNA recipe and read off a passable 3-D interpretation of the animal it
will create. After a massive amount of digital trial and error, the nerds reckon they have a
recipe for a creature that would closely resemble a small, running dinosaur, such as
Struthiomimus ("the ostrich mimic").

The rest is as easy as Dolly the sheep: call up a company that can synthesize the genome,
stick it into an enucleated ostrich ovum, implant the same in an ostrich and sit back to
watch the fun. Of course, there will be teething troubles - literally. Or somebody might
have forgotten to cut out the song bird's voice genes, so the first struth chirps like a
sparrow. Or maybe the brain development did not quite hang together and the creature is
born incapable of normal movement. As this suggests, the first such experiment will almost
certainly produce a bit of a Frankenstein's monster, and the whole idea may well therefore
be cruel and unethical, in which case, let us hope it never happens. But that is not the same
as saying it will be impossible.

And it just might prove much easier than I am implying. Who knows? Rusty old
pseudogenes left over from the great sauropods may still be intact, hidden somewhere in
the genes of a hummingbird.

Will a Killer Asteroid Hit the Earth?
Eventually, yes. But we don't have to take it lying down. Already astronomers are scanning
the skies and preparing to defend the planet
By LEON JAROFF

When it comes to asteroids' wreaking disaster on Earth, the real question is not if, but when.
Two hundred or so large craters and a geological record stretching over billions of years
provide ample evidence that, time and again, explosive impacts by asteroids or comets have
devastated large parts of the planet, wiped out species and threatened the very existence of
terrestrial life. Astronomers are all too aware that more large hulks are out there, hurtling
through space, some of them ultimately destined to collide with Earth.

As scary as this seems, disaster is not inevitable. For after nearly 4 billion years of life on
Earth, a species has evolved that can prevent the next catastrophic encounter if it has the
will to do so. That species is us. Why worry? After all, the most notorious impact of them
all, the one that caused the extinction of the dinosaurs, occurred 65 million years ago.
Really ancient history.

Yet if you want to get contemporary - on a geological scale, of course - it was only
49,000 years ago that an iron asteroid blasted out Arizona's 3/4-mile-wide Meteor Crater,
almost certainly killing any living creatures for hundreds of miles around. And as recently
as 1908, a small, rocky asteroid or chunk of a comet exploded five miles above the
Tunguska region of Siberia, felling trees, starting fires and killing wildlife over an area of
more than 1,000 sq. mi. Had the blast, now estimated at tens of megatons, occurred over
New York City or London, hundreds of thousands would have died.

And what about near misses? As recently as 1996, an asteroid about a third of a mile wide
passed within 280,000 miles of Earth - a hairbreadth by astronomical standards. It was the
largest object ever observed to pass that close, and had it hit, would have caused an
explosion in the 5,000-to-12,000-megaton range.

What was particularly unnerving about this flyby is that the asteroid was discovered only
four days before it hurtled past Earth. All the more reason for a detection system that will
discover asteroids early, plot their paths and predict, many years in advance, whether they
will eventually threaten Earth.

The good news is that just such a detection system, after a slow start, is rapidly gearing up.
Four small groups of dedicated astronomers in Arizona and California, totaling fewer than
the number of employees at an average fast-food restaurant and using mostly off-the-shelf
equipment for their telescopes, have been mapping the heavens and steadily adding to the
number of known near-Earth objects.

NEOs are asteroids or occasional comets that periodically intersect or come close to Earth's
orbit. If a NEO cuts through our orbital path at the same time that Earth happens by, it's
curtains for a metropolitan area, a region or even global civilization, depending on the size
of the interloper.

In 1997 the asteroid-hunting pioneers were joined by a precocious upstart, a joint Air
Force-M.I.T. Lincoln Laboratory group supported by generous Pentagon funding. Using
an Air Force satellite-spotting telescope in New Mexico and a camera equipped with an
advanced M.I.T.-designed charge-coupled device, the totally automated, computerized
operation quickly began discovering more asteroids and comets, large and small, than all
the other groups combined. Getting further into the spirit of the game, the Air Force has
deployed a second asteroid-hunting scope, is lending another to astronomers and musing,
unofficially, about launching a fleet of microsatellites for even better asteroid detection.

What to do if an Earth-bound comet or asteroid is discovered? Early detection, preferably
many years in advance, would enable us to send out exploratory spacecraft to determine the
nature of the interloper, much like the NEAR spacecraft's current investigation of the asteroid
Eros. Scientists at the Los Alamos and Lawrence Livermore National Laboratories are
already dreaming up a variety of ingenious defenses against an incoming asteroid.
Depending on its mass and composition, they would use tailor-made nuclear explosions to
pulverize small asteroids or deflect larger ones. Given enough time, and under the proper
circumstances, less drastic measures would be needed. Some schemes call for conventional
explosives alone, or anchoring a rocket motor or a solar sail on an asteroid to alter its orbit
enough to allow it to safely bypass Earth.
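
A toy calculation (my own illustration, not from the article) shows why early detection is the
whole game: under a simple linear-drift assumption, even a centimeter-per-second nudge
grows into a miss distance comparable to Earth's radius if applied years ahead of impact:

```python
# Miss distance from a small velocity change applied t years before
# impact, assuming the offset simply accumulates linearly (real
# deflections depend on the full orbital dynamics).
EARTH_RADIUS_KM = 6371

def miss_distance_km(delta_v_cm_per_s: float, years: float) -> float:
    seconds = years * 365.25 * 86_400
    return delta_v_cm_per_s * 1e-5 * seconds  # cm/s -> km/s, times time

for years in (1, 10, 30):
    km = miss_distance_km(1.0, years)
    print(f"1 cm/s, {years:>2} years early: {km:>8,.0f} km "
          f"({km / EARTH_RADIUS_KM:.2f} Earth radii)")
```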

At the beginning of 2000, only about half the estimated 500-to-1,000 near-Earth asteroids -
six-tenths of a mile across or larger, big enough to cause a global catastrophe - had been
detected. One of the unknowns could even now be on a collision course with Earth. The
sudden appearance of long-period comets, usually larger and with better than twice the
impact velocity of asteroids, presents an even greater menace. Such objects (comet Hale-
Bopp was one) are usually not spotted until they begin to flare somewhere out near the orbit
of Jupiter or closer, only a few to 18 months before they pass Earth's orbit. That doesn't
leave much time for defensive measures. Then, too, only a tiny fraction of the more
numerous and smaller NEOs, some of them potential city killers and tsunami producers, are
yet known.

Someday in the foreseeable future, the first thing that strollers out for an evening walk
might see would be a sudden glow on the horizon. Then, in short order, they would feel the
ground shake, hear a thunderous roar and be incinerated by an onrushing blast of
superheated air. All the more reason to identify and track every single near-Earth object and
prevent a nasty surprise.

Will the Mind Figure Out How the Brain Works?
Understanding how neurons operate is one thing; understanding how they make us the
conscious beings we are is another matter
By STEVEN PINKER

Imagine this scene from the future. You are staring at a screen flickering with snow.
Scientists have hidden one of two patterns in the dots, and eventually you spot one. But you
don't have to tell the scientists what you are seeing; they already know. They are looking at
the electrical signals from one of the billions of cells in your brain. When the cell fires, you
see one pattern; when it stops, you see another - your awareness can be read from a single
neuron. Now, in an even more unsettling trick, they send an electrical current to the neurons
in that part of your brain and, with a push of a button, make you see one pattern or the
other.

These feats of pinpoint mind study are not fantasies. They have already been performed by
the Stanford neuroscientist William Newsome. Not with people, of course, but with
monkeys. Yet few scientists doubt the trick would work with us.

This is just one example of how much we have learned about the workings of the brain in
the past 10 years - a period of intense research proclaimed by the U.S. Congress and the
President as the Decade of the Brain. Every facet of mind, from mental images to the moral
sense, from mundane memories to acts of genius, has now been tied to tracts of neural real
estate. Using fMRI, a new scanning technique that measures blood flow, scientists can tell
whether the owner of the brain is imagining a face or a place. They can knock out a gene
and prevent a mouse from learning, or insert extra copies and make it learn better. They can
see the shrunken wrinkles that let a murderer kill without conscience, and the overgrown
folds that let an Einstein deduce the secrets of the universe. How far will this revolution go?
Will we ever understand the brain as well as we understand the heart, say, or the kidney?
Will mad scientists or dictators have the means to control our thoughts? Will neurologists
scan our brains down to the last synapse and duplicate the wiring in a silicon chip, giving
our minds eternal life?

No one can say. The human brain is the most complex object in the known universe, with
billions of chattering neurons connected by trillions of synapses. No scientific problem
compares with it. (The Human Genome Project, which is trying to read a long molecular
string composed of four letters, is a snap by comparison.) Cognitive neuroscience is
attracting so many brilliant minds that it would be foolish to predict that we will never
understand how the brain gives rise to the mind. But the problem is so hard that it would be
just as foolish to predict that we will.

One challenge is that we are still clueless about how the brain represents the content of our
thoughts and feelings. Yes, we may know where jealousy happens - or visual images or
spoken words - but "where" is not the same as "how." We don't know how the brain holds
the logical connections among ideas that spell the difference between "Burr slew Hamilton"
and "Hamilton slew Burr," between the image of a woman winking to realign her contact
lens and that of a woman winking to flirt. These distinctions don't appear as blobs in a brain
scan. They arise from the microcircuitry of the living human brain, and most people don't
want to donate their brains to science until they are dead. (As Woody Allen said, "It's my
second-favorite organ.") The content of our thoughts may be the province of psychologists
studying the brain's software, rather than neurobiologists studying its hardware, for a long
time.

Another challenge is understanding how the mere darting of ions and oozing of
neurochemicals can create the vivid first-person present-tense subjective experience of
colors, sounds, itches and epiphanies that make up the self - the soul, if you will. There's no
doubt that physiological brain activity is the cause of experience. Thoughts and feelings can
be started, stopped or altered by electricity and chemicals, and they throw off signals that
can be read with electrodes and other assays. I also have little doubt that we will crack the
mystery of consciousness, learning which brain events correlate with experience. Just
compare brain activity when a person is awake or anesthetized, or when a novice is
thinking about his golf swing and when a pro does it automatically.

But why some kinds of brain activity feel like something to you - or, more accurately, are
you - is another question, and scientists disagree about how to answer it. Some say that
subjective experience is unobservable and not a proper topic for science. Some say that
once we can distinguish conscious brain processes from unconscious ones and show how
they cause behavior, there is nothing left to explain; people who are looking for some extra
ingredient are just confused. Some concede that sentience is still a mystery, but expect that
an unborn genius will someday explain it to us all. Still others suspect that the brain did
not evolve to grasp the answer, any more than it can visualize what came before the Big
Bang or the shape of a curved 4-D universe. If you think the answer is obvious, you are
prepared for the ultimate triumph of the brain science of tomorrow. The synapse scanner
has been perfected, and you can download a backup copy of your mind into a brainlike
computer that will outlast your body. Unfortunately the scanner destroys the tissue it scans,
so you have to choose between your old brain and a new one. The new brain will react and
behave exactly like youbut would it be you? If you say yes, are you confident enough to
step into the scanner?

Will We Keep Evolving?
As we spend more time online, our brains will get bigger and our eyes will get weaker,
right? Wrong. That's not how evolution works
By IAN TATTERSALL

We take it for granted that human beings are the pinnacle of the living world, that Homo
sapiens is nature's most advanced and magnificently burnished product. And since
numerous lines of evidence testify that our species is the result of a long evolutionary
history, we tend to assume that in the future there will be more perfecting change along
essentially the same lines.

Two million years ago, for example, our predecessors had brains barely half as large as
ours today. So it would seem to follow that in another couple of million years, our brains
will be twice again as large, housed in the huge globular heads familiar from innumerable
sci-fi images. Conversely, our immediate forebears were robustly boned and, we think,
more heavily muscled than we are today. What could be more natural than to conclude that,
supported by increasingly complex labor-saving technologies, our bodies will in future be
frailer and shorn of such frivolities as the little toe?

Back in the 1930s, my predecessor at the American Museum of Natural History in New
York City, Harry Shapiro, while sensibly warning of the "dangers of prophecy," wondered
what humans might become a half-million years hence. His predictions included such
features as a rounder skull, a smoothing of the area above the brows, a reduction in the size
and number of teeth, and a shrinking of the face in general. Shapiro also predicted that we
would get taller and even balder and that body hair would continue to diminish.

When Shapiro revisited the subject three decades later, his vision of the future was
essentially unchanged except that he had become increasingly worried about the potential
effects of technology. Many today share similar beliefs, assuming, for example, that lives
spent in front of computers will rob humans of fully functional arms and legs or proper
eyesight. Or that genetic engineering will lead to such calamities as a world populated by
Bill Gates clones.

Well, not to worry. As seductive as such extrapolations may be, they overlook what we
know about how evolution works. In particular, they buy into the idea that evolution
consists of a sort of generation-by-generation fine-tuning in each population as time passes.
Under the benign guidance of natural selection (the name we give to any and all factors that
promote or inhibit successful reproduction by members of those populations), this process
of gradual change inexorably leads to improvement in the species and ultimately to new
species as those improvements accumulate.

It's the same with populations and species. Species are out there competing with others in a
real world of limited resources. They cannot survive as disembodied attributes. What's
more, the ecologies of which they form a part have an alarming tendency to change
abruptly. If your habitat is covered by an ice sheet, it's pretty irrelevant how well you are
adapted to the meadows and forests now buried beneath the ice.

Finally, we have to bear in mind how distinctive new species originate. We don't
understand everything about how this happens, but we do know that in large interbreeding
populations, it is extremely difficult, if not impossible, for new genetic variants to become
established. If any meaningful innovations are to become fixed in a population or if a
population is to become established as a new species, it is essential that the population be
small. Large populations simply have too much genetic inertia.

You can guess where this argument is heading. During the Ice Ages, when our own species
emerged, human populations were small and scattered and were continuously disrupted by
climatic fluctuations. Conditions were ideal for genetic innovation. Today, however, the
human population is 6 billion and mushrooming and increasingly densely distributed. At
the same time, individual humans are incomparably more mobile than ever before. Efficient
communication means that, for example, American males can advertise for wives in
journals distributed halfway around the globe.

The upshot is that after a period of diversification, Homo sapiens is in a mode of
reintegration, as witness the fact that the boundaries between the former geographical
variants of our species are becoming increasingly blurred. If present trends continue, those
boundaries will become blurrier still. Amid all this, the conditions for incorporating
meaningful new innovations into human populations have all but disappeared - and with
them the prospects for significant evolutionary change.

Of course, such predictions are based on the assumption that current conditions will prevail
into the foreseeable future, and it is quite possible that this assumption is wrong. Anything
that would serve to fragment the current huge human population might help re-establish the
conditions necessary for future human change. Unfortunately, we would undoubtedly
perceive such an event as a terrible disaster, since it would necessarily entail the
disappearance of billions of human beings.

For example, an asteroid impact of the kind that finished off the dinosaurs might do the
trick, as might the appearance of a super-virulent and highly contagious virus. More
probable, perhaps, is a man-made catastrophe - a general environmental collapse provoked
by overexploitation of the world's resources, say, or nuclear conflicts. The awful
possibilities are limited only by the imagination.

This being the case, all we can do is hope that things stay pretty much the way they are -
which is both good and bad. The bad news is that we will forever remain the willful,
capricious, appalling creatures that we have been ever since our species emerged during the
late Ice Ages. The good news is that we will remain the creative, interesting, caring,
wonderful creatures that we have been for exactly as long. Over the long haul, human
nature will almost certainly stay as idiosyncratic as it is today. It's unrealistic to hope that
evolution will ride in on its white horse in the next 50 - or even 500,000 - years to improve
the breed and save us from our follies. We shall have to learn to live with ourselves the way
we are.

Will We Travel Back (Or Forward) in Time?
Einstein proved we can travel forward by moving near light speed. Backward requires a
wormhole, cosmic string and a lot of luck
By J. RICHARD GOTT III

We can travel through the three dimensions of space pretty much at will - moving forward
or back, left or right, up or down - without even thinking about it. When it comes to the
fourth dimension, though, we appear to be stuck. Time flows on in one direction only, and
we flow with it like corks bobbing helplessly in a river. So the idea of traveling through
time, as opposed to with time, is immensely seductive. Who wouldn't want to know what
technology will look like in the year 3000, or witness the assassination of Julius Caesar?

Not only does such a thing seem extremely difficult, but it could also be a little risky. What
if you prevented Caesar's assassination and changed history? What if you accidentally
killed someone who happened to be your own ancestor? Then you wouldn't have been born,
and couldn't have killed your ancestor, so you could be born after all to go back and ... well,
you get the idea.

We physicists are mindful of all these difficulties, of course. But we can't resist exploring
the notion of time travel - not necessarily for practical reasons, but to understand the
limits of our theories.

Do the laws of physics permit time travel, even in principle? They may in the subatomic
world. A positron (the antiparticle associated with the electron) can be considered to be an
electron going backward in time. Thus, if we create an electron-positron pair and the
positron later annihilates in a collision with another, different electron, we could view this
as a single electron executing a zigzag, N-shaped path through time: forward in time as an
electron, then backward in time as a positron, then forward in time again as an electron.

The probability of a macroscopic object like a human doing this trick is infinitesimal.
But thanks to Albert Einstein we know that time travel of a different sort does happen in the
macroscopic world. As he showed back in 1905 with his special theory of relativity, time
slows down for objects moving close to the speed of light, at least from the viewpoint of a
stationary observer. You want to visit the earth 1,000 years from now? Just travel to a star
500 light-years away and return, going both ways at 99.995% the speed of light. When you
return, the earth will be 1,000 years older, but you'll have aged only 10 years. I already
know a time traveler. My friend, astronaut Story Musgrave, who helped repair the Hubble
Space Telescope, spent 53.4 days in orbit. He is thus more than a millisecond younger than
he would have been if he had stayed home. The effect is small, because he traveled very
slowly relative to the speed of light, but it's real.
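
The arithmetic behind that star trip is the standard time-dilation factor of special relativity:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
v = 0.99995\,c \;\Longrightarrow\; \gamma \approx 100 .
```

Each 500-light-year leg at nearly the speed of light takes about 500 years of Earth time, so
roughly 1,000 years pass on Earth while the traveler's own clock records only 1,000/γ ≈ 10
years.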

With more money, we could do better in the next century - but only a little. If we sent an
astronaut to the planet Mercury and she lived there for 30 years before returning, she would
be about 22 seconds younger than if she had stayed on Earth. Clocks on Mercury tick more
slowly than those on Earth because Mercury circles the sun at a faster speed (and also
because Mercury is deeper in the sun's gravitational field; gravity affects clocks much as
velocity does). Astronauts traveling away from Earth to a distance of 0.1 light years and
returning at 1% the speed of light would arrive back 8.8 hours younger than if they hadn't
gone.
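
The 8.8-hour figure can be reproduced with the same time-dilation formula, assuming for
simplicity a constant 1% of light speed over the whole 0.2-light-year round trip:

```python
# Aging deficit for a 0.1 light-year trip out and back at 0.01c.
import math

beta = 0.01              # v/c
trip_years = 0.2 / beta  # 0.2 light-years at 0.01c = 20 years of travel
gamma = 1.0 / math.sqrt(1.0 - beta**2)
younger_years = trip_years * (1.0 - 1.0 / gamma)
print(f"{younger_years * 365.25 * 24:.1f} hours younger")  # ~8.8
```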

The downside of traveling into the future this way is that you might be stuck there. Is there
any way of going backward in time? Once again, Einstein may have provided the answer.
His 1915 theory of general relativity showed that space and time are curved, and that the
curvature can be large in the neighborhood of very massive objects. If an object is dense
enough, the curvature can become nearly infinite, perhaps opening a tunnel that connects
distant regions of space-time as though they were next door. Physicists call this tunnel a
wormhole, in an analogy to the shortcut a worm eats from one side of a curved apple to the
other.

In 1988, Kip Thorne, a physicist at Caltech, and several colleagues suggested that you
could use such a wormhole to travel into the past. Here's how you do it: move one mouth of
the wormhole through space at nearly the speed of light while leaving the other one fixed.
Then jump in through the moving end. Like a moving astronaut, this end ages less, so it
connects back to an earlier time on the fixed end. When you pop out through the fixed end
an instant later, you'll find that you've emerged in your own past.

The problem with wormholes is that the openings are microscopic and tend to snap shut a
fraction of a second after they're created. The only way to keep them open, as far as we
know, is with matter that has negative density. In layman's terms, that's stuff that weighs
less than nothing. This may sound impossible, but the Dutch physicist Hendrik Casimir
theorized in 1948 that holding two plates of electrically conducting material very close
together in a vacuum actually does create a region of negative density that exerts an inward
pressure on the plates. The force predicted by Casimir has been verified in the laboratory.
Using this idea, Thorne and his colleagues proposed constructing a wormhole tunnel 600
million miles in circumference, with Casimir plates separated by only 400 proton diameters
at the midpoint. Time travelers would have to somehow open doors in these plates to pass
through the wormhole. The mass required for construction? Two hundred million times the
mass of the sun. These are projects only a supercivilization could attempt - not something
for 21st century engineers.
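
The Casimir effect has a standard quantitative form not quoted in the article: for ideal
parallel conducting plates a distance d apart in vacuum, the inward pressure is

```latex
P = \frac{\pi^{2} \hbar c}{240\, d^{4}} .
```

The steep 1/d^4 dependence is why Thorne's design calls for plates only about 400 proton
diameters apart: the negative-energy effect is negligible at any larger separation.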

In 1991 I found another possible mechanism for time travel using cosmic strings, thin
strands of energy millions of light-years long, predicted by some theories of particle
physics (but not yet observed in the universe). You could try to construct a cosmic-string
time machine by finding a large loop of cosmic string and somehow manipulating it so it
would contract rapidly under its own tension, like a rubber band. The extraordinary energy
density of the string curves space-time sharply, and by flying a spaceship around the two
sides of the loop as they pass each other at nearly the speed of light, you'd travel into the
past.

To go back in time by one year, unfortunately, you'd need a loop containing about half the
mass-energy of an entire galaxy. Worse yet, the contracting cosmic-string loop would
probably trigger the formation of a rotating black hole, trapping any time-travel regions
inside. You would almost certainly be torn apart by near infinite space curvature before you
could travel anywhere. Even if you could get past these difficulties, the physics of both
types of time machine dictate that you can't go back in time to an epoch before the time
machine was created. So you couldn't meet and perhaps kill your own ancestor. But if such
a machine were built today, your descendants might come and kill you, changing their own
past.

Some argue conservatively that time travelers don't change the past; they were always part
of it. On the other hand, paradoxical though this sounds, a version of the many-worlds
theory of quantum mechanics (see "Will We Discover Another Universe?" in this issue)
devised by Oxford physicist David Deutsch might allow such history-changing visits. In
this picture, there are many interlacing world histories, so that if you went back in time and
killed your grandmother when she was a young girl, this would simply cause space-time to
branch off into a new parallel universe that doesn't interfere with the familiar one.

Stephen Hawking has addressed the problem in a different way, proposing what he calls a
chronology-protection conjecture. Somehow, he argues, the laws of physics must always
conspire to prevent travel into the past. He believes that quantum effects, coupled with
other constraints, will always step in to prevent time machines. The jury is still out on this
question. We may need to develop a theory of quantum gravity to learn whether Hawking is
right.

So, will we time-travel in the next century? Travel to the future - yes, but only in short
hops, I suspect. To the past - very likely not. Such travel is expensive, dangerous and
subject to quantum effects that may or may not spoil your chances of coming back alive.
Those of us working in this field aren't rushing to the patent office with time-machine
blueprints. But we are interested in knowing whether time machines are possible, even in
principle, because answering that question will tell us where the boundaries of physics lie
and provide clues to how the universe works.

Will We Live on Mars?
If we flew on the cheap and lived on the Red Planet's resources, we could be a two-world
species within a generation
By JEFFREY KLUGER

There's probably no special reason to start looking forward to October 2007 - unless you're
one of the four people who could be traveling to Mars that month. October 2007 is a good
time to begin your journey, since right about then, Earth and Mars will be drifting into an
alignment that will allow you to make the trip in less than eight months. It's impossible to
say what part of Mars you'll be touching down on, but odds are you'll land somewhere near
the broad equatorial belt. While temperatures elsewhere on Mars fall to a murderous -140°C,
they can climb to a shirt-sleeve 20°C in the planet's tropics - not that Mars' thin, toxic
air would ever allow you to strip down to your shirt sleeves.
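
The timing of such launch windows follows from Earth and Mars' synodic period, a standard
orbital calculation the article takes for granted:

```latex
S = \left(\frac{1}{P_{\text{Earth}}} - \frac{1}{P_{\text{Mars}}}\right)^{-1}
  = \left(\frac{1}{365.25\ \text{d}} - \frac{1}{687\ \text{d}}\right)^{-1}
  \approx 780\ \text{days} \approx 26\ \text{months} ,
```

which is why favorable alignments like the October 2007 one recur roughly every 26 months,
the rhythm the mission plans below are built around.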

Inhospitable as such a Red Planet redoubt would be, two years after you arrive, another
crew will show up, and another two years after that, and another after that. By 2017 - about
the time that children born this year approach voting age - mankind's first tiny settlement on
another world may be taking hold. Even for a supposedly spacefaring people, dreaming of
Mars is dreaming big.

Back when Apollo astronauts were routinely bunny-hopping on the nearby moon, Mars
seemed like an obvious next goal. But during the past 25 years, the best we've been able to
muster has been a few unmanned Martian probes. After the two most recent ones famously
flamed out, and after last week's scathing report blaming NASA mismanagement for the
failures, even that seems beyond us.

And yet Mars is back on the cosmic itinerary. Scientists at NASA and in the private sector
have been quietly scribbling out flight plans and sketching out vehicles that - so they say -
could make manned landings on the Red Planet not only possible but also economically
practical. The hardware, they believe, is largely in hand. The funds, they argue, could be
within reach. "Within 25 years," says NASA's Bret Drake, director of mission studies at the
Johnson Space Center in Houston, "I project that we could have human exploration of Mars
being conducted routinely."

The key to reaching Mars is doing it smart and doing it cheap. In 1989, during the 20th
anniversary of the Apollo 11 lunar landing, President Bush challenged NASA to figure out
how to put human beings on Mars. The space agency came back with an elephantine 30-
year plan that involved construction bays and fuel depots in low-Earth orbit and carried a
jaw-dropping price tag of $450 billion.

What drove up the cost of the project was the size of the spacecraft needed to reach Mars,
and what drove up the size of the spacecraft was all the fuel and other consumables it
would need to carry with it on so long a trip. But while Mars is indeed remote - at its farthest
it's 1,000 times as distant as the moon - it has a lot of things the moon doesn't, most notably
an atmosphere. And that makes all the difference.

For the past decade - ever since NASA's 1989 proposal laid its half-trillion-dollar egg - the
space community has been intrigued by a mission scenario known as the Mars Direct plan.
Developed by engineers at Martin Marietta Astronautics, a NASA contractor, Mars Direct
calls not merely for visiting the Red Planet but also for living off the alien land.

As early as 2005, when Earth and Mars are in their once-every-26-months alignment, the
plan envisions launching a four-person spacecraft to Mars - but launching it with its tanks
empty of fuel and its cabin empty of crew. Landing on the surface, the craft would begin
pumping Martian atmosphere - which is 95% carbon dioxide - into a reaction chamber, where
it would be exposed to hydrogen and broken down into methane, water and oxygen.
Methane and oxygen make a first-rate rocket fuel; water and oxygen are necessary human
fuels. All these consumables could be pumped into tanks inside the ship and stored there.
Two years later, when Mars and Earth are again in conjunction, another spacecraft - this one
carrying a crew - would be sent to join the robot ship on the surface. The astronauts could
work on Mars for 18 months, living principally in their arrival craft, and then, at the end of
their stay, abandon that ship, climb into the robot craft and blast off for home. "Fly several
of these missions," says Robert Zubrin, author of the book The Case for Mars and one of
the engineers who developed the plan, "and you leave the surface scattered with a series of
warming huts that serve as the beginnings of a base."
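
In textbook terms, the propellant chemistry described here is the Sabatier reaction followed
by electrolysis of the resulting water; the article describes the process without naming it:

```latex
\mathrm{CO_{2} + 4\,H_{2} \longrightarrow CH_{4} + 2\,H_{2}O}
\quad \text{(Sabatier, exothermic)}
\qquad
\mathrm{2\,H_{2}O \longrightarrow 2\,H_{2} + O_{2}}
\quad \text{(electrolysis)}
```

The methane and oxygen become rocket propellant, part of the hydrogen can be recycled,
and the water and oxygen double as life support.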

What makes the Mars Direct plan remarkable is how unremarkable the science behind it is.
The spacecraft in which the astronauts will live are descendants of the same pressurized
vessels NASA has been building since the Mercury days. The boosters that will lift the ships
off the ground are reconfigured engines cannibalized from the shuttle. The technology
needed to distill the Martian atmosphere is the stuff of first-year chemistry texts. For this
reason, Zubrin believes, Mars Direct could be surprisingly affordable: about $40 billion for
five missions, or less than half the cost of the Apollo program in today's dollars.

But is traveling to Mars on the cheap the best way to go? As the recent failures of NASA's
unmanned Mars probes suggest, makeshift machines built with off-the-shelf parts may save
money, but when it comes time to fly, they often fall short. At the Johnson Space Center,
engineers are thus looking at other Mars scenarios that still include frugal, on-site fuel
manufacturing but also call for six-person crews, bigger vehicles and Apollo-style
motherships in Martian orbit. "We're trying to take the best ideas and fold them into a
reasonable approach," says Drake.

Whichever approach is chosen, what all of them have in common is the speed with which
they could be pulled off. Unlike the early Apollo planners, who weren't even sure they
could get astronauts into near-Earth space, much less fling them out to the moon, Mars-
mission directors have the basic space-travel technology down cold. All they need is the
go-ahead to design and build their machines. Until they get the nod, the Mars partisans
have to find ways to keep busy. Research teams from NASA and the Mars Society (a private
advocacy group) are conducting expeditions to Devon Island in the Canadian Arctic - a place
about as similar to the freeze-dried Martian wasteland as you're likely to find anywhere on
Earth - to practice survival skills and exploration techniques. Teams at the Johnson Space
Center are refining their mission scenarios and crunching their numbers to keep the costs as
low as possible. "For now," says Zubrin, "the only thing between us and Mars is a political
decision to go." That kind of hurdle, of course, is often the highest of all.

Will We Meet E.T.?
Most scientists used to believe that we would eventually encounter extraterrestrial life, even
if it were microscopic. Now they're not so sure
By FREDERIC GOLDEN

We've come a long way since 1600, when Giordano Bruno, a defrocked priest from Naples,
was burned at the stake for espousing, among other things, his belief that there might be
other worlds and other life-forms beyond Earth. In our Star Trekking age, it's now almost
heretical not to believe in extraterrestrial life - a belief that will surely be fortified by last
week's announcement of the discovery of two Saturn-size planets around two distant stars.

Polls show that 54% of Americans are convinced that there are aliens out there, to say
nothing of the significant fraction (30%) who suspect we've already been visited by them.

If there really is life elsewhere in the universe, what are the odds of finding it in our
lifetime - or even our children's? Hunting for extraterrestrials, smart or otherwise, requires
a lot of faith. You have to believe that conditions for life (liquid water, mild temperatures,
protection from lethal radiation) are not unique to Earth; that under the right circumstances,
life can arise fairly easily; and that if it does reach a level advanced enough to broadcast its
presence, it won't destroy itself in a nuclear war or an environmental meltdown before
firing off Earth-bound communiqués.

That's plenty of ifs for skeptical scientists to swallow. As physicist Enrico Fermi liked to
say, if there are so many extraterrestrials out there, why haven't we heard from them?

To some curmudgeonly types, all this E.T. talk is pretty brainless. Evolutionary biologist
Ernst Mayr, for one, considers the likelihood of life of any sort beyond our planet close to
zilch. Says he: "The chance that this improbable phenomenon [the creation of life] could
have occurred several times is exceedingly small, no matter how many millions of planets
in the universe."

Paleontologist Peter Ward and astronomer Donald Brownlee agree. In a provocative new
book, Rare Earth, they maintain that in most places beyond Earth, radiation and heat levels
are so high, life-friendly planets so scarce and the cosmic bombardments - like the one that
killed off the dinosaurs 65 million years ago - so severe that the only life-forms that might
make it would be bacteria-like critters living deep in the soil. The odds against
technologically advanced societies, they argue, are astronomical.

Surprisingly, even Geoff Marcy, the leader in the increasingly successful hunt for planets
outside our solar system, feels that we may well be alone in the universe. Most of the 33
newly discovered planets - giant gas bags all, including those two new ones - swing so
erratically around their parent stars that they would create havoc on any smaller, nearby,
life-friendly planets.

But such pessimism represents a minority view among scientists, at least those with their
eyes on the stars. "In this business, you have to remain optimistic," says radio astronomer
Frank Drake, who kicked off the original Search for Extraterrestrial Intelligence (SETI) in
1960 with his whimsically named Project Ozma (after the satiric Oz books). He "looked"
briefly at several nearby sunlike stars in the hope that they might be orbited by planets
whose inhabitants were sending out intelligible signals, like the flood of radio and TV
broadcasts we've been inadvertently blasting into space for the past 80 years. ("Hey there,
Alpha Centauri! Whaddya think of The Honeymooners?")

Drake, alas, detected nary a peep. Nor has anyone else since then. Even after spending
thousands of hours scanning the skies, at myriad frequencies, at a cost of more than $100
million, astronomers have yet to detect a single credible signal, though the most distant star
probed is barely 1% of the way across the galaxy.

Still, Drake, 70, who devised the definitive equation for calculating the possible number of
technologically advanced civilizations in our Milky Way galaxy, remains convinced that he
will be around when one of them calls. "We're just at the beginning of our search," says
Drake, who reckons that there are some 10,000 high-tech worlds scattered among the Milky
Way's 100 billion or more stars. That's a much more modest figure than the late Carl
Sagan's estimate of 1 million intelligent civilizations in just our galaxy - one of perhaps
100 billion galaxies scattered through the universe.
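
Drake's equation, referenced above, is worth stating; the article cites it without writing it
down. In its standard form,

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L ,
```

where R* is the galaxy's star-formation rate, f_p the fraction of stars with planets, n_e the
number of habitable planets per such system, f_l, f_i and f_c the fractions of those on which
life, intelligence and detectable technology arise, and L the average lifetime of a
communicating civilization. Optimistic and pessimistic inputs are what separate Drake's
10,000 worlds from Sagan's 1 million.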

For years Congress funded various SETI efforts - until the political stigma of paying for
the quest for "little green men," as cynics like to call them, scuttled federal funding in 1993.
Nonetheless, NASA continues the search for unearthly life, even if it's only for little green
bugs, under the more politically palatable label of astrobiology. Right now, NASA is
eyeing the dusty surface of Mars (where water once flowed) and the likely ocean under the
ice of Jupiter's moon Europa as sites for primitive life-forms. One recent false alarm: the
much trumpeted Martian meteorite found in Antarctica apparently does not contain
convincing evidence of the existence of microorganisms on the Red Planet, as originally
claimed.

But the grander dream - of contacting extraintelligent E.T.s like those canal-building
Martians imagined by the early 20th century astronomer Percival Lowell - lives on in the
radio and optical searches underwritten by private outfits like Drake's SETI Institute and
the Planetary Society. And even scientists dubious of success don't want to be spoilsports.
They agree on the importance of continuing the quest, not just for microbes on Mars or
Europa but also for those faint signals from some remote world - if only to underscore the
preciousness of life and the importance of protecting perhaps its lone example. Admits
Drake: "Even a negative answer is better than no answer at all."

Maybe you'd like to try tracking down E.T. yourself? If you have a PC or Mac that sits idle
at least a few hours a day, you can join the 1.7 million people who have downloaded
SETI@home, a free screen saver (available at setiathome.ssl.berkeley.edu) that uses your
computer's downtime to help sort through the reams of noisy static gathered by radio
telescopes. The odds of pulling a Jodie Foster (who snared the elusive extraterrestrial signal
in the 1997 sci-fi flick Contact) are a zillion to one. But if you fail - or even if you
succeed - nobody's going to burn you at the stake.

Will Someone Build a Perpetual Motion Machine?
Getting something for nothing may violate fundamental laws of physics - but that won't
keep inventors from trying
By MICHAEL D. LEMONICK

Human nature stays constant enough that it's easy to answer this one. Yes, someone will
build a perpetual-motion machine in the next few years. Or, more likely, dozens of
perpetual-motion machines, as starry-eyed inventors have been doing since medieval times.
Not a single one will work, but they'll keep trying. Nothing is more seductive, after all, than
the idea of a free lunch.

That, in essence, is all a perpetual-motion machine amounts to: a device that operates
without any external power source. Or, rather, doesn't operate - for perpetual-motion
devices are mechanical impossibilities - and unless someone finds a way to repeal two
fundamental laws of physics, they always will be. The regulations in question are the first
and second laws of thermodynamics. They say, respectively, that energy can be neither
created nor destroyed but only changed in form and that it's impossible to make a machine
that doesn't waste at least a little energy. In short, you can't win, and you can't even break
even where energy is concerned.
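
In symbols, the two laws invoked here read (standard textbook statements):

```latex
\Delta U = Q - W \quad \text{(first law)}, \qquad
\Delta S_{\text{universe}} \ge 0 \quad \text{(second law)} .
```

A machine that creates energy would need to output work W greater than the heat Q it takes
in over a cycle (where ΔU = 0), violating the first law; a perfectly lossless machine would
need a cycle that produces no entropy at all, which the second law forbids for any real
process.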

Physicists didn't figure this out until the 1800s, so at least the early advocates of perpetual
motion had the excuse of ignorance. In 1618, for example, a London doctor named Robert
Fludd invented a waterwheel that needed no river to drive it. Water poured into his system
would, in theory, turn a wheel that would power a pump that would cause the water to flow
back over the wheel that would power the pump, and so on. But the second law means that
any friction created by wheel and pump would turn into heat and noise; reconverting that
into mechanical energy would take an external power source. Even if the machine were
friction-free, the wheel couldn't grind grain. That would require energy beyond what it took
to keep the wheel itself going. No good, says the first law, and Fludd's invention was a dud.

Once thermodynamics was codified, it was clear that Fludd and the hundreds who followed
him had been doomed to failure before they began. Yet if anything, learning that their task
was impossible spurred perpetual-motion fanatics on to even greater efforts. So many
hopefuls continued to apply for perpetual-motion patents that in 1911 the U.S. Patent
Office decreed it would henceforth accept working models only - and they had to work for
a year to qualify. No one has pulled it off so far.

Instead, the perpetuals have become more sophisticated. Most (though not all) now admit
their machines are using outside energy - usually via new theories of physics that
physicists don't grasp yet. Joseph Newman, for example, a Mississippi inventor, promoted
an "Energy Machine" in the 1980s that operated via "gyroscopic particles." More recently,
New Jersey inventor Randell Mills has been pushing power from "hydrinos." Still others
claim they're tapping the "zero-point energy" that fills all space. The first two are
considered nonsensical, and while zero-point energy has a basis in science, using it to run a
machine does not.

Denied a fair hearing by other scientists (so they always insist), these outsiders promote
their work as best they can: through press releases, lectures and, in one gutsy case, a full-
page ad in the journal Physics Today. Maybe someday a latter-day Einstein will overrule
the energy laws we know. Until then, perpetual motion will be an impossible dream that's
impossible to resist.

Can We Save California?
Predicting earthquakes is one thing; preventing them would be something else
By DICK THOMPSON

Sometime in the next 30 years, according to the most recent forecast from the U.S.
Geological Survey, a large portion of the San Francisco Bay Area will jump more than 3 ft.
in less than 30 sec., shaking the ground for perhaps 100 miles and triggering an earthquake
with a magnitude of 6.7. Bridges will buckle. Apartment buildings will pancake. The dorms
at the University of California, Berkeley, will roll like barrels on a wave. Water, power and
transportation lines will be cut. The subway that runs under the bay could be a death trap.
By the time the dust settles, more than 100,000 people will be homeless. Economic losses
will total at least $35 billion. The human cost will be immeasurable.

Is there a way to prevent this catastrophe? Probably not within the next 30 years, but
perhaps some day. No plans to stop a quake in Northern California are seriously being
considered, and even if researchers had one they would like to try, enormous scientific and
legal barriers would stand in the way. But new science, some accidental observations and a
novel "microscope" to study the spawning grounds of earthquakes have for the first time
made quake prevention something worth considering.

The basic science is pretty straightforward. The earth lurches from time to time because its
outer shell is broken into 11 huge, solid plates floating on a layer of molten rock that has
the consistency of Silly Putty. These tectonic plates are constantly jostling each other, like
rafts crowded into a small pond, and it's along the boundaries where they meet that most
quakes are born. The two plates that form California's infamous San Andreas Fault, the
Pacific and the North American plates, are the largest on Earth. And they're moving
inexorably in opposite directions.

Complicating matters is the fact that the plates don't slide past each other very smoothly.
Like two giant pieces of sandpaper, they often get stuck, sometimes for hundreds of years.
The pressure keeps building until something, often a juncture just a few miles below the
earth's surface, snaps. Then the two plates move abruptly, sometimes great distances. A
California quake in 1857 separated fences and roads by as much as 30 ft.

You might think that bolting the two plates together would fix the problem or at least buy a
little time. But given the forces involved, holding the San Francisco Bay Area together for
even a brief period of time would require bolts the size of the World Trade Center towers,
an engineering feat that not even a modern-day Pharaoh could afford.

And where would you put them? The great San Francisco quake of 1906 cracked the earth
across 350 miles, about one-third of the northern San Andreas Fault. To make things worse,
California is riddled with faults that are smaller and not so well mapped as the San
Andreas. For example, the 1994 Northridge quake, which registered a magnitude of 6.6 and
caused $20 billion in damage, occurred on a fault no one knew was there at all.

But the biggest problem with such a plan is that even when bolted together, the plates
would continue to build up stress, and that stress would have to be relieved somewhere. A
much better approach would be to relieve the stress gradually, with a lot of small
quakes, rather than let it accumulate. If man-made quakes could move the fault just 1.67
in. a year, the Big One on the northern San Andreas could be averted.
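The arithmetic behind that figure is worth a glance. About 1.67 in. a year is the long-term
rate at which the two plates try to slide past each other, so a fault that stays locked for two
centuries (the 200-year span is our assumption, for illustration) stores up roughly

$$1.67\ \tfrac{\text{in.}}{\text{yr}} \times 200\ \text{yr} \approx 334\ \text{in.} \approx 28\ \text{ft},$$

comfortably in line with the 30-ft. offsets left by the 1857 quake. Bleed off those 1.67 in.
every year, and in principle there is nothing left to snap.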

As it happens, scientists have stumbled on several ways to do just that. When the lake
behind the Hoover Dam was first filled, it triggered quakes in a region that had been
seismically inactive. Nuclear-weapons designers found that they were also generating
quakes at the Nevada Test Site when they detonated underground blasts. But the real
breakthrough came when the U.S. Army began pumping liquid wastes into the ground near
Denver at its Rocky Mountain Arsenal and discovered that the pumping was setting off
tiny artificial quakes. Scientists studying the phenomenon found that the fluids were
lubricating the fault boundaries, allowing them to slip past each other.

This work stimulated a lot of excitement among geologists. They speculated that stress
along the San Andreas could be relieved by pumping water deep into the fault, allowing the
Pacific and North American plates to squish by each other gradually, without lurching. The
numbers, however, were not on their side. Lubricating the entire fault would require
enormous amounts of drilling, flooding and pumping. And since no one knew where the
quake might originate, the job would have to be done along a broad stretch of the fault line.
They needed a focal point.

That's where the so-called initiation points come in. "All earthquakes start in small areas,"
says William Ellsworth, who has studied quakes in Southern California as a member of the
USGS Earthquake Hazards Team. These initiation points can be as small as a foot or as
large as a mile or two across. Instead of lubricating 350 miles, it might be possible to
concentrate on just eight or 10 miles: costly, but not prohibitive.

Unfortunately, scientists understand very little about the dynamics of initiation points:
how smooth or rough the faults are, what kind of rock is found there, what goes on just
before and after a quake. Laboratory experiments can reproduce the pressures to simulate
the environment at the initiation point, but without knowing the effects of pressure building
up over the centuries, any attempt to release it could backfire. Says Mary Lou Zoback, chief
scientist of the hazards team: "If we generated a 1906 earthquake, that would be the end of
the USGS." And maybe Los Angeles too.

So the National Science Foundation will explore that critical environment, if Congress
approves the $17.4 million project, as early as next year. As part of project EarthScope,
quake researchers hope to create a geological "microscope" by drilling a hole beside the
San Andreas at one of its most active regions, turning their drill 45° at 1.5 miles
deep, and then boring right through the fault. This would give scientists their first direct
access to an earthquake-initiation site, where they could set up monitors to observe changes
in temperature, fluid pressure, gas composition and all the other vital signs of geological
activity.

Once scientists understand precisely what initiates earthquakes, they will be in a better
position to think about preventing them. That may be the challenge for earthquake
specialists in the 22nd century.

Will We Have A Final Theory Of Everything?
Tying together relativity and quantum physics might require 10-dimensional "strings," or
something even stranger
By STEVEN WEINBERG

The 20th century was quite a time for physicists. By the mid-1970s we had in hand the so-
called standard model, a theory that accurately describes all the forces and particles we
observe in our laboratories and provides a basis for understanding virtually everything else
in physical science.

No, we don't actually understand everything; there are many things, from the turbulence of
ocean currents to the folding of protein molecules, that cannot be understood without
radical improvements in our methods of calculation. They will provide plenty of interesting
continued employment for theorists and experimenters for the foreseeable future. But no
new freestanding scientific principles are needed to understand these phenomena. The
standard model provides all the fundamental principles we need.

There is one force, though, that is not covered by the standard model: the force of gravity.
Einstein's general theory of relativity gives a good account of gravitation at ordinary
distances, and if we like, we can tack it on to the standard model. But serious mathematical
inconsistencies turn up when we try to apply it to particles separated by tiny distances,
distances about 10 million billion times smaller than those probed in the most powerful
particle accelerators.
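To put that figure in perspective: 10 million billion is $10^{16}$, and today's accelerators
probe distances of very roughly $10^{-19}$ m (our round number), so the trouble starts near

$$\frac{10^{-19}\ \text{m}}{10^{16}} = 10^{-35}\ \text{m} \;\approx\; \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m},$$

the Planck length, the scale at which quantum effects and the curvature of space-time
become comparably strong.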

Even apart from its problems in describing gravitation, however, the standard model in its
present form has too many arbitrary features. Its equations contain too many constants of
nature, such as the masses of the elementary particles and the strength of the fundamental
units of electric charge, that are there for no other reason than that they seem to work.
writing these equations, physicists simply plugged in whatever values made the predictions
of the theory agree with experimental results.

There are reasons to believe that these two problems are really the same problem. That is,
we think that when we learn how to make a mathematically consistent theory that governs
both gravitation and the forces already described by the standard model, all those seemingly
arbitrary properties will turn out to be what they are because this is the only way that the
theory can be mathematically consistent.

One clue that this should be true is a calculation showing that although the strengths of the
various forces seem very different when measured in our laboratories, they would all be
equal if they could be measured at tiny distances, distances close to those at which the
above-mentioned inconsistencies begin to show up.
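The calculation in question concerns the "running" of the coupling strengths: the measured
strength of each force changes slowly, roughly in proportion to the logarithm of the energy
(and hence, inversely, the distance) at which it is probed. Extrapolated upward, the three
lines very nearly meet at a point,

$$\alpha_1(\mu) \approx \alpha_2(\mu) \approx \alpha_3(\mu) \quad \text{at} \quad \mu \sim 10^{16}\ \text{GeV},$$

though exactly how sharply they converge depends on assumptions about particles not yet
discovered.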

Theorists have even identified a candidate for a consistent unified theory of gravitation and
all the other forces: superstring theory. In some versions, it proposes that what appear to us
as particles are really stringlike loops that exist in a space-time with 10 dimensions. But we
don't yet understand all the principles of this theory, and even if we did, we would not
know how to use the theory to make predictions that we can test in the laboratory.

Such an understanding could be achieved tomorrow by some bright graduate student, or it
could just as easily take another century or so. It may be accomplished by pure
mathematical deduction from some fundamental new physical principle that just happens to
occur to someone, but it is more likely to need the inspiration of new experimental
discoveries.

We would like to be able to judge the correctness of a new fundamental theory by making
measurements of what happens at scales 10 million billion times smaller than those probed
in today's laboratories, but this may always be impossible. With any technology we can
now imagine, measurements like those would take more than the economic resources of the
whole human race.

Even without new experiments, it may be possible to judge a final theory by whether it
explains all the apparently arbitrary aspects of the standard model. But there are
explanations and explanations. We would not be satisfied with a theory that explains the
standard model in terms of something complicated and arbitrary, in the way astronomers
before Copernicus explained the motions of planets by piling epicycles upon epicycles.

To qualify as an explanation, a fundamental theory has to be simple: not necessarily a few
short equations, but equations that are based on a simple physical principle, in the way that
the equations of general relativity are based on the principle that gravitation is an effect of
the curvature of space-time. And the theory also has to be compelling: it has to give us the
feeling that it could scarcely be different from what it is.

When at last we have a simple, compelling, mathematically consistent theory of gravitation
and other forces that explains all the apparently arbitrary features of the standard model, it
will be a good bet that this theory really is final. Our description of nature has become
increasingly simple. More and more is being explained by fewer and fewer fundamental
principles. But simplicity can't increase without limit. It seems likely that the next major
theory that we settle on will be so simple that no further simplification would be possible.

The discovery of a final theory is not going to help us cure cancer or understand
consciousness, however. We probably already know all the fundamental physics we need
for these tasks. The branch of science in which a final theory is likely to have its greatest
impact is cosmology. We have pretty good confidence in the ability of the standard model
to trace the present expansion of the universe back to about a billionth of a second after its
start.

But when we try to understand what happened earlier than that, we run into the limitations
of the model, especially its silence on the behavior of gravitation at very short distances.
The final theory will let us answer the deepest questions of cosmology: Was there a
beginning to the present expansion of the universe? What determined the conditions at the
beginning? And is what we call our universe, the expanding cloud of matter and radiation
extending billions of light-years in all directions, really all there is, or is it only one part of a
much larger universe in which the expansion we see is just a local episode?

The discovery of a final theory could have a cultural influence as well, one comparable to
what was felt at the birth of modern science. It has been said that the spread of the scientific
spirit in the 17th and 18th centuries was one of the things that stopped the burning of
witches. Learning how the universe is governed by the impersonal principles of a final
theory may not end mankind's persistent superstitions, but at least it will leave them a little
less room.

Steven Weinberg is a Nobel laureate in physics at the University of Texas. His books
include Dreams of a Final Theory

Will We Discover Another Universe?
Physicists, exploring science's most arcane equations, have discovered what may be hints of other universes
By MICHAEL D. LEMONICK

Captains Kirk and Picard stumbled into them all the time. Alice found one beyond the
looking glass. The children in C.S. Lewis' Chronicles of Narnia entered theirs by way of a
musty old wardrobe. Considering their usefulness as a plot device, it's hardly surprising that
science fiction and fantasy literature are filled with alternate universes of one kind or
another. What is surprising, though, is that mainstream physicists have stumbled onto their
own alternate universes, hidden amid the complexities of science's most arcane equations.

Nobody knows what form these parallel worlds might take, and it's far from clear that we
could detect their existence, let alone step through a mirror or a space warp for a visit. But
hints that ours is just one of many universes keep cropping up in all sorts of different
theories, and in ways that can seem far stranger than fiction.

The first credible suggestion that alternate universes might exist came in the early 1950s
when a young physics graduate student named Hugh Everett was toying with some of the
more bizarre implications of quantum mechanics. That theory, accepted by all serious
physicists, says that the motions of atoms and subatomic particles can never be predicted
with certainty; you can tell only where, say, an electron will probably be a millisecond from
now. It could quite possibly end up somewhere else.
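Those probabilities come from what physicists call the Born rule: the chance of finding a
particle at a given place is the squared magnitude of a complex "amplitude" assigned to
that place. A minimal sketch (the two-location electron and its amplitudes below are
invented for illustration):

```python
# The Born rule in miniature: probabilities are the squared magnitudes
# of complex amplitudes. These amplitudes are made up for illustration.
import numpy as np

amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.8j])   # electron "here" vs. "there"
probabilities = np.abs(amplitudes) ** 2           # Born rule: P = |psi|^2
assert np.isclose(probabilities.sum(), 1.0)       # a proper state sums to 1

rng = np.random.default_rng()
outcome = rng.choice(["here", "there"], p=probabilities)
print(probabilities, outcome)                     # [0.36 0.64], then e.g. 'there'
```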

Precisely what that fundamental uncertainty tells us about the basic nature of the subatomic
world is a question theorists have been wrestling with for decades. The great Danish
physicist Niels Bohr, for example, believed that before you pinned a particle down by
measuring it, the particle was literally in several places at once. The act of measurement, he
suggested, forced the particle to choose one location over all the others.

But Everett had another idea: when you locate the electron, he argued, the world splits into
multiple universes. In each one, the electron has a different position, and all these many
worlds, each equally real, go on to have their own futures. In this so-called many-worlds
interpretation of quantum mechanics, the universe is incredibly prolific, since each particle
in the cosmos produces a multitude of new universes in each instant, and in the next
instant, every one of these new universes fragments again. Yet plenty of physicists consider
this to be a perfectly valid idea. And if it's correct, the number of universes evolving in
parallel is far greater than we could ever count.
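The prolificacy is easy to quantify. If every binary quantum event splits each branch in
two, then $n$ events yield

$$N = 2^n, \qquad 2^{300} \approx 2 \times 10^{90},$$

already more branches than there are atoms in the observable universe (roughly $10^{80}$),
and ordinary matter undergoes unimaginably more than 300 such events every instant.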

It's all purely theoretical, though, since there's no conceivable way to make contact with
even one of these alternate universes. So while each of us may spawn an uncountable
number of parallel selves as the particles within us split and re-split, the chance of tapping
into our other histories is precisely zero, and so, alas, is the chance of figuring out whether
this interpretation of quantum mechanics is correct.

Things don't look much more certain in the second type of alternate universe, which comes
not from quantum mechanics but from the other great physics revolution of the 20th
century, Einstein's general theory of relativity. According to Einstein, objects with
extremely large mass or high density stretch the fabric of space-time. Find something
whose density approaches infinity (a black hole, for example) and that stretch can
become a tear.

This rip in space-time, better known as a wormhole, could in theory serve as a shortcut to a
distant part of the universe (characters on Star Trek: Deep Space Nine use wormholes the
way New Yorkers use subways). But according to an idea proposed in the 1980s by
Stephen Hawking, it could also lead out of our cosmos altogether, creating a "baby
universe" that would then expand and grow, forming its own self-contained branch of
space-time.

Will We Figure Out How Life Began?
We may determine what started it all, but that might not tell us whether life was inevitable
or just a lucky accident
By STEPHEN JAY GOULD

A deep problem, philosophical rather than factual, stymies all our attempts to define the
nature of life. Scientific generalizations require replication, the demonstration that a given
set of forces and substances will yield the same result when brought together under the
same conditions. Ideally, we test for replication with time-honored procedures that
scientists call controlled experiments: artificially simplified situations manipulated by
human observers to guarantee (within the best of our ability) an exact repetition of all
timings, forces and substances. If we achieve the same result in each of several replications,
we then gain confidence that we may be witnessing a predictable generality based upon a
law of nature. This search for replicates underlies the efforts, and partial successes, of
scientists to synthesize living matter from the presumed chemical constituents in the
"primordial soup" of the earth's original oceans. Can we create some rudimentary forms of
life by exposing these constituents to known sources of energy (lightning from electrical
storms, heat from oceanic vents, for example) under the presumed conditions of the earth's
early atmosphere and surface?

In this context of accepted scientific procedures, single occurrences present a knotty
problem. Their "truth" cannot be denied, but how can we use their existence to assert any
generality rather than an explanation for a singular circumstance? For specific events of
history (the rise, domination and extinction of dinosaurs, for example) we seek no such
generality, and specific narrations for bounded events supply the explanations we seek.
Thus a particular asteroid, striking the earth 65 million years ago and leaving evidence of
its impact off the Yucatan Peninsula, probably triggered a global extinction that sealed the
fate of dinosaurs and many other creatures. In developing such evidence, we have
explained a unique historical event, but we have not discovered a general law of nature.

But when we ask questions about the nature of life, when we wonder, for example, how
common life may be in the universe or inquire whether any potential life on other worlds
will look like the life we already know on Earth, then we are seeking to understand
general principles about the essential character of the natural universe and not simply to
explicate a particular set of historical events. To formulate such general principles, as I
argued above, we need replicates, either made in our laboratories or found elsewhere in the
universe.

The life that we know, however wondrous in extent and variety, all proceeds, or so our
best inferences tell us, from one single experiment. The biochemical features underlying
this amazing variety and the coherent fossil record of 3.5 billion years (implying a single
branching tree of earthly life with a common trunk) indicate that every living thing on
Earth, from the tiniest bacterium on the ocean floor to the highest albatross that ever flew in
the sky, arose as the magnificently diversified evolutionary outcome of one single
experiment performed by nature, one origin of life in the early history of one particular
planet.

Thus we can define the life we know by specifying its common features and properties. But
if we wish to move from this knowledge to statements about the general nature of any
potential life in the universe, we remain stymied in two key ways.

First, we can make no reasoned conjecture about the frequency, or even the existence, of
life elsewhere in the universe. As an optimist by temperament and as a betting man, I allow
that certain features of the natural world would lead me to place my chips on yes if
someone forced me to wager. But I also know the difference between a pure flutter based
on hope and a smart play based on genuine probabilities.

The fact that fossilized life of the simplest bacterial grade appears in some of the most
ancient rocks on Earth suggests that an origin of life in these conditions may be nearly
inevitable, since incredibly improbable events should not occur so quickly. But my
skeptical side retorts that good luck in one try proves nothing. I may win the lottery the first
time I buy a ticket, and I might flip 10 heads in a row on my first sequence of tosses.
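Gould's coin-flip point is easy to make exact:

$$P(10\ \text{heads in a row}) = \left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024} \approx 0.1\%,$$

unlikely, but nowhere near miraculous, and a one-in-a-thousand fluke on the first try tells
us nothing about the underlying odds.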
I might also argue that since our immense universe contains gazillions of galaxies filled
with appropriate stars and planets, and since life did emerge on the one and only planet we
really know, how can we deny that a sizable proportion of these other planets must also
contain life? Yet a logical fallacy dooms this common argument because either alternative
can be reconciled with the positive result that I must obtain for the only place I can
sampleour Earth. For if all appropriate planets generate some form of life, then I should
not be surprised that I have found living things on my own world. But if life really exists on
my planet alone, then I must still record a positive result from this only possible sample.
After all, I knew the answer for the earth before I ever formulated my scheme for sampling.

Unfortunately, we are stymied by the fact that our knowledge about life must, at least for
now, be limited to studies of a single experiment on Earth. All earthly life shares a
remarkably complex set of biochemical features, but does this commonality record the only
conceivable building blocks for any entity that we would call "alive"? Or do all earthly
creatures share these features only because we have inherited these properties from a
common ancestor that used one configuration among a million alternatives unknown to us
but quite conceivable and workable? Indeed, would we, in our carbon-based parochialism,
even recognize otherworldly forms of life (pulsating sheets of silica, perhaps) well
beyond our ken?

The architect of this conceptual prison built only two doors leading to a solution, with the
path to each door marked by the same sign: FIND A REPLICATE! On one path, we make
the replicates ourselves by gaining such an improved understanding of the nature of things
that we can define the set of all conceivable living forms and then test their properties by
chemical synthesis in our laboratory.

As a natural historian at heart, however, I confess my strong preference for the second path
of exploration: a search for possible natural occurrences elsewhere. This Columbian path
has served us so well before, and nature's products do tend to outshine our own poor
workmanship by manifesting things undreamed of in all our philosophy. So let us seek
nature's own replicate, on Mars or a few other potential places in our solar system, if we
really luck out (and are willing to content ourselves with simple things at bacterial grade
and unfit for mutual conversation); or elsewhere, despite daunting distances (beyond any
possibility for two-way conversation during human lifetimes) but promising, in the most
exciting and improbable long shot in all human history, a potential insight soaring well
beyond our meager powers of imagination.

Stephen Jay Gould is a professor at Harvard and New York University and author of
numerous books, including Rocks of Ages

Will We Control The Weather?
Actually, we already do, and the results have not been good. The trick will be to devise a
way to undo the damage
By J. MADELEINE NASH

A tropical storm quickly takes shape over the Atlantic Ocean, a furiously whirling dervish
with a skirt of thunderstorms. But just as quickly the storm is challenged by dozens of
National Weather Service planes, which sally forth from East Coast airstrips like fighters
on the tail of an enemy bomber. Attacking from above and below, the planes fire off a
barrage of esoteric weapons that sap the strength of the raging winds in the developing eye
wall.

Ammunition expended, the lead pilot flashes a thumbs-up, confident that once again she
and her team of veteran storm chasers have prevented a hurricane from forming.

Could something like this really happen? Probably not. Such fanciful scenarios are period
pieces. They belong to the 1950s and '60s, when scientists harbored an almost naive faith in
the ability of modern technology to end droughts, banish hail and improve meteorological
conditions in countless other ways. At one point, pioneering chemist Irving Langmuir
suggested that it would prove easier to change the weather to our liking than to predict its
duplicitous twists and turns. The great mathematician John von Neumann even calculated
what mounting an effective weather-modification effort would cost the U.S.: about as
much as building the railroads, he figured, and worth incalculably more.

At the start of the 21st century, alas, all that remains of these happy visions are a few
scattered cloud-seeding programs, whose modest successes, while real, have proved less
than earthshaking. In fact, yesterday's sunny hopes that we could somehow change the
weather for the better have given way to the gloomy knowledge that we are only making
things worse. It is now clear that what the world's cleverest scientists could not achieve by
design, ordinary people are on the verge of accomplishing by accident. Human beings not
only have the ability to alter weather patterns on local, regional and global scales, but they
are already doing it, in ways that are potentially catastrophic.

Consider the billions of tons of carbon dioxide that are emitted every year in the course of
our daily life. Driving a car, switching on a light, working in a factory, fertilizing a field all
contribute to the atmosphere's growing burden of heat-trapping gases. Unless we start to
control emissions of CO2 and similar compounds, global mean temperatures will probably
rise somewhere between 2°F and 7°F by the end of the next century; even
the low end of that spectrum could set the stage for a lot of meteorological mischief.
Among other things, the higher the temperature, the more rapidly moisture can evaporate
from the earth's surface and condense as rain droplets in clouds, substantially increasing the
risk of both drought and torrential rain. There could also be a rise in the number of severe
storms, such as the tornado-spawning monsters that hit Texas last week.

Human activity is modifying precipitation in other dramatic ways. Satellite images show
that industrial aerosols (sulfuric acid and the like) emitted by steel mills, oil refineries and
power plants are suppressing rainfall downwind of major industrial centers. In Australia,
Canada and Turkey, according to one study, these pollution patterns perfectly coincide with
corridors within which precipitation is virtually nil. Reason: the aerosols interfere with the
mechanism by which the water vapor in clouds condenses and grows into raindrops big
enough to reach the ground.

This creates an additional conundrum. Because a polluted cloud does not rain itself out,
notes University of Colorado atmospheric scientist Brian Toon, it tends to grow larger and
last longer, providing a shiny white surface that bounces sunlight out to space. Indeed, one
reason the earth has not yet warmed up as much as many anticipated may be the tug-of-war
between industrial aerosols like sulfuric acid (which reflect heat) and greenhouse gases like
carbon dioxide (which trap it). Ironically, then, reducing one kind of pollution may come at
the price of intensifying the effects of the other.

Deforestation has a similarly broad range of impact. One thing trees do is lock up a lot of
carbon in their woody tissues, thereby preventing it from escaping into the atmosphere.
Trees are also important recyclers of moisture to the atmosphere. In some parts of the
Amazon basin, deforestation has reached the point where it is altering precipitation
patterns. This is because so much of the moisture entrained by clouds comes from the
canopy of the forest below; as large tracts of trees disappear, so do portions of the aqueous
reservoir that feeds the local rainmaking machine.

Shrubs, grasses and other vegetative covers act in much the same way, trapping water,
feeding moisture into the atmosphere and providing shade that shields the surface of the
land from the drying rays of the sun. Large-scale land-clearing efforts under way around
the world wipe all that out. The ongoing development of South Florida, for instance, has
filled in and paved over much of the Everglades wetlands, which have long served as an
important source of atmospheric moisture. As a consequence, says Colorado State
University atmospheric scientist Roger Pielke Sr., South Florida in July and August has
become significantly drier and hotter than it would have been a century ago under the same
set of climatic conditions.

To complicate matters further, we are changing the landscape in ways that increase our
exposure to meteorological extremes, so that even if weather patterns in coming decades
were to turn out to be identical to those of the past century, the damage inflicted would be
far worse. To appreciate what happens when vegetative cover is removed, one need look no
further than the 1930s Dust Bowl in the U.S. and the 1970s famine in Africa's Sahel. In
both cases, a meteorological drought was exacerbated by agricultural and pastoral practices
that stripped land bare, exposing it to the not so tender mercies of sun and wind.

Removal of vegetative cover also worsens the flooding that occurs during periods of
torrential rain. Riverine forests serve as sponges that soak up excess water, preventing it
from rushing all at once into rivers and tributaries. In similar fashion, estuarine wetlands
and mangrove forests help shield human settlements from the storm surges that accompany
tropical cyclones and hurricanes. Biologists estimate that 50% of the world's mangrove
forests have already been replaced by everything from shantytowns to cement plants and
shrimp farms. Stir in the expectation that rising temperatures will trigger a rise in sea level,
and you have a recipe for unprecedented disaster.

Scientists are just beginning to disentangle the myriad levels on which human beings and
the natural climate system interact, which only increases the potential for surprise. For
example, we now realize that not all the aerosols we are pumping into the atmosphere exert
a cooling effect. A notable exception is soot, which is produced by wood fires and
incomplete industrial combustion. Because of its dark color, soot absorbs solar energy
rather than reflecting it. So when a recent scientific excursion to the Indian Ocean
established that big soot clouds were circulating through the atmosphere, a number of
scientists speculated that their presence might be raising sea-surface temperatures,
potentially affecting the strength of the monsoon.

The monsoon is not the only climate cycle that human activity could alter. Atmospheric
scientist John M. Wallace of the University of Washington believes that rising
concentrations of greenhouse gases are already beginning to have an impact on another
important cycle, known as the North Atlantic or Arctic Oscillation. In this case it's not the
warming these gases create in the lower atmosphere that is key, but the cooling they cause
in the stratosphere, where molecules of carbon dioxide and the like emit heat to space rather
than trapping it in the upper atmosphere. This stratospheric cooling, Wallace and others
speculate, may have biased prevailing wind patterns in ways that favor a wintertime influx
of mild marine air into Northern, as opposed to Southern, Europe.

Is Wallace right about this? No one yet knows. We are tampering with systems that are so
complex that scientists are struggling to understand them. Climatologist Tom Wigley of the
National Center for Atmospheric Research, for one, fervently believes the answer to our
problems lies not just in improved knowledge of the climate system but in technological
advances that could counter, and perhaps reverse, present trends. In other words, the
far-fetched dreams that prominent scientists like von Neumann once harbored have not
died. Rather they have been transformed and, in the process, become more urgent.

Will Anyone Ever Run a Three-Minute Mile?
Athletes are getting better and better, but is there a limit to human performance? If so, when
will we reach it?
By JEFFREY KLUGER

For a species that prides itself on its athletic prowess, human beings are a pretty poky
group. Lions can sprint at up to 50 m.p.h. when they're chasing down prey. Cheetahs move
even faster, flooring it to a sizzling 70 m.p.h. But most humans, with our willowy spines
and awkward, upright gait, would have trouble cracking 25 m.p.h. with a tail wind, a flat
track and a good pair of shoes.

Nonetheless, the human race is undeniably becoming a faster race. Since the beginning of
the past century, track-and-field records have fallen in everything from sprints to miles to
marathons. The performance arc is clearly rising, but no one knows how much higher it
could climb. At some point, even the best-trained body simply has to up and quit. The
question is, just where is that point, and is it possible for athletes, trainers and scientists to
push it higher?

By almost any standard, the best yardstick for measuring how steadily, if slowly, athletic
performance has improved is the mile run. In 1900 the record for the mile was a
comparatively sleepy 4 min. 12 sec. It wasn't until 1954 that Roger Bannister of Britain
cracked the 4-min. mark, coming in six-tenths of a second under the charmed figure. In the
half-century since, uncounted thousands of mile heats have been run, yet less than 17
additional seconds have been shaved off Bannister's record, about a third of a second per
year.

The credit for the improvement goes mostly to better training and equipment, but shoes and
diet can get only so good before they, and the runners, hit a wall. "It's conceivable the
record could be 3 min. 30 sec. in 50 years," says American Olympic miler Steve Holman.
"But bringing it down much more is a long way off." Scientists agree. In 1987 researchers
at Canada's McGill University developed a mathematical model that predicted a world mile
record of precisely 3 min. 29.84 sec. in 2040.
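For the curious, the straight-line trend the article cites extrapolates easily. The toy
calculation below is ours and naive by construction: real record progress flattens out rather
than following a straight line, which is why the McGill model bends the curve.

```python
# Naive linear extrapolation of the mile record, using the article's
# rate of about a third of a second shaved off per year since 1954.
BANNISTER_1954 = 3 * 60 + 59.4        # 3:59.4, in seconds
RATE_PER_YEAR = 1.0 / 3.0             # seconds shaved per year

def year_record_reaches(target_seconds):
    return 1954 + (BANNISTER_1954 - target_seconds) / RATE_PER_YEAR

print(round(year_record_reaches(3 * 60 + 30)))   # 3:30 mile around 2042
print(round(year_record_reaches(3 * 60)))        # 3:00 mile around 2132
```

The naive line lands close to both Holman's guess and the McGill figure for the 3:30 mile,
which says more about how steady the 20th century trend was than about the 21st.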

All bets are off, however, if genetic engineers find a way to intervene. What slows the
human body down is less the architecture of its skeleton than the chemistry of its muscles.
The key to speed is making muscles contract faster, and the key to that is gassing them up
with as much oxygen as possible. "About 80% of the energy used to run a mile," explains
physiologist Peter Weyand of Harvard University, "comes directly from oxygen."

Muscles process oxygen through cellular components known as the mitochondria. Human
mitochondria take up only about 3% of the space in a cell. But in animals that run the
fastest, mitochondria are far bigger; the mitochondria of an antelope, an animal that easily
runs a 2-min. mile and does so in wispy mountain air 7,000 ft. up, are three times larger
than ours. "If you could genetically engineer humans to have more mitochondria, bigger
hearts and more blood vessels," says Weyand, "we might run about 40 m.p.h."

That kind of redesign isn't possible today, but it may not be far off. Scientists at the
University of Pennsylvania Medical School have already developed a way to equip a
benign virus with genetic material that codes for muscle growth, and they have injected the
virus into mice. The animals quickly bulk up by as much as 20%, becoming not just bigger
but stronger. The researchers have developed other techniques to block cellular signals that
would otherwise cause muscles to atrophy, allowing the new mass to be retained even
without exercise.

"The purpose of this research is not to take elite athletes to a new level," says physiologist
Lee Sweeney, leader of the Pennsylvania team, "but to help them come back from injury
faster." Nonetheless, similar techniques could theoretically be used for mitochondrial and
cardiac redesign.

Runners are not the only athletes for whom souped-up genetics could mean souped-up
performance. If baseball players could increase their bat speed, home runs that barely clear
the fence could fly hundreds of feet farther. Boost leg strength in a football kicker, and a
60-yd. field goal could sail for 80 or 90 yds.

But would this super performance be good for the suddenly superbody? Hang too much
muscle on the skeletal system or place too much strain on the cardiopulmonary system, and
something's bound to give. Racehorses, which are bred and trained for speeds they were
not designed to run, suffer all manner of physical ills, from fractured legs to bleeding
lungs, as a result of overuse. "You don't have these problems in antelopes and cheetahs, but
in horses, we've apparently pushed to the limit," says Weyand. "If a human ran a 2-min.
mile, you might see the same thing."

Of course, even if genetically manipulated athletes did survive their training and thrive in
their sport, it's hard to say what they'd have won. A well-trained runner or ballplayer is one
thing; a well-manufactured one is something else, less a product of skill and will than
tricky genetics and smart pharmacology. "Maybe one day athletes will call up a
prescription for bigger muscles or whatever," says Holman. "But will it still be called a
sport?" You don't have to be an athletic purist to know the answer.

How Will The Universe End? (With A Bang or A Whimper?)
The fate of the cosmos is not fiery cataclysm, say the latest telescopic observations, but a
gradual descent into eternal, frigid darkness
By TIMOTHY FERRIS

We Earthlings are newcomers to cosmology, the study of the universe as a whole, and there
is no cosmological question about which we have more to learn than the riddle of where it's
all ultimately headed. But we have glimpsed at least a few clues to cosmic destiny, some of
them hopeful and others bleak.

The good news is that we're not going to be evicted. The universe is likely to remain
hospitable to life for at least an additional 100 billion years. That's 20 times as long as the
earth has existed, and 5 million times as long as Homo sapiens has lasted so far. If we're not
around to shoot off fireworks on New Year's Eve of the year 100,000,000,000, it won't be
the universe's fault.

The bad news is that nothing lasts forever. The universe may not disappear, but as time
goes by it may get increasingly uncomfortable, and eventually become unlivable.
Calculating how and when this will happen is a genuinely dismal science, but not without a
certain grim fascination. The classic Big Bang theory, refined over the decades since the
astronomer Edwin Hubble discovered the expanding universe in 1929, suggests that cosmic
destiny will be decided through a tug-of-war between two opposing forces. One is the
expansion of space, which for more than 10 billion years has been carrying galaxies ever
farther apart from one another. The other is the mutual gravitational force exerted by those
galaxies and all the other stuff in the universe; it acts as a brake, slowing down the
expansion rate.
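In the standard mathematics this tug-of-war is expressed by the Friedmann equation, which
balances the expansion rate against the density of matter; in the simplest case (no lingering
inflationary push, of which more below) the universe recollapses only if the density $\rho$
exceeds a critical value:

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}, \qquad \rho_c = \frac{3H^2}{8\pi G}.$$

Here $a(t)$ tracks the stretching of space, $H = \dot a/a$ is the expansion rate, and $k$
encodes the overall curvature.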

In this simple picture, if the gravitational force is strong enough to bring expansion to a
halt, the universe is destined to collapse, ultimately dissolving into a fireball, a Big Crunch
that amounts to the Big Bang run in reverse. If it's not, and expansion wins out, then the
universe will eventually grow unpleasantly dark and cold. Stars produce energy by fusing
light atomic nuclei, mainly hydrogen and helium, into heavier ones. When the hydrogen
and helium run low, old stars will sputter out without any new ones to take their place, and
the universe will gradually fade to black. Such were the gloomy alternatives that Robert
Frost wrote about after being briefed on the theory of the cosmic endgame by the
astronomer Harlow Shapley:
Some say the world will end in fire,
Some say in ice...
I think I know enough of hate
To know that for destruction ice
Is also great
And would suffice.

Either fate looks like curtains for life. If the end comes in fire, the Big Crunch would melt
down everything, even subatomic particles. If, on the other hand, the universe winds up
cold and dark, life might hang on for a long time, say, by extracting gravitational energy
from black holes. But trying to make a living once everything has subsided to pretty much
the same temperature, a tad above absolute zero, is like trying to run a water mill on a
dead-still pond.

Our ultimate fate, though, remains in doubt, in part because the jury is still out on whether
the expansion or gravity will triumph in the end. Most observations point toward the
former, but many uncertainties persist. One is the galling "dark matter" issue. Studies of
how galaxies are moving around indicate that there's lots of extra gravity out there,
suggesting that the stars and nebulae we can see constitute only 1% to 10% of the matter in
the universe. The rest is invisible; it emits no light. Nobody yet knows what this dark
matter is. One possibility is that it's made of WIMPs, weakly interacting massive particles.
Until the dark matter is identified and tallied, predicting the future of the universe on the
basis of what we can see will be as uncertain as trying to predict an election by polling a
few golfers down at the country club.

Meanwhile, ironists and fatalists draw austere satisfaction from the fire-or-ice scenario,
which reflects the quintessential human perception that nobody gets out of life alive. And
that's just what makes me suspicious of it. The great lesson of scientific cosmology is that
the universe does not usually conform to our time-honored ways of thinking; to
understand it, we need to think in new ways. Integral to modern cosmology are mind-
bending 20th century concepts like Einstein's curved space, Heisenberg's uncertainty
principle and the realization that exotic subatomic particles sail through our bodies by the
trillions without laying a glove on us, and I see no reason to suppose that doors won't open
onto even stranger notions in the century to come. So perhaps we can glimpse a few shafts
of light, shining under doors as yet unopened, that promise to refine our predictions of
cosmic destiny.

One unknown has to do with inflationthe theory that the universe began as a bubble of
empty space that initially expanded at a velocity much faster than that of light (see "Will
We Discover Another Universe?", in this issue). Cosmologists take inflation seriously
because it resolves problems that bedeviled older versions of the Big Bang, but inflation
also has implications for the study of cosmic destiny. Among them is that the force that
drove the inflationary spasm, sometimes tagged with the Greek letter lambda after its
designation in Einstein's general-relativity equations, might not have subsided altogether
back when the inflationary hiccup ended. Instead, it might still be there, lurking in empty
space and urging expansion along, like an usher politely shooing playgoers back into the
theater at intermission's end. Some observations of exploding stars in distant galaxies
suggest the presence of just such an ongoing inflationary impulse. If so, the tug-of-war over
the future of the universe involves not only expansion and gravitational braking but also the
subtle turbocharging of lingering inflation, which acts to keep the universe expanding
indefinitely.
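For reference, lambda enters Einstein's general-relativity equations as an extra term on the
geometric side,

$$G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},$$

and because it multiplies the fabric of space-time itself rather than its matter content, its
push does not dilute as space expands, which is exactly what would let it keep the
expansion going indefinitely.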

Perhaps the most intriguing unknown, however, concerns the cosmic role played by
intelligent life itself. As the physicist Freeman Dyson notes, "It is impossible to calculate in
detail the long-range future of the universe without including the effects of life and
intelligence." Much of the earth has been transformed, for better and worse, by the presence
here of an intelligent species capable of manipulating its environment for its own benefit.

Similarly, advanced civilizations in the far future might be able to melt down stars and even
entire galaxies to make gigantic campfires, or otherwise tilt the long-term odds in their
favor. Life in the waning cosmic twilight might be jejune, but it could last a long time.
Consider the marshaled resources of all the natural and artificial intelligences in the
observable universe over the next, say, trillion years. Which would you bet on to prevail:
that level of smarts, or a claim, based on 19th century thermodynamics, that they're
doomed?

So let's stay tuned, heeding the words of Einstein, who wrote to a child anxious about the
fate of the world, "As for the question of the end of it I advise: Wait and see!"

Timothy Ferris is the author of The Whole Shebang: A State-of-the-Universe(s) Report and
the PBS special Life Beyond Earth

Will There Be Anything Left To Discover?
Is the great era of scientific inquiry over? Have all the big theories been formulated and
important discoveries made, leaving future scientists nothing but fine tuning? Or is the
real fun about to begin?
By JOHN HORGAN and PAUL HOFFMAN

A spirited debate, conducted via e-mail, between two acclaimed science journalists: John
Horgan, author of the controversial book The End of Science, and Paul Hoffman, former
editor of Discover magazine and past president of the Encyclopaedia Britannica.

HOFFMAN: The past decade has brought a spate of books sounding the death knell for a
host of subjects. Francis Fukuyama served up The End of History and David Lindley The
End of Physics. But your more sweeping work The End of Science (1997) attracted a lot
more attention and controversy, and with good reason. The idea that science may have had
its run, that we've discovered all we can realistically expect to discover and that anything
we come up with in the future will be pretty much small-bore stuff, left people either
intrigued or outraged. With today's seemingly frenetic pace of scientific discovery,
however, how can you say that the whole enterprise is coming to an end? The scientists I
know, far from preparing for the undertaker, are ebullient about the future of their field.

HORGAN: Sure, scientists are keeping busy, but what are they actually accomplishing? My
argument is that science in its grandest sense, the attempt to comprehend the universe and
our place in it, has entered an era of diminishing returns. Scientists will continue making
incremental advances, but they will never achieve their most ambitious goals, such as
understanding the origin of the universe, of life and of human consciousness. Most people
find this prediction hard to believe, because scientists and journalists breathlessly hype each
new breakthrough, whether genuine or spurious, and ignore all the areas in which science
makes little or no progress. The human mind, in particular, remains as mysterious as ever.
Some prominent mind scientists, including [Time Visions contributor] Steven Pinker, have
reluctantly conceded that consciousness might be scientifically intractable. Paul, you should
jump on the end-of-science bandwagon before it gets too crowded.

HOFFMAN: Don't save a seat for me quite yet, John. Take the human mind. I agree that we
are not close to an understanding of consciousness, despite the efforts of some of the best
minds in science. And perhaps you're even right that we may never understand it. But what
is the evidence for your position? You've criticized scientists for having faith, a dirty word
in the scientific lexicon, that our era of explosive progress will continue unabated. Isn't it
at least as much a leap to think that the progress will abruptly end, particularly since the
trajectory of discoveries so far suggests just the opposite, that supposedly unanswerable
questions eventually do get answered?

HORGAN: My faith is based on common sense, Paul, and on science itself. As science
advances, it imposes limits on its own power. Relativity theory prohibits faster-than-light
travel or communication. Quantum mechanics and chaos theory constrain our predictive
abilities. Science's limits are glaringly obvious in particle physics, which, as Steven
Weinberg describes [in the Visions issue], seeks a "theory of everything" that will explain
the origin of matter, energy and even space and time. The leading theory postulates that
reality arises from infinitesimal "strings" wriggling in a hyperspace of 10 (or more)
dimensions. Unfortunately, these hypothetical strings are so small that it would take a
particle accelerator the size of the Milky Way to detect them! I am not alone in fearing that
string theorists are not really practicing science anymore; one leading physicist has derided
string theory as "medieval theology." Paul, here is persuasive evidence of science's plight.

HOFFMAN: Yes, but who is to say that all these scientific theories won't ultimately be
replaced by ones with greater explanatory power? Galileo and Newton thought their laws of
motion were the cat's pajamas, explaining everything under the sun and many things
beyond, but 2 1/2 centuries later a Swiss patent clerk toppled their notions of space and
time. Obviously, Galileo and Newton did not foresee what Einstein found. I think it's
ahistorical to assert that in the future there will never be an Einstein of, say, the mind who
will be able to pull together a theory of consciousness. And even if it's true that some of the
big unanswered questions of science may never be answered, a lot of new and exciting
science could still come from overturning truths that we now take for granted. Robert
Gallo, the AIDS researcher, once told me that at the end of the 1970s, he was at a
conference where a prominent scientist confidently summed up the truths of biomedicine,
such truths as: epidemic diseases are things of the past, at least in so-called developed
nations; a widespread outbreak of infectious disease is impossible unless the microbe is
casually transmitted; the kind of virus found in animals known as the retrovirus doesn't
exist in man; and no virus causes cancer in humans. By the end of the 1980s, these four
truisms had hit the dustbin. Or take a more recent example: the newfound plasticity of the
human brain. Until a year and a half ago, it was a dogma taught in every medical school in
the country that the adult human brain is rigid, that its nerve cells can never regenerate.
Now we know our brains do have the ability to generate new cells, a discovery that may
not only open up a new understanding of the brain but also lead to novel treatments for a
host of brain disorders.

HORGAN: Here's the big question we're dancing around: Can we keep discovering
profound new truths about reality forever, or is the process finite? You seem to assume that
because science has advanced so rapidly over the past few centuries, it will continue to do
so, possibly forever. But this view is, to use your word, ahistorical, based on faulty
inductive logic. In fact, inductive logic suggests that the modern era of explosive scientific
progress might be an anomaly, a product of a singular convergence of social, intellectual
and political factors. If you accept this, then the only question is when, not if, science will
reach its limits. The American historian Henry Adams observed almost a century ago that
science accelerates through a positive-feedback effect. Knowledge begets more knowledge;
power begets more power. This so-called acceleration principle has an intriguing corollary:
If science has limits, then it might be moving at maximum speed just before it hits the wall.
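[In cartoon form, Adams' acceleration principle is a positive-feedback law, knowledge
growing at a rate proportional to the knowledge already in hand:

$$\frac{dK}{dt} = rK \quad\Longrightarrow\quad K(t) = K_0\, e^{rt},$$

exponential growth, which is the kind of curve that looks steepest in the instant before it
hits a ceiling.]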

HOFFMAN: Of course, I accept that science has limitsand may even be up against them
in some fields. But I believe there is still room for science, even on its grandest scale, that
awe-inspiring discoveries will continue to be made over this millennium. The
mathematician Ronald Graham once said, "Our brains have evolved to get us out of the
rain, find where the berries are and keep us from getting killed. Our brains did not evolve to
help us grasp really large numbers or to look at things in a hundred thousand dimensions."
Sounds reasonable, except when you consider that it could be similarly said that our brains
didn't evolve to invent computers, design spaceships, play chess and compose symphonies.
John, I think we'll continue to be surprised by what the brains of scientists turn up.

HORGAN: I hope you're right, Paul. I became a science writer because I believe science is
humanity's most meaningful creation. We are here to figure out why we are here. The
thought that this grand adventure of discovery might end haunts me. What would it be like
to live in a world without the possibility of further revelations as profound as evolution or
quantum mechanics? Not everyone finds this prospect disturbing. The science editor of the
Economist once pointed out to me that if science does end, we will still have sex and beer.
Maybe that's the right attitude, but there aren't any Nobels in it. No matter how far science
does or doesn't advance, however, there's one wild card in even the most pessimistic
scenario. If we encounter extraterrestrial life, and especially life intelligent enough to have
developed its own science, then all bets are off.


II. Coexistence
1. Will women still need men?
It seems that, in terms of reproduction, men will no longer be necessary, and instead of
betting on risky marriages everything suggests there will be short-term contracts,
renewable if the relationship is going well.

2. Will a woman ever become Pope?
Everything suggests NO, but the female presence will grow in all of the world's religions,
including at the highest levels.

3. Will politicians matter?
Apparently not: voter apathy, wobbling political parties, sinking governments. Volunteer
work, however, will grow under the sponsorship of the church, and in this way individuals
will be inspired much as politicians once inspired them: like a religion.

4. Will we put less effort into thinking?
Probably NOT, but we have all heard of the refrigerator that will monitor our food and
report on its condition. In any case, prepare for a life immersed in the computer: we will
eat breakfast, work and vacation with the computer's help.

5. Will we keep going to sports stadiums?
Less and less; sporting events will begin to be watched in special venues located in
shopping malls, the stadium atmosphere will be reproduced, and you will be able to enjoy
Super Bowl CXXXIII in comfort, while longing for the days when people went to the
stadium.

6. What will we do on Saturday nights?
No matter what technological advances occur, the essence of entertainment will not
change. The VCR has not stopped people from going to the movies; technological
advances will simply be added to the menu of available entertainment.

7. Who will be the next elite?
Those who managed to create a business in the virtual world and make a pile of money.

8. What will make us laugh?
It is not very clear what will be funny in the future. Life will be so hurried that humor will
have to be truly funny to win our attention.

9. What will we wear?
Designers will keep creating, and wardrobes will be full of fashionable garments. The
PowerPoint file contains proposals from five designers.

10. Will teenagers exist?
Probably NOT, at least not in the form we know them. Adolescents are growing up too
fast and reaching adulthood earlier than in the past.

11. Will we have privacy?
Despite electronic advances, YES. Our lives will be a book anyone will want to read, but
we will all keep believing that invasion of privacy is immoral, and that will be the best
defense.

12. What will our skyline look like?
Very much like today's; architecture will keep the lines and spaces we now appreciate on
the horizon.

13. What will our homes be like?
Highly computerized.

Will women still need men?
We're only a few in vitro developments away from gender independence. A fantastical look
at the implications--and a user's guide to Guy Land and Gal Land
By BARBARA EHRENREICH

This could be the century when the sexes go their separate ways. Sure, we've hung in there
together for about a thousand millenniums so far--through hunting-gathering, agriculture
and heavy industry--but what choice did we have? For most of human existence, if you
wanted to make a living, raise children or even have a roaring good time now and then, you
had to get the cooperation of the other sex.

What's new about the future, and potentially more challenging to our species than Martian
colonization or silicon brain implants, is that the partnership between the sexes is becoming
entirely voluntary. We can decide to stick together--or we can finally say, "Sayonara, other
sex!" For the first time in human history and prehistory combined, the choice will be ours.

I predict three possible scenarios, starting with the Big Divorce. Somewhere around 2025,
people will pick a gender equivalent of the Mason-Dixon Line and sort themselves out
accordingly. In Guy Land the men will be free to spend their evenings staging belching
contests and watching old Howard Stern tapes. In Gal Land the women will all be fat and
happy, and no one will bother to shave her legs. Aside from a few initial border clashes, the
separation will for the most part be amicable. At least the "battle of the sexes," insofar as
anyone can remember it, will be moved out of the kitchens and bedrooms of America and
into the U.N.

And why not? If the monosexual way of life were counter to human nature, men wouldn't
have spent so much of the past millennium dodging women by enlisting in armies,
monasteries and all-male guilds and professions. Up until the past half-century, women
only fantasized about their version of the same: a utopia like the one described by 19th
century feminist Charlotte Perkins Gilman, where women would lead placidly sexless lives
and reproduce by parthenogenesis. But a real separation began to look feasible about 50
years ago. With the invention of TV dinners and drip-dry shirts, for the first time the
average man became capable of feeding and dressing himself. Sensing their increasing
dispensability on the home front, and tired of picking up dropped socks, women rushed into
the work force. They haven't achieved full economic independence by any means (women
still earn only 75% of what men do), but more and more of them are realizing that ancient
female dream--a room, or better yet, a condo of their own.

The truly species-shaking change is coming from the new technologies of reproduction. Up
until now, if you wanted to reproduce, you not only had to fraternize with a member of the
other sex for at least a few minutes, but you also ran a 50% risk that any resulting baby
would turn out to be a member of the foreign sex. No more. Thanks to in vitro fertilization, we
can have babies without having sex. And with the latest techniques of sex selection, we can
have babies of whatever sex we want.

Obviously women, with their built-in baby incubators, will have the advantage in a
monosexual future. They just have to pack up a good supply of frozen semen, a truckload
of turkey basters and go their own way. But men will be catching up. For one thing, until
now, frozen-and-thawed ova have been tricky to fertilize because their outer membrane
gets too hard. But a new technique called intracytoplasmic sperm injection makes frozen
ova fully fertilizable, and so now Guy Land can have its ovum banks. As for the incubation
problem, a few years ago feminist writer Gena Corea offered the seemingly paranoid
suggestion that men might eventually keep just a few women around in "reproductive
brothels," gestating on demand. A guy will pick an ovum for attractive qualities like smart,
tall and allergy-free, then have it inserted into some faceless surrogate mother employed as
a reproductive slave.

What about sex, though, meaning the experience, not the category? Chances are, we will be
having sex with machines, mostly computers. Even today you can buy interactive CD-ROMs
like Virtual Valerie, and there's talk of full-body, virtual-reality sex in which the pleasure
seeker wears a specially fitted suit--very specially fitted--allowing for tactile as well as
audiovisual sensation. If that sounds farfetched, consider the fact that cyber-innovation is
currently in the hands of social-skills-challenged geeks who couldn't hope to get a date
without flashing their Internet stock options.

Still, there's a reason why the Big Divorce scenario isn't likely to work out, even by Y3K:
we love each other, we males and females--madly, sporadically, intermittently, to be sure--
but at least enough to keep us pair bonding furiously, even when there's no obvious
hardheaded reason to do so. Hence, despite predictions of the imminent "breakdown of the
family," the divorce rate leveled off in the 1990s, and the average couple is still hopeful or
deluded enough to invest about $20,000 in their first wedding. True, fewer people are
marrying: 88% of Americans have married at least once, down from 94% in 1988. But the
difference is largely made up by couples who set up housekeeping without the blessing of
the state. And an astounding 16% of the population has been married three times--which
shows a remarkable commitment to, if nothing else, the institution of marriage.

The question for the new century is, Do we love each other enough--enough, that is, to
sustain the old pair-bonded way of life? Many experts see the glass half empty:
cohabitation may be replacing marriage, but it's even less likely to last. Hearts are routinely
broken and children's lives disrupted as we churn, ever starry-eyed, from one relationship to
the next. Even liberal icons like Hillary Rodham Clinton and Harvard Afro-American
studies professor Cornel West have been heard muttering about the need to limit the ease
and accessibility of divorce.

Hence, perhaps, Scenario B: seeing that the old economic and biological pressures to marry
don't work anymore, people will decide to replace them with new forms of coercion.
Divorce will be outlawed, along with abortion and possibly contraception. Extramarital
hanky-panky will be punishable with shunning or, in the more hard-line jurisdictions,
stoning. There will still be sex, and probably plenty of it inside marriage, thanks to what
will be known as Chemically Assisted Monogamy: Viagra for men and Viagra-like drugs
for women, such as apomorphine and Estratest (both are being tested right now), to reignite
the spark long after familiarity has threatened to extinguish it. Naturally, prescriptions will
be available only upon presentation of a valid marriage license.

It couldn't happen here, even in a thousand years? Already, a growing "marriage
movement," including groups like the Promise Keepers, is working to make divorce
lawyers as rare as elevator operators. Since 1997, Louisiana and Arizona have been
offering ultratight "covenant marriages," which can be dissolved only in the case of
infidelity, abuse or felony conviction, and similar measures have been introduced in 17
other states. As for the age-old problem of premarital fooling around, some extremely
conservative Christian activists have launched a movement to halt the dangerous practice of
dating and replace it with parent-supervised betrothals leading swiftly and ineluctably to the
altar.

But Scenario B has a lot going against it too. The 1998 impeachment fiasco showed just
how hard it will be to restigmatize extramarital sex. Sure, we think adultery is a bad thing,
just not bad enough to disqualify anyone from ruling the world. Meanwhile, there have
been few takers for covenant marriages, showing that most people like to keep their options
open. Tulane University sociologist Laura Sanchez speculates that the ultimate effect of
covenant marriages may be to open up the subversive possibility of diversifying the
institution of marriage--with different types for different folks, including, perhaps someday,
even gay folks.

Which brings us to the third big scenario. This is the diversity option, arising from the
realization that the one-size-fits-all model of marriage may have been one of the biggest
sources of tension between the sexes all along--based as it is on the wildly unrealistic
expectation that a single spouse can meet one's needs for a lover, friend, co-parent,
financial partner, reliably, 24-7. Instead there will be renewable marriages, which get re-
evaluated every five to seven years, after which they can be revised, recelebrated or
dissolved with no, or at least fewer, hard feelings. There will be unions between people
who don't live together full-time but do want to share a home base. And of course there will
always be plenty of people who live together but don't want to make a big deal out of it.
Already, thanks to the gay-rights movement, more than 600 corporations and other
employers offer domestic-partner benefits, a 60-fold increase since 1990.

And the children? The real paradigm shift will come when we stop trying to base our entire
society on the wavering sexual connection between individuals. Romantic love ebbs and
surges unaccountably; it's the bond between parents and children that has to remain
rocklike year after year. Putting children first would mean that adults would make a
contract--not to live together or sleep together but to take joint responsibility for a child or
an elderly adult. Some of these arrangements will look very much like today's marriages,
with a heterosexual couple undertaking the care of their biological children. Others will
look like nothing we've seen before, at least not in suburban America, especially since
there's no natural limit on the number of contracting caretakers. A group of people--male,
female, gay, straight--will unite in their responsibility for the children they bear or acquire
through the local Artificial Reproduction Center. Heather may routinely have two
mommies, or at least a whole bunch of resident aunts--which is, of course, more or less
how things have been for eons in such distinctly unbohemian settings as the tribal village.

So how will things play out this century and beyond? Just so you will be prepared, here's
my timeline:

Between 2000 and 2339: geographical diversity prevails. The Southeast and a large swath
of the Rockies will go for Scenario B (early marriage, no divorce). Oregon, California and
New York will offer renewable marriages, and a few states will go monosexual, as in
Scenario A. But because of the 1996 Defense of Marriage Act, each state is entitled to
recognize only the kinds of "marriages" it approves of, so you will need a "marriage visa"
to travel across the country, at least if you intend to share a motel room.

Between 2340 and 2387: NATO will be forced to intervene in the Custody Wars that break
out between the Polygamous Republic of Utah and the Free Love Zone of the Central
Southwest. A huge refugee crisis will develop when singles are ethnically cleansed from
the Christian Nation of Idaho. Florida will be partitioned into divorce-free and marriage-
free zones.

In 2786: the new President's Inauguration will be attended by all five members of the
mixed-sex, multiracial commune that raised her. She will establish sizable tax reductions
for couples or groups of any size that create stable households for their children and other
dependents. Peace will break out.

And in 2999: a scholar of ancient history will discover these words penned by a gay writer
named Fenton Johnson back in 1996: "The mystery of love and life and death is really
grander and more glorious than human beings can grasp, much less legislate." He will put
this sentence onto a bumper sticker. The message will spread. We will realize that the sexes
can't live without each other, but neither can they be joined at the hip. We will grow up.

Barbara Ehrenreich is author of the forthcoming book Nickel-and-Dimed: Surviving in
Low-Wage America

Will A Woman Become Pope?
A wave of female pastors may make the Vatican look like a lonely male bastion
By DAVID VAN BIEMA

Are you kidding? Not in this century, anyway. But let's drop down the hierarchy a little to a
more approachable rank of shepherd. Forget, for a moment, Catholic or Protestant, priest or
minister. What are the odds that your primary spiritual guide and mentor will be a woman?
Fifty percent, minimum.

That shouldn't surprise anyone who's been paying attention over the past three decades. In
the early 1970s, women streamed into the seminaries at the same time they were marching
into other white-collar professions. Many, notably the Episcopalians, did so literally on
faith, since their denominations barred female ministers. Today half the Christian branches,
plus Reform and Conservative Judaism, ordain women. (Islam does not allow female
imams.) The United Methodists count 7,039 female ministers (out of 44,536 total). In
1999 the small Unitarian Universalist Association recorded a landmark: a ministry that is
more than 50% female. Not every denomination will pack so many X chromosomes into
the pulpit, but with female attendance at seminaries now at 33% and rising, ordinations will
follow.

Won't the feminization of mainline Christian denominations simply drive more traditional
churchgoers to Evangelicalism? No doubt, at least in the short run. Yet there are feminist
undercurrents in conservative places as well. To be sure, in 1984 the Southern Baptist
Convention passed a resolution sanctioning women's service "in all aspects of church life
other than...leadership roles entailing ordination." But the convention cannot dictate to
individual congregations, and the number of ordained Southern Baptist women has
increased each decade. Although most females attending Baptist seminaries (up to 40% at
some schools) have no intention of targeting a pastorate, many other Baptist women have
enrolled in more liberal denominations' schools.

Something more subtle is afoot elsewhere on the traditional right. Pentecostalism, while it
shares the Baptists' scriptural conservatism, relies heavily on the workings of the Holy
Ghost, which are as likely to touch a woman as a man. Women seldom preach unaided, but
Jim and Tammy-style co-pastors are common. And while megachurches may seem the
creatures of their high-powered male senior ministers, the bulk of their person-to-person
spiritual business is done not on supercrowded Sundays but in dozens of small weekday
prayer, study or self-help groups--often led by lay women.

It is here that the feminist impulse merges with an equally dynamic strain in American
religion: the empowering of the laity. All sorts of Christians (and Jews and Buddhists) are
tempering the CEO model of leadership with one that allows churchgoers to be pastoral
counselors or high-ranking administrators. Again, women have rushed to fill the gap.

Which returns us to Catholicism. The Second Vatican Council of 1962-65 set off what
religion futurist Richard Cimino calls "an explosion of lay ministry." This, plus a persistent
priest shortage, caused some parishes to approximate a female pastorate. Circuit-riding
priests would stop by a church to celebrate Communion and hear confession. In between,
however, women--church trained and often called pastor--ran the parish. Eventually there
were at least 300 such arrangements (some say there were thousands), but after John Paul
II's 1994 letter banning further talk of ordaining women, the movement tailed off. The
Pope's decree, however, did not affect "spiritual directors," one-on-one religious tutors who
have thrived since Vatican II. Many of them are women.

"If present trends continue, the chances that your spiritual director will be a woman are
70%," says Emory University provost Rebecca Chopp. She means the term broadly, to
include the whole range of religious authority: Catholic and Protestant, ordained and lay,
trans-denominational (like chaplains) or outside denominations altogether. It may seem a
bold prediction. But it merely suggests a nation where religious leadership will more
closely reflect the population in its pews.

Will Politicians Matter?
Religion will increasingly replace electoral politics as the realm where battles for the
national soul are fought
By PETER BEINART

In the fall of 1995, Louis Farrakhan led the most celebrated African-American march in
Washington since the 1963 March on Washington. A stone's throw from the spot where
Martin Luther King declared, "We've come to our nation's capital to cash a check" for the
"riches of freedom and the security of justice," Farrakhan voiced a new black generation's
claim upon its still largely white government. And it was no claim at all.

"Freedom can't come from staying here and petitioning this great government," Farrakhan
thundered. "Freedom cannot come from no one but the God who can liberate the soul from
the burden of sin."

Last February, Paul Weyrich came to a similar conclusion. In an open letter to his Free
Congress Foundation, Weyrich, perhaps the most influential conservative strategist of the
past two decades, declared his life's work a failure. "Conservatives," he wrote, "have
learned to succeed in politics. That is, we got our people elected. But that did not result in
the adoption of our agenda. The reason, I think, is that politics has failed."

Two radically different leaders. Two calls for political secession. And a glimpse, possibly,
of the 21st century's antipolitical response to the lessons of the end of the 20th.

The 1990s, after all, were a decade when first liberals, then conservatives, scored thrilling
political victories, only to find those victories strikingly irrelevant to society at large. For
12 long years, Democrats watched Ronald Reagan and George Bush hack at the
government safety net they held dear. Finally, in 1992, the party wrenched itself from its
stupor, shook off its dead weight and found a winner. But when the Clintonites showed up
for work, sleeves rolled up and ready to reverse years of trickle-down social policy, they
received some bad news. In the post-cold war world, their new Wall Street buddies
informed them, you couldn't pump government money into the economy and watch it
spring to life because the bond market, punisher of fiscal indiscipline, would force up
interest rates and slam its foot on the brake. Bill Clinton adapted; he cut spending and the
deficit, thus handing over the economic reins to Alan Greenspan. Not a bad strategy, except
that honest liberals must now admit that inequality is greater, the safety net is thinner, and
capitalism is fiercer after two terms of Democratic occupancy of 1600 Pennsylvania
Avenue than it was before. Clinton trumped the Republicans. But market power trumped
government power. And that mattered more.

The Republicans soon discovered the same thing. In 1994 they won the most stunning
congressional victory of the late 20th century. And where they tried to roll back
government, they had some success--they cut back welfare and agricultural subsidies and
abolished the national speed limit. But where they tried to wield government power--to
remoralize a culture they believed was degenerating before their eyes--they hit a wall.
Under G.O.P. congressional control, government sanctions against abortion and
homosexuality have, if anything, grown weaker. And when the G.O.P. tried to rally the
public against a President they believed epitomized all that was wrong in the culture, the
public refused to get on board. Americans, it turned out, were becoming less morally
judgmental, and government could do little about it. Conservatives might influence
Washington, but Washington wasn't influencing the culture.

And so activists of all stripes have started looking for a back door. Twenty-first century
America will surely witness multiple ideological trends and clashes--left, right,
internationalist, populist, nativist. But what will separate it from past eras will be not its
particular factions or conflicts but the terrain on which they play themselves out. And the
primary terrain will not be electoral politics. Battles for the national soul will be won or lost
not in presidential elections or budget battles. They will be won or lost in civil society, and
the passion to shape civil society will come largely from the one force able to inspire men
and women in the way politics once did: religion.

Barely any other industrial democracy has seen belief in government fall as low as it has in
the U.S., and almost no other has seen belief in religion remain so high. The percentage of
Americans who vote and who say they trust government has dropped like a stone over the
past four decades. But the percentage who worship regularly and say they believe in God
remains higher than in almost any nation in Western Europe. According to Gallup, two-
thirds of Americans, more than at any point in the 1970s or '80s, say religion can solve all
or most of today's problems. Some, like the University of Chicago's Robert Fogel, suggest
that we are in the grip of a great awakening, a surge of religious fervor that wells up
cyclically in American history.

One of the reasons religion is up and government is down is that religion is increasingly
doing what politics once did: offering an alternative to the values of the free market.
Americans have a love-hate relationship with capitalism. It brings wealth and liberty, which
they love, as well as materialism and individualism, which they fear. When government
prestige is high, Americans look to it to provide the sense of common purpose that the
market does not. But over the past two decades, many of the social programs meant to
temper market inequality have been judged to be wasteful and counterproductive. And the
money-drenched campaign-finance system often makes politicians seem like just another
commodity, bought and sold by corporations. Religion, by contrast, seems to offer a
principled critique of the narcissism and rootlessness of contemporary America. Polling by
Princeton University sociologist Robert Wuthnow shows that the Americans most bothered
by society's materialism and lack of community are also the ones most likely to attend
church regularly.

And the houses of worship aren't only preaching these new values; they also seem more
effective in carrying them out. Churches get things done because they generate, in the think
tank-speak of the day, social capital. They possess the moral authority to call people to
service on behalf of others, something politicians generally lack the stature even to try.
Americans, Everett Carll Ladd of the Roper Center writes in the Ladd Report, are more
than twice as likely to volunteer as people in Germany or France. And the percentage of
Americans volunteering, unlike participating in government, is going not down but up; it
more than doubled between 1977 and 1995, from 26% to 54%. Andrew Greeley, a Roman
Catholic priest and professor of social science at the University of Chicago, argues that
these high rates stem largely from America's religiosity. Americans who attend religious
services weekly are twice as likely to volunteer their help as those who attend barely at all.
One-third of the people who volunteer for even purely secular causes cite religious
conviction as their motive.

Government, of course, isn't going away, but it is possible to imagine a future America in
which it does not mediate the great national struggles. Already, growing numbers of
conservatives are suggesting that the battle against abortion should be waged culturally, not
politically. George W. Bush and John McCain rarely emphasize new laws to criminalize
abortion. Instead they stress a hearts-and-minds strategy based on moral suasion--
essentially turning the President into a cheerleader for community efforts to offer women
abortion alternatives.

In Blinded by Might, former Moral Majority staff members Cal Thomas and Ed Dobson
ask whether it is better to send "money to an organization vowing to pass legislation that
would restrict the practice [of abortion] or volunteer at a local crisis pregnancy center."
Their implied answer is clearly the latter. And there is mounting evidence that even within
the Christian right, that perspective is winning the day. In the late 1990s, the hyperpolitical
Christian Coalition went into steep decline, while Promise Keepers, an organization that
believes many of the same things but shuns political action as the route to achieve them,
has grown.

On the left, the antiglobalization activists who came of age at the World Trade
Organization conference in Seattle see government as neither the problem nor the solution.
They carried signs saying END CORPORATE RULE. And the masked hoodlums who
turned the city into a battleground didn't trash government offices; they went after
Niketown, Starbucks and the Gap. The labor-environmental coalition that stymied the
WTO wants powerful global organizations that will punish companies that exploit workers
and pollute the countryside. Ask them what they want Congress to do, and you'll get a
puzzled look. For them, Congress is pretty much beside the point.

Establishment liberals acknowledge government's limitations in other ways. After the
Littleton, Colo., tragedy, not only conservatives but even many liberals admitted that the
shootings spoke to a void that political solutions could not fill. Yes, Democrats pushed gun
control, but House minority leader Richard Gephardt also said, "The answer won't be found
in state legislatures or the halls of Congress." In his speech at a Columbine High School
memorial service, Al Gore devoted two sentences to gun control and much of the rest of his
speech to Scripture, which he invoked eight times.

Gore has endorsed government funding of religious social-service programs, joining a
growing consensus that in antipoverty policy, government can no longer lead the charge.
Neither he nor Bush wants government to stop spending money on the poor, but both imply
something almost as radical: that religious organizations know better how to use that
money to change lives. Both candidates call for a partnership in which faith-based groups,
not the government, take the lead in formulating policy. Gore says government should help
"not by dictating solutions from above but by supporting the effective new policies that are
rising up from below." Bush says religious groups should not "be dictated to by
government. We want them involved on their terms, not ours."

Behind these statements is a massive, decades-long decline in government's intellectual and
moral self-confidence. Once upon a time, government service was considered the highest
aspiration of every bright young thing who wanted to use his or her brainpower to make the
world a better place. We may be entering an age when that is no longer the case--when
even high government officials are seen as mere functionaries, following the lead charted
by civic groups. The media are already lavishing attention on a new category of activists--
from antigang crusader the Rev. Eugene Rivers in Boston to Chuck Colson, who runs the
Prison Fellowship in Texas, to philanthropist George Soros--who are offering responses to
entrenched social problems that basically ignore politics. Not all these efforts stem from a
shared ideological base, and many of the divisions that have long showed up in state
legislatures may also increasingly divide churches and communities. But at the very least,
the chronic decriers of American apathy will have to start defining idealism more broadly.
A recent poll sponsored by former Clinton chief of staff Leon Panetta found that only 27%
of college students discuss politics at least three times a week, but almost three-fourths
have volunteered in the past two years. "Students are very tuned in to public issues and
community involvement," said Panetta. "They're just not expressing that interest through
participation in electoral politics."

In late 1965, TIME's cover bore the image of John Maynard Keynes. The British economist
was 19 years dead, but his arguments that government intervention could mitigate
capitalism's savage inequities were partly responsible for the surge of governmental self-
confidence that underlay Lyndon Johnson's Great Society. Early the following spring,
TIME's stark black cover bore the words IS GOD DEAD? The accompanying article spoke
of the "brutal reality that the basic premise of faith--the existence of a personal God who
created the world and sustains it with his love--is now subject to profound attack." It is
difficult to imagine a TIME cover in the year 2025 bearing Keynes' visage or indeed that of
any such prophet of governmental mastery.

But it is not hard to imagine God's making it again. Next time, he's likely to be pictured as
alive and well.

Peter Beinart, a contributor to TIME, is the editor of the New Republic

Will We Ever Log Off?
Eating, playing racquetball, brushing our teeth--surely we won't be doing everything online
in the future? Don't be too sure
By ROBERT WRIGHT

During the past two years, the amount of time the average Internet user spends online each
week has risen from 4.4 hours to 7.6 hours. If that annual growth rate, 31.5%, holds up,
then in 2025 the average Internet user will spend 590 hours online per day!
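
(A side note on the arithmetic: Wright's tongue-in-cheek projection is ordinary compound
growth. Below is a minimal illustrative sketch in Python using only the figures quoted
above; the exact projected number depends on how one rounds and compounds, but any
version of the calculation lands absurdly beyond 24 hours a day, which is the joke.)

    # Compound-growth extrapolation (illustrative sketch only).
    # Figures from the text: 4.4 weekly online hours two years ago, 7.6 now.
    hours_then = 4.4
    hours_now = 7.6

    # Implied annual growth rate over the two-year span
    rate = (hours_now / hours_then) ** 0.5 - 1
    print(f"annual growth rate: {rate:.1%}")  # ~31.4%; the text rounds to 31.5%

    # Carry that rate forward 25 years, to 2025
    weekly_2025 = hours_now * (1 + rate) ** 25
    print(f"projected weekly hours in 2025: {weekly_2025:,.0f}")
    print(f"projected daily hours in 2025: {weekly_2025 / 7:,.0f}")  # far beyond 24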

O.K., so extrapolation has its limits as a predictive tool. Still, you have to wonder. As
cyberspace absorbs more and more of our work, play, shopping and socializing, where will
it all end? What activities will still be off-line in 2025?

A few candidates spring to mind. Brushing your teeth. Eating. Playing tennis. Right? Not
so fast. Even some of these seemingly solid barriers to the Internet's encroachment are
shaky. A quarter-century from now, almost everything will in principle be doable online,
and convenience will often argue for doing it there.

Bear in mind, for starters, that in 2025 the average American will have, as they say in
technical circles, bandwidth out the wazoo. You won't just be able to monitor your child's
day care by webcam (a service already offered by more than 100 day-care centers). You'll
be able to monitor it in high-definition 3-D format, providing valuable perspective during
slo-mo replays of block-throwing incidents.

And this is only the beginning. Just ask Jaron Lanier, who coined the term virtual reality.
Lanier is chief scientist for the "tele-immersion" project, part of the federally subsidized
research program known as Internet2, which explores the upshot of massive bandwidth and
computing power.

The standard virtual-reality experience, you may recall, involves donning a head-mounted
display or special glasses--or, in principle, contact lenses--and thus entering a computer-
generated fantasy world. As you turn your head or walk around, the computer adjusts your
perspective accordingly. Tele-immersion is to videoconferencing as virtual reality is to Pac-
Man. If it works, it will give you the visual experience of being in the same room with a
person who is actually in another city.

So what's the killer app for tele-immersion? "It's not so much a matter of particular
applications," says Lanier. "It will just become part of life. It will be used by teenage girls
to gossip, by business people to cut deals, by doctors to consult." And presumably by
people who want to do long-distance lunch. Of course, there won't be any point in saying
"Pass the squash," but otherwise it will be a normal mealtime conversation. Eating online.

Speaking of squash, playing racquet sports at long distance is in principle simple. If you've
got a squash-court-size space to run around in, you can play with your college buddy,
wherever he or she may be. (And no more annoying collisions with opponent, walls or ball,
since all three will be illusions.) In fact, using standard virtual-reality technology, people
have already played tennis remotely, Lanier says. But each looked to the other like a
cartoon character--an "avatar." Tele-immersion will let you see the agony of defeat on the
face of your vanquished foe. A big advance.

Mushrooming bandwidth and computing power aren't the only things drawing us deeper
into cyberspace. Global-positioning satellites are turning driving a car into an intermittently
online experience. And the high-resolution satellite images that became commercially
available last year will move the earthiest of endeavors into cyberspace. Sitting at a desk
will soon be the fastest way for farmers to inspect their crops for signs of blight.

Obviously, some pastimes lose something when performed online. (No, I'm not going to
talk about sex.) Consider hiking. True, you could don your head-mounted display and get
on your treadmill while a friend did the same in another city. If you wanted a whiff of pine
or cedar, you could crank up the computer-controlled aroma synthesizer that the company
DigiScents has said it will market. Not too tempting, right?

Yet, even though hiking may remain reality-based, it will have its online elements. People
are already finding new hiking buddies over the Internet. Here lies the biggest import of the
expanding online experience. Even if tele-immersion is still crude in 2025, cyberspace will
have reshaped life because it will have kept doing what it has been doing--nourishing
shared enthusiasms. Even before most Americans had heard of e-mail, there were chat
groups with names like alt.fetish.foot and some environmentalists were mobilizing online.
But the more people online, the easier it is to find your own special interest, no matter how
narrow.

And as bandwidth grows, more of these narrow interests--recreational, political, cerebral--
can be pursued online. In 2025, the League of Women Who Find Gilligan More Attractive
Than the Skipper and the Professor can not only form online; it can tele-convene and watch
reruns! More and more, obsessions will be online obsessions.

This is the big downside of the future. Obsessions are fine, but every minute you spend
online--playing chess, talking politics or just shopping--is a minute you're not spending off-
line. And it is off-line, in the real world, where we find a precious social resource, people
we have little in common with. The supermarket checkout lady, the librarian, the shoppers
at the mall--all are handy reminders of the larger community we're part of--multicultural,
socioeconomically diverse yet bound by a common nature.

That's the trouble with cyberspace. It leaves nothing to chance. The Internet, with its antlike
order, is in some ways becoming a Web of gated communities. It could deepen cultural and
socioeconomic rifts even to the point of straining a nation's social fabric.

On the brighter side, it may bridge rifts between nations. Some interest groups, after all, are
transnational. So far these groups have mainly been political--environmental groups,
human-rights groups, labor groups. But transnational bonds will get richer, for two reasons.
First, automated translation is improving. (Go to babelfish.altavista.com to have any
document rendered in several languages--not perfectly but better than was possible 10 years
ago.) Second, autotranslation is merging with video to yield "face translation." Unveiled
last year by the Consortium for Speech Translation Advanced Research, face translation
lets you speak into a camera in English and be seen in Russia speaking Russian. And I
mean speaking Russian. Your face is morphed so that you seem to be pronouncing the
words of the language you don't really speak.

After demonstrating face translation to reporters, the head of the research consortium
admitted that "some of it still looks a little goofy." But 25 years will smooth out not just the
visual kinks but the translation itself. True international friendship, now available mainly to
business big shots, can in principle become a middle-class indulgence. Stamp collecting,
environmental activism, toe fetishes--all kinds of interests can kindle the citizen-to-citizen
amity that makes war politically difficult.

This has all been happening for a long, long time. Telephones made distance irrelevant to
talking, encouraging us to ignore next-door neighbors in favor of longer links. The
invention of writing had parallel effects. Paul's letters to the Corinthians and the Romans
nourished faraway contacts while reinforcing distinctions between Christians and nearby
nonbelievers. The expansion and crystallization of communities is, in a sense, the story of
history. But the story has never moved as fast as it is going to move in the next 25 years.

Robert Wright is an author whose most recent work is Nonzero: The Logic of Human
Destiny

Will We Still Go Out To The Game?
We won't need to, once we can experience every bone-crunching tackle and Mike Tyson
uppercut at home
By MARK LEYNER

Asking a die-hard sports spectator to predict how we will spectate in the future can cause
terrible cognitive dissonance. The sports fan is not oriented toward the future; he is the
retrospective creature par excellence. He travels forward with his eyes glued to the rearview
mirror. His preferred modes of spectating are historical--the highlight reel; the classic NFL
film with its sonorous, Homeric narration; and, most perfectly, the instant replay, which, of
course, instantly historicizes the present.

Obviously, the essential need to spectate will endure into the distant future. We can't talk
sports unless we watch sports. And talk we must. Sports blather will remain the lingua
franca in bars, elevators and doctors' waiting rooms around the world. In 2025, no matter
how far-flung or misbegotten a place he finds himself in, man will always be able to strike
up a lively conversation with the opening gambit "Livingston Bramble, Boom-Boom
Mancini, 1985. That was a fight."

We will continue to watch sports because it is one of the last areas in our life in which we
can experience the unexpected, the improvisational. But how we watch will change
radically. The two primary styles of live spectating--the self-aggrandizing preening of the
ringside celebrity and the self-annulling ecstasy of the anonymous face-painted fan--will
soon be relics of the past, available only in dioramas in sports museums.

Referring to the Golden Age of the Latin American caudillo, Ryszard Kapuscinski wrote
that "stadiums play a double role: in peacetime they are sports venues; in war they turn into
concentration camps." Well, in the future, in the synergistic bliss of the globalized
economy, stadiums and arenas will simply turn into malls and food courts. The live event--
the game itself--will become, at best, a point-of-purchase display. Already, most people
attending a basketball game rarely glance at the live action. They watch the Jumbotron
screens cantilevered above the court or the monitors mounted in the arena's various saloons
and emporiums.

Except for the opportunity to begrudgingly share cheese-drenched nachos with complete
strangers or stand in line and chat with other people who also have to urinate badly, there
will be no valid reason to attend a live sports event as a spectator. All sports, and especially
football, will continue to be better on TV. The only way to sustain interest in actual
attendance at sports events in the future will be to incorporate fans into the action. But
how?

Recent incidents, such as the projectile-throwing tantrum at the Yankees-Red Sox game at
Fenway Park and the abuse of European golfers at the Ryder Cup in Brookline, Mass.,
suggest an answer: legal hooliganism. Teams will sponsor cells of well-trained, remorseless
thugs ready at any moment to storm the field and waylay players. Athletes will be expected
to hone those skills necessary to contend with this exciting new variable.

For the rest of us, though, TV will more than suffice. And stunning technological advances
will revolutionize our viewing pleasure. Computer-enhanced television will enable us to
customize the content of broadcast sports coverage. Let's say, for example, that you only
want to watch players groom themselves and spit. The new personal-p.o.v. digital
technology will allow you to remain focused on any player, on the field or off, who's
picking his nose, adjusting his protective cup or spewing tobacco juice--without the
constant interruption of superfluous play-by-play coverage.

Watching is one thing, but what about having a vicarious sensory and kinesthetic
experience of your favorite sport? Within the next 50 years, neural-input units will become
as standard a feature of your entertainment console as the remote control. With this hairnet-
like apparatus sending complex algorithmic signals into your motor cortex and parietal
lobe, you'll actually feel what it's like to be slashed across the eyes by a high-sticking Tie
Domi. Seated on your couch, you'll writhe in agony from lactic-acid accumulation at the
end of an Ironman Triathlon. And you'll hop around your living room like a maniac as you
actually experience the excruciating pain of Mike Tyson's incisors on your ear.

Biotechnological innovations and gerontological research will also have a profound impact
on how you watch sports. Cloning capabilities, discoveries in the genetics of longevity and
advances in cryonics will enable scientists not only to extend the lives of current players
but also to revivify long-deceased athletes. This will finally eliminate the need for
stupefying talk-radio debates about how Honus Wagner might fare against Pedro Martinez
or whether Serena Williams could have beaten Helen Wills Moody.

The athletic icons of the ensuing century will to a large degree resemble those of today.
Egregious jackasses whose laziness and sense of entitlement have caused them to lose their
starting positions and who then succumb to unlimited sex, drugs and fried foods will
remain ascendant, their posters plastered on the walls of boys' bedrooms across America.
But it's pro wrestlers who will reign supreme, because they are the embodiment of sport as
soap opera. With wrestling's incomparable melding of the trailer park and Valhalla, its
intricate and interlacing narratives, its music, pyrotechnic stagecraft and glorification of
oratory, it is the Gesamtkunstwerk--the total artwork--of the sports and entertainment
world. Professional wrestling will be the single most powerful influence on all other sports
well into the 22nd century.

As more and more televised sport is pumped into our homes, how will our children be
affected? I need only extrapolate from my own experience. Several weeks ago, during her
final soccer game of the season, my six-year-old daughter, while chatting with a friend and
seemingly oblivious to the game, inadvertently allowed a ball to ricochet off her hip into
her own goal. She responded with a vicious throat-slashing gesture directed at her
teammates. She later spit at me, apparently mistaking me for an autograph-seeking fan. And
that night at a local restaurant, she confessed to intentionally throwing the game, having
placed a rather substantial wager on the opposing team.

Unfortunately, even these inchoate stirrings of competitive spirit will fade with maturity.
As William Wordsworth (whose brooding peregrinations of the Lake District constitute
perhaps the original Ironman sport) wrote, "Whither is fled the visionary gleam?/ Where is
it now, the glory and the dream?" Someday even my daughter, or her daughter's daughter,
will mist over at the memory of the androgen-swollen, coach-garroting, endorsement-
besotted free agent ridiculing his teammates after a tough loss. Like today's purists who
long for the bunt, the pick-and-roll and touch tennis, they too will pine for the good old
days.

Mark Leyner is a novelist and screenwriter. His most recent novel is The Tetherballs of
Bougainville

What Will We Do On Saturday Night?
Thanks to the bombardment of entertainment at home, we'll crave live experiences that
stimulate our senses, blow our mind and evoke awe
By JULIE TAYMOR

We thought movies would kill live theater. We thought television would kill movies. Now
we ask whether video/computer/ virtual-reality images streaming into our homes will keep
us on the couch Saturday night.

I doubt it. Entertainment works as a private but also a social event. Movies on the VCR
don't keep people from going to the multiplex, just as boxing on pay-per-view doesn't stop
fans from craving ringside seats. The senses are a huge part of the experience. The smell of
popcorn, the crush of bodies, the communal sense of anticipation and the space itself all
add to the story of the entertainment. Spaces, architectural enclosures, shape our relation to
an event. You can pray at home. But a church, mosque or synagogue gives prayer scale,
grace and a heightened sense of spirituality because of the aura of sacred ritual deemed
appropriate within the religious house. People cling to ritual. It gives structure and comfort
to chaos. It creates a sense of awe.

I need awe in my life. The Internet experience doesn't awe me. The act of sitting at a
computer is so unaesthetic and unsexy--all those cables, that horrid fluorescent screen,
those puny two-dimensional images. Give me a human voice over e-mail. I like the sound
of communicative delivery--the tone of voice, the innuendo. Of course, I appreciate
convenience when I'm using a search engine to find gifts, vacations or pieces of
information. But as a way to spend leisure time--not for me. The closer the World Wide
Web seems to bring us together, the truly farther apart we are. It's simply too safe, too
anonymous and too antiseptic. All those numbers, dots and letters. Some messy inkblots,
please. Who doesn't prefer ripping open a sealed envelope?

When cable television arrived to expand our viewing possibilities, it multiplied the
mindless rubbish you have to wade through to find something worth seeing. There is so
much information running rampant that the object of desire has been thoroughly obscured.
What is the desire that entertainment fills? We want to be touched emotionally, be
viscerally moved, perhaps have our minds challenged, or at best blown. We travel to a
different place when we enter the world of a storyteller. Some call it escape; some call it
experience.

Sensory experience. We live in our bodies. Our heart beats faster when we are scared; our
eyes tear when we are emotionally touched; and we howl, hoot, whistle and clap when
turned on. No amount of technological development will alter our basic instincts. Watch a
film in a large theater, and the experience will be doubly charged, not by the size of the
screen but by the energy of the audience. Coliseums, football stadiums, rock concerts--to be
a part of the action, as opposed to just being a voyeur, is as old as the ritual of performance.
Yet bigger does not necessarily mean better. Off-Broadway theater is doing very well right
now, and the physical intimacy of the live event has a lot to do with its success. Feeling that
your presence affects the event may become more and more important as we look to be
entertained in the 21st century.

I have no doubt that live theater will continue to exist. Its very liveness--its impermanence,
its vulnerability to error, its essential humanness--is its unique draw. Its survival will
depend on the tension-filled interplay between the raw and the cooked--the balance
between high and low technology, primal spirit and cultural concoction. Live theater will
blend with three-dimensional digital projection as that technology develops and becomes
economically feasible. And I look forward to films moving off the flat screen to invade the
dimensions of space. With the onslaught of interactive play, there will no doubt be
cinematic experiences in which the viewer can control the outcome of the story, as now
happens in video games. But interactivity is not the most important thing. If we, as social
animals, are to be lured away from the convenience and comfort of our home-entertainment
center, it will be for a shared physical, sensory experience that yields the unique and evokes
awe.

As we rave about the lifestyle changes that technology will bring, the essential questions of
substance and quality of content never seem to change. It took an artist's vision to imagine
and create the Taj Mahal, the Pyramids, the Duomo in Florence. Technology cannot
imagine. It's just a tool, like a chisel. I look forward to the vision and imagination of the
artists who will harness these transforming media to tell the old tales in a new way.

Julie Taymor is director of the musical The Lion King; the movie Titus; and The Green
Bird, due on Broadway this spring

Who Will Be The Next Elite?
Wasps once ruled the country. Then came a college-educated meritocracy. But to join the
new ruling class, you'll need a hot business
By NICHOLAS LEMANN

Elites are strange creatures. Every society has one--at least one--that members and
nonmembers alike are intensely aware of. But only rarely is an elite a formal entity, with
stated membership criteria and a list of who belongs. Studying elites is thus an inexact
science.

Still, the direction in which the American elite is changing right now seems quite clear. We
are somewhere in the course of the greatest capitalist boom in our history. One result is that
capitalists will make up our country's next elite. The credential you will have to present to
enter that virtual room in which candidates for office are chosen, educational institutions
run, foreign alliances forged and social arrangements set will not be family background or
educational achievement. It will be having started a successful business and made a lot of
money at it.

You can already feel this happening, with the force of a riptide. The self-made American
rich are as celebrated, as respected, even as loved as they have ever been in our history and
maybe the history of any other country. They smile at us from magazine covers and give us
their opinions on television. Their charitable foundations, growing enormously, are taking
government's place as the national laboratory for public projects and social innovation.
Never mind the Microsoft antitrust suit. The literally murderous personal rage against rich
people that was so much a feature of American life at the outset of the 20th century is today
almost nowhere to be found.

The American elite 25 years from now won't charge an admission price exactly; still,
business success will be its way of assuring itself that an applicant has what it takes to
become a member. Those who haven't hit it big as entrepreneurs will somehow seem to
have talents that are merely peripheral. The qualities that the elite respects will be a kind of
aggressive and even ruthless energy and imagination. Superpromising young people will
set themselves on a course to become David Geffen, not Dean Acheson.

When this happens, it will bring us full circle. A century ago, this country had a capitalist
elite personified by such business titans as J.P. Morgan, John D. Rockefeller and Andrew
Carnegie. Then we spent the whole 20th century trying to replace it with other kinds of
elites--two of them, to be specific. Now we're headed right back where we started.

During the first half of the century, the American elite was a distinct, quasi-hereditary
group whose members were all men, all white and almost all Protestant (quite often
Episcopalian). They lived mainly along the Eastern Seaboard. They had gone to Ivy League
colleges, and often, before that, to boarding schools in New England. They belonged to the
same clubs, lived in the same suburbs and vacationed at the same resorts. They dressed,
spoke and looked a certain way. They were of English or Scotch-Irish stock. Exemplified
by Henry Stimson, who served as both Secretary of State and Secretary of War, they were
publicity-averse men who were more powerful than famous. A sociologist who was very
much a born-in member of this class, E. Digby Baltzell, bestowed two resonant names on
its members: white Anglo-Saxon Protestants (Wasps) and the Protestant establishment. In
historic terms, they were the gentlemanly replacements, in the American pilot's cabin, for
the robber barons who emerged during the capitalist boom after the Civil War.

What Will Make Us Laugh?
A comic wonders where the punch lines of the future will come from. And whether we'll
have any time left for the set-ups
By BEN STILLER

Trying to predict what will be funny in 25 years is as hard as trying to figure out what will
be funny five minutes from now. In fact, I would be hard pressed to identify what it is we
all laugh at today. Computer scientists, bioresearchers and gagmeisters are all in agreement
that the face of comedy will change drastically in the next quarter-century. But what will
that face look like? Will it have good skin and aquiline features, or will it be pockmarked
and disfigured? No one knows, not even our most respected comedic minds. I attempted to
get in touch with George Lucas, who last year brought us the highest-grossing comedy
feature of all time, Star Wars: Episode I--The Phantom Menace. But he didn't return my
calls. So here are several speculations, based on a fair amount of fact, as to what we have in
store.

Breeding New Comedians. While there has been much ballyhoo in the past decade over
progress in cinematic special effects with computer-generated imagery (CGI), no one has
been talking, at least publicly, about the incredible breakthroughs in the relatively nascent
field of comedic gene engineering (CGE). Manipulating genes to alter the makeup of a
human's looks and personality has been in the realm of possibility for years. But the
prospect of doing it for comedic effect is just starting to take shape. Scientists are working
to isolate the specific genetic code responsible for what makes us laugh--the "funny-bone
gene," if you will. By breaking down the dna of such comedic greats of the past as W.C.
Fields, researchers are hoping that they can learn what it was in these classic funnymen that
made them funny. While the research has yet to hit pay dirt, an unexpected side benefit has
been the discovery of a new "alcoholism gene."

Once scientists succeed, the possibilities for comedic breeding are unlimited. By scraping
cells from the fingernail of Lucille Ball, say, and from one of Ed Asner's eyebrows, a
geneticist would have the tools necessary to fertilize the embryo of a child with specific
kinds of comedic potential. Though testing so far has only been done on pigs--not a
legitimate gauge, since it is hard to distinguish their laugh from an oink-snort--results are
promising. Some studios and networks are toying with the idea of "development nurseries"
that would venture to create the optimal candidates for sitcom stardom through gene
manipulation.

Serious moral and legal questions are already being raised by these gene experiments. For
example, if a clone of comedian Don Rickles were to grow up and entertain a whole new
generation of audiences by insulting them with the term hockey puck, would the Rickles
estate be entitled to royalties? Or could Ricklebaby claim them as his own, since it was his
natural instinct to use the term? And for that matter, can anyone at all claim ownership of
the term hockey puck, including the National Hockey League?

Fast Is Funny. As we race through the new millennium, one thing is certain: time will be
our greatest commodity. In the past 20 years, we have seen the pace of almost every aspect
of life speed to dizzying proportions, and in the future attention spans will get shorter than
we can imagine. The time it takes to make us laugh will thus be the true test of the most
successful entertainer. Stand-up comedy, nearly extinct in the '90s, will have trouble
surviving at all. The reason? No time to go to a club to hear a live human being tell jokes,
or wait through 45 minutes of Late Night with Conan O'Brien before the stand-up appears.
People looking for a laugh will simply log on and sift through the database of every joke
and routine any comedian has ever told on TV, radio or record. The computers will then
cancel out redundant routines--any piece of material that has been done by more than one
comedian. This will leave Lenny Bruce, George Carlin and Robert Klein as the only
comedians we will ever need to listen to.

Though TV sitcoms already have an incredibly fast-paced, joke-every-3-sec. tempo, in the
future that will seem to have an almost Shoah-like pace. One possibility is that story lines
will be junked altogether to get straight to the laugh--since, as any sitcom producer will tell
you, the same seven plots have merely been recycled endlessly since the beginning of
television. Realizing that likable characters are the key to a TV comedy's success, the
networks will establish new characters in 2- to 3-min. "mini-coms." Then, after viewer
response is gauged via an Internet hookup, those favorites will appear on the air regularly,
in 5- or 6-sec. bursts, much like the network promos we now see. You know the ones: Ross
from Friends dancing in a bubble held by Rachel, then getting "popped" by a giggling
Monica. These short bursts will eventually replace the shows themselves, allowing viewers
at home the chance to watch their favorite comedy character make faces, dance a jig and
convey a warm sense of familiarity, without all the boring stuff in between. This will,
however, also lead to the extinction of the "comedy writer" as we now know the species.
Of course, I believe, as many do, that all these changes will become moot owing to the
coming invasion of Superior Beings from the 12th Quadrant of the Fifth Dimension, known
to us earthers as the Great Nebula in Orion (see my essay "Who Will Destroy Our
Civilization and Lay Waste to Life on Earth as We Know It in the Next Century?" in the
next Visions issue). If the indicators are correct, the mother ship of these Ecstatic Beings
will emit a cloud of radioactive gas that blankets the planet, wiping out all life for a period
of 10,000 years. Then and only then shall comic reconstruction begin.

At that point there will be a new assessment of what is funny. And my bet is there will be a
major rediscovery of the comedian Carrot Top. If there is one thing for sure in comedy, it's
that props always get a laugh.

Ben Stiller directed and starred in the movie Reality Bites and co-starred in There's
Something About Mary

What Will We Wear?
Comfortable, adaptable fabrics will dominate the future
By MICHELE ORECKLIN AND STACY PERMAN
Giorgio Armani. John Bartlett. Randolph Duke. Tommy Hilfiger. Vera Wang

Will Teenagers Disappear?
As kids grow up ever faster, that carefree age known as adolescence may soon be a
memory
By WALTER KIRN
Of all the great postwar inventions--television, rock 'n' roll, the Internet--the greatest and
most influential is, perhaps, the American teenager. Think about it. While the country has
always had adolescents (human beings between the ages of 12 and 18, that is), it was only
in the past 50 or 60 years that it had tens of millions of semi-grownups living in a
developmental buffer zone somewhere between childish innocence and adult experience.

This teenage culture of pop songs, cars and acne ointments, of proms, allowances and
slumber parties is still unknown in less developed countries. And until the reform of child-
labor laws in the 1930s, the spread of suburbia in the 1940s and the rise of targeted youth
marketing in the '50s, it was unknown here as well. Early 20th century adolescents were
farmers, apprentices, students and soldiers--perhaps even wives and husbands--but not
teenagers.

Spawned by a mix of prosperity and politics, teenagers are a modern luxury good. The
question for the new century is, How much longer will teenagers exist, at least in the form
that James Dean made famous? Twenty years, tops, is my guess. Teenagers, as classically
defined, are already dying out, or at least changing into something different. The buffer
zone they once inhabited is being squeezed out of existence for two reasons: children are
growing up faster than ever before, and adults are growing up more slowly.

A few random facts. In the 1800s, social historians tell us, the average girl began to
menstruate at 15; now the average age is 12. According to a recent national survey, 63% of
teens reported using a computer in the 30 days previous to being polled. (For adults 50 and
older, by the way, the figure was a mere 20%.) Not long ago, I, a 37-year-old, suffered a
lapse of Internet access that was repaired by a 16-year-old who charges $50 an hour for his
expert labor and trades stocks over the Web in his spare time. By comparison, when I was
16, I worked in a gas station for pocket change and thought that all stockbrokers lived in
New York City.

An adolescent with his or her own money--real money, not parental charity--is not, in any
meaningful sense, a teenager, but a capitalist early bird out to get the worm. This truth
informs those ads for Internet stockbrokers in which young punks with goatees and
ponytails give investment advice to balding bosses or land private helicopters in their
parents' backyard. Exaggerations? Forty-year-olds wish. Not when silicon billionaires like
Jerry Yang of Yahoo (31 and worth more than $3 billion) have proved that the traditional
interval between a boy's first shave and his first million need not be much of an interval at
all. All over the nation's high-tech landscape, people are retiring within years of taking their
first legal sip of alcohol. Soon they'll be retiring before driving age. This won't be a
problem for them, however, because they'll be able to afford chauffeurs.

The right to be economically unproductive until the day after college graduation--
amendment one to the teenage constitution--will seem incredibly quaint if not downright
crazy in a few years. Fourteen-year-olds in 1950 were not expected to know how to use
metal lathes even if one day they might end up working for General Motors. But nowadays
14 is rather late to get in the cyberharness for a position somewhere down the road at
Oracle. This trend will only continue and even speed up as parents and children alike see
the advantages in mastering change at an early age, when human beings are most adaptable,
instead of in their 20s, when there's a risk that they'll be behind the curve. And it's in the
national interest to encourage this, since one solution to supporting a populace top-heavy
with retirees is putting the young to work as soon as possible.

The next distinction to vanish will be social. One thing that used to make teenagers
teenagers was the postponement of family responsibilities, but these days even 30- and 40-
year-olds are postponing family responsibilities, often permanently. Coming of age is
becoming a lifelong process--it's not just for Holden Caulfield anymore. Teenagerhood as
preparation for life makes no sense when the life being prepared for resembles the one
you've been living all along. Meanwhile, teenagers are discovering that there are medical
ways to escape the angst part of growing up. Why have an existential crisis if you can be on
Prozac? If current trends in psychiatry keep up, there won't be a drug or a diagnosis that
kids and their parents won't be able to share. But a teenager on Prozac is not a teenager; he's
a depressive studying for a driver's license. (And when Viagra starts being prescribed for
shy 16-year-olds, we can be sure that teenagerhood has passed.) Personally, I foresee a time
when people will enter "recovery" at 13 and pen best-selling memoirs on their struggles
before they've even taken their SATs. Expect a crop of precocious old souls filling the talk
shows with painful reminiscences of their abrupt descent into addiction after, in the months
preceding their Bar Mitzvah, their Internet start-up lost half its market cap because of an
unforeseen jump in interest rates.

The teenage years, as formerly defined, were a time for people to get away with things, to
make mistakes and not really have to pay for them. The legal system has changed all that
by trying kids as adults for serious crimes. And teenagers have contributed to this shift by
committing so many of them--or at least so many horrific ones. In the future, however, even
minor infractions once considered normal high jinks will draw severe reactions from the
authorities. In 1999, brawling at a football game could get a kid expelled from school for
years; in 2025, a spitball may get him life. As the penitentiary replaces detention, expect a
generation of Goody-Two-Shoes too frightened to chew gum. Indeed, statistics tell us that
youthful crime is decreasing already, and it's no wonder.

What will a world without teenagers look like? Like the adult world does now. Adolescents
will feel the same pressures as their parents do: to succeed financially, to maintain their
health, to stay on society's good side. What's more, adolescents will field these pressures
using their elders' traditional techniques: spending money, taking medication, contracting
for professional advice. The carefree years will become the prudent years, and the prudent
years will continue throughout life. That's how it used to be, in the 19th century, and that's
how it will be again in the 21st. The age of James Dean, the Ford Mustang and making out
will seem, in retrospect, like what it was: a summer vacation from larger human history.

Walter Kirn's most recent book is Thumbsucker: A Novel.

Will We Have Any Privacy Left?
When our whole life is turned into data, spies will have access to our past, our present--and
even our future. But don't despair
By DAVID GELERNTER

Our bad dreams about the haunted house called "Privacy, Circa 2025" are likely to focus on
those all-seeing orbiting spy cameras that are always peering at us. They already exist,
capable of observing from miles overhead that your lawn could use mowing and your dog
needs a shampoo. By 2025, they will be really good. Audio spy technology has been
advancing fast too. But the biggest threat to privacy doesn't even exist yet. By 2025 it will
be in full bloom.

Today we are engulfed by the signal-carrying waves of broadcast radio and TV. Come
2025, we will be engulfed by a "cybersphere" in which billions of "information structures"
will drift (invisible but real, like radio waves) bearing the words, sounds and pictures on
which our lives depend. That's because the electronic world will have achieved some
coherence by 2025. Instead of phone, computer and TV networks side by side, one network
will do it all. TVs and phones and computers will all be variations on one theme. Their
function will be to tune in these information structures in the sense that a radio tunes in
station WXYZ.

These cyberstructures will come in many shapes and sizes, but one type, the "cyberstream,"
is likely to be more important than any other. A cyberstream is an electronic chronicle of
your daily life, in which records accumulate like baroque pearls on an ever lengthening
string--each arriving phone call and e-mail message, each bill and bank statement, each
Web bookmark, birthday photo, Rolodex card and calendar entry.

An irresistible convenience: your whole life in one place. Tune in anywhere, using any
computer, phone or TV. Just put your card in the slot, pass a security test (supply your
password and something like a fingerprint) and you're in. You see your electronic life
onscreen or hear a description over the phone, starting with the latest news and working
back.
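
Stripped of the metaphor, a cyberstream is an append-only, time-ordered log with newest-first
readback. Here is a minimal sketch of the idea in Python; the class and field names are my
own illustrative assumptions, not anything the essay specifies:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Record:
        """One pearl on the string: a call, an e-mail, a bill, a photo."""
        kind: str        # e.g. "email", "bank statement", "photo"
        summary: str
        timestamp: datetime = field(default_factory=datetime.now)

    class Cyberstream:
        """An ever lengthening, append-only chronicle of one life."""
        def __init__(self) -> None:
            self._records: List[Record] = []

        def append(self, record: Record) -> None:
            # Records only accumulate; nothing is ever edited in place.
            self._records.append(record)

        def tune_in(self, limit: int = 10) -> List[Record]:
            # Read back newest first: the latest news, working backward.
            return list(reversed(self._records))[:limit]

The essential property is that records are only ever added, never rewritten, which is what
makes the stream both the convenience described above and the target described below.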

By feeding all this information into the food processor of statistical analysis, your faithful
software servants will be able to make smooth, creamy, startlingly accurate guesses about
your plans for the near future. They will find patterns in your life that you didn't know were
there. They will respond correctly to terse spoken commands ("Call Juliet," "Buy food,"
"Print the news") because they will know exactly who Juliet is, what food you need and
what news stories you want to read.
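
The terse-command trick is, at bottom, pattern matching against that same stream. A toy
sketch of how "Call Juliet" might be resolved, assuming a call history like the one a
cyberstream would hold (the most-frequently-called-match rule is my own illustrative choice,
not the essay's):

    from collections import Counter
    from typing import List, Optional

    def resolve_contact(name: str, call_log: List[str]) -> Optional[str]:
        """Guess who a spoken name means: the matching contact called most often."""
        matches = [c for c in call_log if name.lower() in c.lower()]
        if not matches:
            return None
        return Counter(matches).most_common(1)[0][0]

    calls = ["Juliet Chen", "Juliet Chen", "Juliet Marsh", "Bob Ray"]
    print(resolve_contact("Juliet", calls))   # prints: Juliet Chen

The software "knows exactly who Juliet is" only because the history does; the guess is
statistics, not understanding.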

So it's 2025, and the living is easy. You glide forward on a magic carpet woven out of
detailed data and statistical analyses. But should anyone seize access to your electronic life
story, "invasion of privacy" will take on a whole new meaning. The thief will have stolen
not only your past and present but also a reliable guide to your future.

Such information structures are just beginning to emerge. They are likely to be far safer and
more private than anything we have ever put on paper. Nonetheless, by 2025, a large
proportion of the world's valuable private information will be stored on computers that are
connected to a global network, and if a thief can connect his computer to that same global
network, he will have--in principle--an electronic route from his machine to yours.

The route will be electronically guarded and nearly impassable, unless the intended target
has given out information he should not have--as people do. And unfortunately, electronic
thievery and invasion of privacy are jackpots that keep growing. They are just the crimes
for shameless, cowardly, clever crooks. No need to risk life or limb; just tiptoe over wires
and through keyholes.

So what else is new? Technology always threatens privacy. Those threats usually come to
nothing. They have been defeated before, and will be in the future, by a force that is far
more powerful than technology--not Congress, the law or the press, not bureaucrats or
federal judges, but morality.

You could, after all, get a pair of high-power binoculars and start spying on your neighbor
tomorrow morning. But you won't. Not because you can't, not because it's illegal, not
because you're not interested; to be human is to be a busybody. You won't do it because it is
beneath you. Because you know it is wrong, and you would be ashamed of yourself if you
did it.

Laws are bad weapons in the fight to protect privacy. Once we invoke the law, the bad deed
has ordinarily been done, and society has lost. Attempting to restrain technological progress
is another bad strategy--it's a fool's game and won't work. The best method for protecting
privacy in 2025 is the same method we have always used: teaching our children to tell right
from wrong, making it plain that we count on them to do what is right.

Outrageously naive advice for a high-tech future? Think again. It has been field-tested, and
it works. All over the country, people leave valuable private papers in unlocked mailboxes
along the street. Astonishing! Suburban mail is a vastly easier mark than anything in
cyberspace will ever be. But our mailboxes are largely safe because we are largely honest.
Some technology pundits have been startled by people's willingness to confide their credit-
card numbers to websites. But for years we have been reciting those numbers over the
phone. And we have all sorts of other long-standing habits (paying our taxes, for example)
that reflect our confidence in the honor of our fellow citizens.

As we venture further into the deep waters of technology, temptations increase. When it
comes to temptation resistance, we are admittedly not at the top of our game in early 2000.
This is an age of moral confusion. We love to talk about law; we hate morality talk. But we
will snap out of this dive, as we have snapped out of others before. Among our
characteristic American obsessions, two have been prominent since 1776--our
technological inventiveness and our stubborn desire to know and do what is right.

And by 2025, the issue will be framed differently. We are obsessed with privacy because
we have temporarily mislaid a more important word: dignity. We talk about our "right to
privacy," but we don't really mean it. This broken-down, ramshackle idea falls apart the
moment you blow on it. Privacy to commit murder? To beat a wife or child? To abuse an
animal? To counterfeit money? To be insane, refuse treatment and suffer never-endingly?
Privacy is no absolute right; it is a nice little luxury when we can get it. Dignity is a
necessity to fight for. And come 2025, life will be better: not because of the technology
revolution but because of a moral rebirth that is equally inevitable and far more important.

David Gelernter is chief scientist at Mirror Worlds Technologies, art critic of the Weekly
Standard and a professor at Yale

What Will Our Skyline Look Like?
In a future that's already dawning, the boxes of modern architecture are being blown to
pieces
By RICHARD LACAYO

Dream about the future, and you dream in buildings. In the places where you first learn to
think about tomorrow--in H.G. Wells, at the World's Fair, in The Jetsons--tomorrow is first
of all a skyline fresh out of the cellophane. Personal whirly copters dart among glinting
steel towers, everything looks like the Seattle Space Needle, and nothing is crummy or
made out of wood.

One glance at the present will tell you the future is never all that futuristic. That's not a
glinting steel anything over there. It's one more plasterboard mattress outlet. And the sheer
volume of things already built means the world to come will consist largely of the world
that is already here.

All the same, architecture may be on the verge of the greatest style shift since the end of
World War II, when the glass and steel towers of bare-bones Modernism shouldered
everything else to the margins. A very different future is visible today in a small outburst of
buildings that repudiate the very notion of upright walls. Bellied-out sides, canted planes,
solid walls that look like fluttering strips of ribbon, blade-edged triangular outcroppings
and brassy materials that shimmer like something Cher would wear to the Grammys--
what's under way here is a rethinking of space and form as complete as any since the spirals
of the Baroque overtook the spare symmetries of the Renaissance. If this is the future, then
the right-angled Modernist box is about to be lowered into its grave.

For now, the most famous product of this new impulse is the Guggenheim Museum in
Bilbao, Spain. Frank Gehry's cascading structure, which has tripled tourism to funky
Bilbao, has been a watershed, an instant icon that was featured in the latest James Bond
movie and has mayors everywhere clamoring for their own "Bilbao." As a consequence,
any number of designs that once seemed too radical to imagine, much less assemble, are
being readied for construction. One of them is Daniel Libeskind's tumbling addition to the
Victoria and Albert Museum in London, which looks like a cross between a building and an
avalanche. Another is the Contemporary Arts Center in Cincinnati by the Iranian-born,
diagonally inclined British architect Zaha Hadid. Several of the most spectacular works in
progress are by Gehry, like his annex to the Corcoran Gallery of Art in Washington, a
whiplashing addition to a city where, when it comes to new architecture, you can usually
hear a pin drop.

Why this now? Maybe it is just a recognition that fractured forms are the ones best suited to
the times. This explanation appeals to a lot of architects, who are prone anyway to a kind of
Hegelian metaphysics, a sense that they are not just designing department stores and offices
but rendering the spirit of the age in steel and stone. In recent years, some of the more
theoretically inclined among them, such as Peter Eisenman and Steven Holl, have been
connecting their designs to things like French literary analysis, the kind that presumes to
dismantle the falsehoods of language, or the "chaos theory" of physics, with its universe
built on bubbling disorder. To put it mildly, these are notions that conventional buildings,
with their aura of stability and authority, don't do much to express. But a building that looks
as if it's in the grip of a spastic seizure? Well, that's getting there.

Sheer boredom with the slabs of Modernism and disenchantment with the mild remedies of
Postmodernism are other reasons that something very different is happening. Exploded
boxes appeal right away to that place in the brain that hates not just Modernism but also
modernity, the part that cheers when some atrocious housing project gets brought down
with dynamite. As it happens, Modernism itself flourished in a world that had been blown
away by the physical and psychic clearance project of World War II. Postwar corporations
wanted triumphant office towers that owed nothing to the rubble of the old world. And in
the work of Le Corbusier and Mies van der Rohe, Modernism's great pioneers, glass and
steel proved capable of being worked into something not just new but superb, beautiful
enough to bear comparison with the ornate and voluptuous past. But in less capable hands,
or on smaller budgets, the just-so geometry of Mies--architecture for Everyman!--became
architecture for the nobodies, the dreary cartons that everybody works in, drives past and
ignores.

The seriously tilted constructions that are erupting now required a technological leap,
namely the emergence of more sophisticated versions of computer-assisted design, CAD,
which make it easier to conceive and build the most complex irregular forms. When the
billowing Sydney Opera House was under construction four decades ago, it went millions
of dollars over budget because of the difficulties in translating Joern Utzon's arching sail
forms into a structure that would actually stand up. These days you could practically dash
the thing off on your Palm Pilot. Adventurous architects are working with the same
software used by aircraft engineers and special-effects designers, who also think about
things like how curved and folding surfaces respond to real pressures. For their radically
fashioned New York Presbyterian Church in Sunnyside, the Los Angeles-based architects
Greg Lynn, Doug Garofalo and Michael McInturf worked with the same program that was
used to create the raptors for Jurassic Park.

Even if a building is not sliced and shredded at the design stage, there is another way in
which the box is being overwhelmed. The standard office tower can be covered these days
with electronic signage, huge liquid-crystal display screens, Jumbotron TVs and zip strips
of news and stock quotes. Modernism forbade the use of carvings and other decorative
attachments on building surfaces. That created something like an optical vacuum that
advertising is finally moving to fill. In such places as New York City's Times Square,
billboards and electric signage are becoming the ornamentation of the information age.

The architects Robert Venturi and Denise Scott Brown say this is merely a return to the
venerable practice of covering buildings with words and pictures. Egyptian temples were a
billboard of hieroglyphics. Byzantine churches were spread over with huge gilded mosaics,
the Jumbotrons of the 10th century. "Times Square is the place of now and the place of the
future," says Venturi. "An environment that is sparkling, decorative and information-
giving." They predict that such public buildings as schools and courthouses will
increasingly borrow the features of commercial buildings, like neon signage and shop-
window fronts.

If the future brings a world of clamorous buildings, it is possible that the cool virtues of
Modernism will be rediscovered. "For me, Modernism is a way of understanding the most
direct way of making things," says Richard Meier, who designed the Getty Center in Los
Angeles. "Technology gives us the ability for a continual exploration of that question 'How
do we come down to the essence of things?'" What worries some people is that a few
decades of promiscuous creativity will clog the world with second-rate imitation Gehrys,
strenuously entertaining streetscapes and Times Squares in every town square. "The
Guggenheim in Bilbao is a unique object," says Robert Stern, dean of the Yale School of
Architecture. "Three Bilbaos in a row would be a fun house." So if the fun gets all too
tiring, maybe those simple boxes will come back again after all.

What Will Our Houses Look Like?
As space grows more precious and technology more dazzling, home, sweet home will be
changing
By WES JONES; BERNARD TSCHUMI

THE SUBURBS BY WES JONES. For better or worse, the suburbs are what America
came up with when presented with the chance to manufacture its ideal geography. Come
2025, people will still live in houses within eyeball distance of their neighbors, but the
cyberrevolution and the environmental movement promise to alter the landscape. While
computers promote a dramatic trend toward decentralization, allowing people to spread out
and live or work anywhere, the green consciousness will urge a contrasting densification, to
conserve open space. The reconciliation of these opposing trends will define the suburb of
the future. As the vastness of cyberspace increasingly satisfies the craving for more space,
the house and yard will shrink to a more supportable size; when people can find their
privacy in the virtual world, those wasteful "setbacks" between neighbors will become less
important. Cyberspace will, at the same time, become the arena for conspicuous
consumption, relieving the home and front lawn of that responsibility. Meanwhile, the
physical neighborhood will be freed for parks and other community gestures.

The cyberrevolution will have an effect inside the home as well. It will challenge the
cohesiveness of the family as children become self-sufficient citizens of the virtual world.
The home will continuously readjust itself to the family's needs. As cyberspace becomes
the kind of space that matters, the primitive territorial need for fixed rooms will fade, and
the house will be divided among specific activities rather than simply among family
members. So much for arguments among the kids over who gets the biggest bedroom.

Wes Jones is the head of Jones, Partners: Architecture, a technology-oriented design firm in
Los Angeles

THE EXTERIOR

1. Powering Up
Homes will tap energy from efficient neighborhood generators, thermal-mass cooling
ponds and solar collectors embedded in the streets

2. Energy Source
Machinery that runs the house will be powered in part by the homeowner's manual
exercise. Pedal away, and watch the dishwasher and lawn mower go!

3. Safety Features
Wheelchairs of the future will be able to climb stairs, and guard rails will be replaced with
airbags to prevent falls

4. Rooms with a View
While houses will have a Miesian simplicity, their windows will become display surfaces,
able to show vistas of deserts, jungles or urban skylines


THE INTERIOR

1. Multipurpose Space
Instead of individual rooms dedicated to specific activities such as dining or recreation, one
large room will be converted as needed, with the help of movable activity pods

2. The Family Room
As a counterpoint to the individual appliance zones, the open family room will be a
nonvirtual agora for those who crave an old-fashioned encounter with a relative

3. Work, Work, Work
Most of our work will be done not in the office but in virtual workstations at home. With a
computer screen and interface goggles, you'll be able to work anywhere in the house

4. Burgers to Go
Few people will cook. Instead their food will be delivered by the home-meal industry. The
small kitchen will mainly be where food is opened, zapped and readied for the table

5. Waste Disposal
Household refuse will be processed by a fully enclosed waste-management system, with
unregenerated bits composted and spread on the overhead lawn during mowing

6. Bedtime Stories
Bedrooms will be smaller, with a space-saving, foldout Murphy bed. Since cyberspace will
be the arena for personal display, we will have fewer personal effects to store in the closet

7. Game Boys and More
Instead of a garage crammed with speedboats, surfboards and assorted play gear, the home
will have various fold-out recreational simulators and gaming pods

8. Look! Up in the Sky!
Ceilings will be studded with videoconferencing devices; medical and security scanners;
heating, ventilation and air-conditioning sensors; and environmental regulators.

The Family Car
The ELOV--electric low-occupancy vehicle--will be the pollution-free mode of
transportation. Its size will allow more cars on the highway, and its light weight will reduce
accidents, since cars will simply bounce off one another. When extra space is needed,
another car can be attached to its side

The Neighborhood
The suburban house will combine with its neighbors in more space-efficient patterns, with
private courtyards that leave vistas wide and open from above.

THE PENTHOUSE BY BERNARD TSCHUMI. My glass house in the sky satisfies the
timeless desire for infinite space in the dense
metropolis. It is a reaction against the dream of suburbia; rather than abandoning the city
and re-creating an artificial urban experience outside it, the house addresses the city by
existing both within and above it. With minimal adjustments to the roofs of existing
buildings, these penthouses could be located almost anywhere, on high-rise or low-rise
buildings. They would act as illuminated beacons, celebrating domesticity and everyday
life by elevating them to the status of ephemeral monuments. The houses would also make
great observation points for the spectacle of the city below.

The architecture of the house plays on an opposition between its industrial-looking
rectangular envelope and the lush curvature inside, with velvet or silk curtains, rounded and
polished composite surfaces and translucent glass. The services and circulation of the house
are contained in an undulating sandwich wall that also helps define the living spaces. The
wall expands and folds back on itself, enclosing spaces for privacy and opening to allow
rooms to flow continuously into one another. It provides a subconscious for the house,
adjusting to the specific desires of the users. Sliding partitions and curtains can also create
room separations, allowing for more privacy.

Bathrooms are contained in a large "wet" wall that extends through the house. This wall's
surface, made of a composite of glass and resin, changes between transparency and opacity.
Running parallel to it is a "digital" wall that can be used as a projection screen, conveying
blown-up views of the occupants' everyday life. Should the residents of the house prefer
more conventional privacy, digital messages could be projected on this wall, ranging from
advertising slogans to exhibitions of the owners' video-art collection. Tired of sitting in the
living room? Don't get up; just change the picture on the wall.

Bernard Tschumi is dean of Columbia University's Graduate School of Architecture and
head of Bernard Tschumi Architects


III. Our Work and Globalization
1. Will service still be important?
In the future we will leave the house only occasionally and will therefore expect little of
other human beings. Efficiency will always be rewarded, but we will learn to live without
the human touch.

2. What will our offices look like?
Much like the living room at home. With advances in connectivity we will work from home
and gather only to socialize and exchange opinions via videoconference.

3. How will advertising reach us?
Virtually by reading our minds. Much will be discovered in our Internet records, so we will
receive only ads tailored to us.

4. What will wars be like?
They will be mini-wars: a single enemy, with the rest of the world allied on the side of
peace.

5. Will socialism return?
NO, but world organizations will pressure countries to head off an ultra-capitalism.

6. Will the Dow Jones index reach 50,000 points?
The experts disagree: some say technology has radically changed the rules of the economy,
while others insist that the rules of the traditional economy will prevail.

7. Which company will top the Fortune list?
It will be a retailer, perhaps Wal-Mart.

8. What will the jobs of the future be?
The 21st century worker will be a multi-talented freelancer with allies around the world,
living on his reputation rather than on his relationship with his bosses.

9. What will the most attractive jobs be?
The innovators and the repairers: genetic engineers and programmers, people working in
artificial intelligence, as well as plumbers, electricians, mechanics and the like.

10. What will replace the technological economy?
The biological economy.

11. Will there be any hope for the poor?
Everything suggests YES, as nations advance toward democracy.

12. Will everyone have the atomic bomb?
Because such programs are extremely costly, nations will be less and less interested in
developing nuclear weapons.

13. Will China be the number one power?
If the Chinese keep up their economic pace, they will displace the United States as the
most powerful nation.

14. What will peace mean in the Middle East?
Stability will come to the Middle East only when its inhabitants begin to live in peace and
democracy.

15. Will each of us become a nation?
That seems to be the trend: nations will keep fragmenting into ever smaller ones, as if they
were market segments, with homogeneous, mutually exclusive characteristics.

16. Will we live together or in isolation?
The challenge will be to keep our cultural roots in an ever more homogeneous world.


Will Service Still Stink?
That depends on you. Because help is fading away, being replaced by the customer. At
least we'll be civil to ourselves
By MISS MANNERS

How could customer service get any worse than it is now? People just barge into restaurants
and stores, all full of themselves, and expect you to drop everything to tend to their
demands, never mind if you happen to be in the middle of a conversation, or talking on the
telephone, or listening to the radio or having a bad day. And if they don't get what they
want, or they claim there's something wrong with whatever they got before, they start
carrying on as if it were your fault or there were anything you could do about it even if you
wanted to.

Wait--customer service, you mean as in providing service to customers? Why should
anyone expect that? Service people and their customers are becoming increasingly
indignant with one another, and they're punishing one another's rudeness by becoming
increasingly ruder back. What reason is there to suppose that the downward spiral in
behavior will reverse direction?

Analysis of specific incidents--why the airline passenger mooned the flight attendant, why
the clerk brained the customer--yields remarkably similar results. Everyone agrees that
these things should not happen. Only the question of who bears the responsibility is subject
to debate, to wit: "You started it!" "Did not!" "Did so!" "Did not!"

Daily examples serve to point out why we should probably be grateful for rudeness. The
customer who is ignored by people hired to help is at least able to leave in one piece. And
the one who storms out angrily at least leaves the help breathing.

But changed conditions could bring changes in behavior. That is the basis on which those
who deplore rudeness often voice the unseemly hope that some future economy will turn a
few shades darker. When things are bad enough, according to this antidote for antisocial
service, desperate, insecure workers will turn to politeness.

That misfortune inspires good manners is not borne out by history, although one might be
able to make a case for the reverse (not that good manners bring on hardship but that good
fortune inspires bad manners). There is even evidence that rudeness may be good for
business. Inspired by the success of restaurants and nightclubs at which customers are
treated so rudely that they offer the staff bribes to ward off insult, many industries have
learned to sell ordinary service as a luxury item. Such marketing concepts as first-class
travel, executive floors in hotels, and personal shoppers in stores are based on the idea that
decent treatment and efficient service are not what the ordinary customer should expect.

Nevertheless, there have also been changes in our way of doing business that could
eventually work to improve customer service. From the modest beginning signaled by the
promise of "easy to assemble," we have trained people to pump their own gas and be
responsible for installing and fixing their own telephones. We now have "assisted self-
service," or hold for the hot line, in which someone is eventually made available to tell the
customer to look around the store to find what he wants, wait around all day for home
service or call around until a representative can be made available to tell him whom else to
call. Every day people stop waiting in line and start waiting online. At this rate, all business
activity will cease around 2015. This leaves only one option: people will soon have to
provide whatever services they require.

Help yourself, to everything.

Self-service improves service by getting rid of it. Total self-service would have the
etiquette advantages of eliminating opportunities to be rude and removing incentive for
issuing blame. More important, it could make customer service into a noble profession
serving the unfortunate. Not only will people who can't fix their own computer problems be
even more pitiful than they are now, but the ability to help them, which will require
experience in dealing with people who are actually present in the flesh, will be even rarer.

It is the general attitude toward those who serve that explains why customer-service
problems, far from being a recent phenomenon, are inherent in an egalitarian society. If
everyone is equal, why should one person wait on another? Never mind that the answer
ought to be that all of us should do that in one way or another and that public service is our
highest calling. (Public service that comes with private dining rooms is another matter.)

True, everyone knows there was a time when customer service was better than it is now,
but that was not because people used to be humbler. It was because consultants weren't
available to "fix" service problems. Consultants made people who had been reasonably
content working in customer service turn disgruntled with their lot, and deeply annoyed
people who had been reasonably content with the level of service they had been receiving.

The first such fix was "The customer is always right," innocently intended to convey the
idea that efforts should be made to please even unreasonable customers. But whoever
invented it does not seem to have considered the question of why anyone would be happy
doing a job in which he or she was condemned to be always in the wrong. Nor does this
bring out the best in the customers whom it was intended to attract with the promise of
being judged right, notwithstanding any evidence to the contrary.

The next was "friendly service." This was another well-meant concept, the idea being to
assure the customer that the service would not be surly. Those who launched it didn't
realize that the opposite of surly, in a business context, is not friendly but cheeky. What
was really meant was cheerfulness, not the license of friendship to unburden oneself, which
sometimes includes not having to keep up a cheerful front. And aside from the question of
why those friendly banks ran away leaving their friends in the care of machines, there is the
small matter of people wanting to choose their own friends.

Related to this is "personal service," meant to tailor the service to the requirements of the
individual. But as this was applied in situations in which altering the service for every
customer was not possible, it was pretty much confined to reading customers' first names
from their credit cards and reciting them back to them. Lately, however, it has come to
mean keeping track of their purchases as a way of enticing them to keep right on
purchasing.

Improved customer service will probably have to wait a decade for the realization that what
the customer wants is fairness, efficiency and privacy. Meanwhile, customers who want to
be righteous in sharing personal experiences with friendly strangers can turn to chat groups
that deal with complaints about customer service.

What Will Our Offices Look Like?
As technology becomes part of the furniture, cramped cubicles will give way to flexible
work spaces that adapt to your job--or mood
By DANIEL EISENBERG

What in the hell was Bob Propst thinking? It's a reasonable question to ask as you survey
the cramped confines of your standard-issue corporate cubicle and bathe in the dull glow of
overhead fluorescent lights, all the while trying to ignore the sound of your colleagues'
clipping their fingernails or blathering away on a speakerphone.

Propst is the guy who, three decades ago, dreamed up these modular boxes for furniture
giant Herman Miller. As he envisioned it, the system of wafer-thin, movable walls would
be a revolutionary tool that would break down rigid hierarchies, spur creativity and free
work spaces from the shackles of uniformity. Unfortunately, he didn't count on the square-
foot police. Those Fortune 500 facility managers arrested his innovation and reformed it
into an impersonal, white-collar assembly line, one that can make a genuine gearhead long
for the good, old days of windowless offices and rotary phones.

With that record of innovation, workers are a bit skeptical about the office of the future.
What will the geniuses in real estate come up with in the next quarter-century? If current
trends are any indication, hide. Consider "hoteling," the latest workplace experiment, which
treats employees as though they were visiting nomads who are assigned a phone and
portable desk by a concierge. Or perhaps the "head cubicle," as imagined by Dilbert creator
Scott Adams, a square helmet that will let CEOs "stack us up like firewood in a warehouse
on the outskirts of town, where rents are low."

Millions of telecommuters, of course, don't intend to wait for such an outcome. They have
already set up quarters wherever they set down their laptops. "Today's office is an aging
concept, 150 years old, that people have been hanging on to," argues Stevan Alburty, who
runs WorkVirtual, an office-consulting shop. It's only a matter of time, telecommuting true
believers claim, before city skyscrapers and suburban office parks are abandoned
altogether, left as archaeological curiosities for future generations.

Well, don't start your dig just yet. PCs may be great for solitary pursuits, composing
Powerpoint presentations or writing. But as long as co-workers need to brainstorm, bat
around ideas and just plain gossip, they will always return to the water cooler, choosing a
little face-to-face time over e-mail and the Web. Says Christine Albertini, vice president of
advanced concepts at office-furniture maker Steelcase: "The basic nature of work is social."

Clueless corporations, which have typically approached the office as a storage site for
people and paper, are only just starting to think outside the cubicle, imagining work spaces
that foster interaction, not isolation. By 2025, though, the standard-issue, gloomy maze of
hallways and bullpens of today may well be replaced--once they have been fully
depreciated, that is--by a wide range of office setups that, just like the new economy, stress
customization over mass appeal. In this newfangled, dynamic working environment,
employees should be able to personalize their work spaces and constantly reconfigure their
surroundings to suit the changing needs of business.

"Think of the buildings as stage sets, where you can play out any technological or
organizational scheme," muses Volker Hartkopf, professor of architecture at Carnegie
Mellon University.

So what might this workers' paradise look and feel like? Well, for starters, technology will
be "invisible but unavoidable," as Bob Arko of industrial designer IDEO puts it. The
tangled cables that snake through every office, for instance, should disappear, replaced by
wireless systems that zap voice, data and video through the air. Smart materials could make
any surface or gadget feel like wood one day and metal the next. Intelligent chairs might
conform perfectly to your posture, giving you a much needed back rub in the process.
Embedded systems and biometric, body-sensing technology will enable every piece of
hardware, from cell phones and PDAs to PCs, to know exactly who you are and where, as
well as to communicate with every other piece.

"You'll walk into the building like you own the place," says Mark Smith, manager of
appliance platforms at H-P Labs. When you arrive at work, you could simply stroll through
a secure, smart door and listen as your desktop virtual assistant reads aloud your schedule
for the day. The temperature and lighting will adjust automatically to your preferences.
Though we probably won't attain the mythical paperless office, there will likely be less of
the messy stuff lying about, thanks to high-tech, rewritable parchment. And forget about
typing: sophisticated voice recognition will let you tell your PC what to do (though all that
yakking could just as easily make you hoarse).
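
Mechanically, that arrival scenario is a lookup of stored preferences keyed to an
authenticated identity. A minimal sketch, with invented badge IDs, preference fields and
print stubs standing in for the real building systems:

    def set_temperature(celsius: float) -> None:
        print(f"thermostat -> {celsius} C")      # stub for the climate system

    def set_lighting(lux: int) -> None:
        print(f"lights -> {lux} lux")            # stub for the lighting system

    def read_aloud(text: str) -> None:
        print(f"assistant: {text}")              # stub for the desktop assistant

    # Hypothetical preference store; the badge ID and fields are assumptions.
    PREFERENCES = {
        "badge-0042": {"name": "M. Smith", "temp_c": 21.5, "lux": 400},
    }

    def on_badge_in(badge_id: str) -> None:
        prefs = PREFERENCES.get(badge_id)
        if prefs is None:
            return                               # unknown badge: the door stays shut
        set_temperature(prefs["temp_c"])         # adjust climate to the occupant
        set_lighting(prefs["lux"])
        read_aloud(f"Good morning, {prefs['name']}. Here is your schedule.")

    on_badge_in("badge-0042")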

The harsh right angles and rigid grid layout so despised by hapless cubicle-ites are also
likely to vanish. In their place, workers might find themselves in a tentlike structure with a
retractable roof, pitched right in the middle of a vast, open commons area. Screens
stretching from poles could shift from transparent to opaque, depending on your mood and
need for privacy. Don't worry about the noise from your next-door neighbor; acoustics
technology can block that out. And don't fret about fighting for a windowed office either;
with walls of flat-screen monitors raining down images and data from all directions, you
will be able to enjoy any number of stunning virtual views from your cockpit. To chat with
a co-worker a continent away, just call him or her on a lifelike, 3-D video-conferencing
system. If you need to get busy on a project with a few of your colleagues, simply fold up
your movable workstation and roll over to them. You won't have to knock. "We'll blur the
line between furniture and technology," says Rick Duffy, director of the knowledge-
resource group at Herman Miller. "Instead of building walls of metal and wood, what if
they inflated with air or water?"

We won't hold our breath for that one. Just as important as personal space, though, will be
group space. Rather than a couple of conference rooms decked out with imposing
mahogany tables, picture multiple areas for groups to convene and collaborate--from
indoor gardens, playgrounds and cafes to what designers term contemplative caves. Even
the lowly office kitchenette might be wired by 2025. Say you're having a spirited debate
with a colleague about a pitch to a prospective client just as you're grabbing a cup of joe.
By 2025, according to John Seely Brown, director of Xerox's Palo Alto Research Center, you
should be able to expand the conversation right there on digital whiteboards that line the
walls and then have your ideas instantly e-mailed to your computer.

The boss, mind you, will probably be clued in to your little chat as well. Privacy, as we all
know well, is rapidly eroding in the workplace, and the situation only stands to get worse.
From reading employees' e-mail to tracking their Web surfing, more corporations are
keeping a close eye on their human capital. In another quarter-century, we will probably be
forced to carry badges that let our superiors know where we are at all times, from the
bathroom to the vending machines. Then again, crafty folks who want to spend the day at
the movies might just fashion counterfeit badges and have colleagues pass them around to
throw security off the trail. "It could be the greatest boon to goofing off ever," says Adams.
After all, as Bob Propst learned when he tried to build the flexible office more than 30
years ago, workplace innovations don't always work out as planned.

How Will Advertisers Reach Us?
In every way, and in every place possible, says one of America's great ad creators. But
here's his soft sell: you may even like it
By JAY CHIAT

It's Super Bowl LIV in 2020. Record-setting numbers of viewers are tuned in to watch the
game, but not on television and not over the Internet. Instead they are using handheld
broadband devices that allow them to project the transmission onto any flat surface. And in
2020, just as today, viewers are interested in the game, but they're even more interested in
the advertising.

The commercials, of course, are greatsurprisingly better than they are now. Directors
make sure the commercials are moving, exciting, entertaining; research and planning make
sure they are relevant; technicians make sure the effects are breathtaking.

It's not the commercials that are the most interesting part, though: the really important
advertising is hiding in plain sight on the field. The Microsoft Mustangs are playing the
GM Generals at Cisco Stadium in a town called Ciscoville--formerly known as
Philadelphia. Corporations will pay big money for the right to digitize logos onto the T
shirts of the fans in the stands. Logos of sponsors won't be painted on stadium signs or on
the field anymore. Thanks to a trend that is already happening, they'll be digitally
embedded in the image on your screen. The logos you see will depend on your personal
interests and profile, and they'll be different from the ones seen by your next-door
neighbors.
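
In rendering terms, those embedded logos amount to a per-viewer substitution step: the
broadcast carries placeholder regions, and the sponsor drawn into them is chosen from the
viewer's profile. A hedged sketch of the selection step only (the interest categories and
sponsor-to-interest mapping are invented for illustration):

    from typing import List

    # Hypothetical mapping from tracked interests to the logo composited
    # into the placeholder regions of this viewer's frame.
    LOGO_BY_INTEREST = {
        "cars": "GM",
        "software": "Microsoft",
        "networking": "Cisco",
    }
    DEFAULT_LOGO = "Cisco"

    def logo_for_viewer(interests: List[str]) -> str:
        """Each household sees a different logo embedded in the same image."""
        for interest in interests:
            if interest in LOGO_BY_INTEREST:
                return LOGO_BY_INTEREST[interest]
        return DEFAULT_LOGO

    print(logo_for_viewer(["cooking", "cars"]))   # prints: GM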

Advertising will change profoundly over the next couple of decades, although there's a
good chance you won't notice the difference, since the most meaningful changes won't be
visible to the casual observer. It's the changes that are happening underground that will
count, and they're the ones we should be aware of. Advertising in the future will be
surgically, stealthily and eerily targeted, and disturbingly omnipresent.

Technology, naturally, will be the engine. User-tracking software that records your TV- and
Internet-viewing habits in minute detail--and crosses it with your purchasing history--will
allow the advertiser to know that you have children, that you eat meat, that your native
language is Spanish and that your dishwasher is however many years old. That way you
will be shown commercials for minivans, cheeseburgers and replacement dishwashers, all
in Spanish, and not for roadsters, tofu and replacement refrigerators, in English. (In fact,
this technology already exists.) Refined with data that track what kinds of online ads you
tend to click on--funny, sentimental, fact laden--every commercial will hit home.
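
Mechanically, this is a join of a tracked profile against ad metadata. A toy sketch using
the essay's own examples; the profile fields and the condition-per-ad scheme are my own
illustrative assumptions:

    from typing import Callable, List, Tuple

    # Toy ad inventory; each ad carries a condition over the tracked profile.
    ADS: List[Tuple[str, Callable[[dict], bool]]] = [
        ("minivan (Spanish)", lambda p: p["has_children"] and p["language"] == "es"),
        ("cheeseburger", lambda p: p["eats_meat"]),
        ("replacement dishwasher", lambda p: p["dishwasher_age_years"] >= 8),
        ("roadster", lambda p: not p["has_children"]),
        ("tofu", lambda p: not p["eats_meat"]),
    ]

    profile = {"has_children": True, "eats_meat": True,
               "language": "es", "dishwasher_age_years": 9}

    def select_ads(profile: dict) -> List[str]:
        """Show only the commercials whose conditions this viewer satisfies."""
        return [name for name, matches in ADS if matches(profile)]

    print(select_ads(profile))
    # prints: ['minivan (Spanish)', 'cheeseburger', 'replacement dishwasher']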

Say what you will, that's a nifty trick. In the future people won't be bothered with
advertising messages irrelevant to them. They'll tend to like advertising better because it's
so carefully tailored to their tastes. It will begin to feel less like an intrusion. This works for
the advertiser too because fewer dollars will be wasted. While it's a little dispiriting to think
we can be so predictably manipulated, maybe that's a fair price to pay to avoid the pollution
of messages you don't care about.

Nevertheless, it seems clear that the advertising outlets that exist today--TV and radio
commercials, print ads, billboards and taxi tops--will not be plentiful enough to
accommodate all the commercial messages that are agitating to get out. Advertising will
therefore necessarily slip beyond the boundaries of the 30-second commercial and the full-
page ad and migrate to the rest of the world, including entertainment, journalism and art.

You can glimpse the future now. Product placement in movies is an obvious instance of
where advertising has slipped outside its traditional container into entertainment. MTV--an
entertainment medium designed expressly to sell records--is another classic example.
Every time a rapper mentions a brand of anything in a song, advertising slips into art. If you
have a Harley-Davidson tattoo, you're there already. If you wear a T shirt with a logo on it,
you're also there but with less pain. Eventually, every surface that can display a message
will be appropriated for advertising.

A backlash is inevitable. Perhaps people will pay a premium to live in "advertising-free
zones," just as, perhaps, they will be willing to pay a premium to live in cell phone-free
zones.

The Internet will accelerate the phenomenon. The browser page and the LCD screen on
your cell phone or your PalmPilot are still contested territories, allowing new relationships
among the different kinds of content that appear there. The Pampers website provides
parenting information and advice--and, presumably, not the kind that the Pampers people
wouldn't want you to see. In a less obvious kind of relationship, marketing execs can enter
chat rooms under assumed names and praise their company's product, service or stock:
that's just advertising masquerading as conversation. And more directly, consumers are
excited about an emerging technology that allows them to click and buy as they consume
journalism or entertainment--say, to buy Frasier's couch as they watch the show. That's not
entertainment with advertising: that's entertainment as advertising.

People get very nervous when they see the line blurring between advertising and other
forms of content; they think advertising is some kind of infection that pollutes the purity of
art, ruins the objectivity of editorial and distracts from the pleasure of entertainment. As
usual, however, people are nervous about the wrong thing. Consumers are smart and
perfectly aware when they're being sold; surely parents who go to the Pampers site are
happy to find worthwhile information there and are capable of distinguishing between a
commercial message and an editorial one. Art and journalism, until they became
pretentious in the late 20th century, always relied on direct subsidy from private sources.
Don't think for a minute that commercial interests didn't enter into it.

The only genuinely disturbing aspect of the ubiquity of advertising--the real reason to get
nervous--is that it has begun to supplant what was formerly civic and public. There's no
Candlestick Park anymore, just 3Com Park, and now there's a PacBell Park to match. The
venerable Boston Garden was replaced not too long ago by the Fleet Center: a city erased,
its role played by a bank. A little town in the Pacific Northwest just renamed itself after a
dotcom company in return for a generous donation. I won't mention the name here, since I
figure advertising should be paid for. That's when advertising has gone too far: when it's
become something we are, rather than something we see.

How Will We Fight?
War won't get any less lethal in the next century, says the general who led the fight in
Kosovo. And our battles will become even more complex
By GENERAL WESLEY CLARK

Throughout history, the world's military have prepared for the next war guided by how the
last war was fought. In the face of exploding technological advances in weaponry and
communications, shifting international political and economic power, and the rise of
challenges such as terrorism and international crime, is the last war a true guide or a risky
diversion?

As NATO aircraft lit the sky over Yugoslavia at the start of Operation Allied Force last
year, NATO's political leaders were issuing press announcements stating that "NATO is not
at war with Yugoslavia." Tell that to the pilots who flew night and day against the missiles
and antiaircraft fire over Serbia and who saw the effects of the ethnic cleansing on the
ground!

Operation Allied Force was a campaign extending over 78 days and involving more than
900 aircraft, hundreds of cruise missiles, four aircraft carriers and more than a dozen other
surface ships and submarines. Their mission was to use air power to halt or diminish a
systematic campaign of ethnic cleansing being carried out by more than 50,000 Serb
military, police and paramilitary against 1 1/2 million virtually defenseless ethnic
Albanians. More than 250 fixed targets were attacked, including airfields, communications,
fuel depots, and military and police headquarters. More than 1,000 strikes were conducted
against enemy forces in Kosovo.

But it was not officially a war. Many peacetime legal and political restrictions remained in
effect. Governments strictly limited the pace of the action, the types of weapons that could
be used and the targets that could be struck. The targets attacked were often in the midst of
civilians, some of whom were the very people we were trying to help. We looked at each
target very carefully before we struck, and we used precision weapons (laser-, GPS- or TV-
guided bombs and missiles) as often as possible to limit risks to innocent civilians.

We reduced the risks to our own pilots by using high-technology aircraft, including the
Stealth bomber, and unmanned aerial vehicles, along with other means to frustrate the
enemy's air defenses. Video-teleconferencing and virtual-intelligence centers created by
using a secret, high-capacity Internet helped keep up with the detailed top-down guidance
and changing politics directed by an alliance of 19 sovereign states, all as the world's
media looked on. After 78 days, the opposing leader gave in to NATO's demands, without a
single NATO ground soldier having to fight his way into Serbia.

Is this the future of warfare? Not necessarily. But many features of this conflict will
reappear. Operation Allied Force was one step in the continuing evolution of conflict. A
look at the pattern of this evolution shows what the future may have in store.

The pattern really began with Napoleon and forces unleashed during the French
Revolution. By mobilizing the full resources of France, Napoleon created a new approach
to warfare. Young people were drafted, centralized administrative structures were created,
and arms production was expanded and standardized. In this new approach the stakes were
high. Defeat meant the loss of the state and its territories, as a number of European
monarchs discovered.

But in warfare there is a response for almost any development. Other European states
adopted the French military model and, working together, defeated Napoleon. The forces of
industrial-age war had been unleashed. As noted by Carl von Clausewitz, the foremost
military observer of the era, war in theory knows no limits and thus tends naturally to the
extreme application of violence.

Clausewitz got it right. Using the full powers of government, nation-states applied
scientific advances to increase the destructive powers of their forces. With improved
organization and weaponry, 20th century wars killed tens of millions of combatants and
civilians. And the march of science and technology continues. World War II forces look
pale in comparison to the tanks, armed helicopters, automatic cannon, aircraft and
precision-strike capabilities available today, to say nothing of chemical, biological and
nuclear weapons.

As the destructive potential of conflict has grown, the political efforts to prevent and
restrict it have intensified. Especially in Europe and among the major nuclear powers, a
variety of legal and political measures have been established, ranging from arms limitations
to special procedures for dialogue. These efforts have reinforced the strategic deterrence
underwritten by the major powers' nuclear weaponry.

And the same technologies that have made war so lethal also operate to prevent or restrict it
by shrinking the distance and differences between peoples and nations. The "global village"
creates opportunities for greater mutual understanding and an identity as a larger
community, and reduces the acceptance of violence and aggression. In practice this means
that war is increasingly constrained by law and the public's assessment of just causes and
acceptable costs.

Operation Allied Force was this sort of campaign, undertaken as a last resort in pursuit of
human rights and regional stability. NATO acted reluctantly against a government
perpetrating a great wrong against its own people. It was a modern-day European conflict,
reflecting, at least on the allied side, all the inhibitions of two centuries of effort to limit
warfare as well as a growing spirit of international community. As a French pilot remarked
during the operation, "We don't want to bomb these bridges over the Danube or hurt the
Serb people. They are our European brothers. And who will have to pay to rebuild the
bridges?"

The tight political constraints of Operation Allied Force and the preoccupation with
avoiding casualties and risks were derived from its purpose (to support human rights and
promote regional stability) and the fact that NATO member nations' survival interests
were not at stake.

In this era of globalization there may be other such conflicts ahead. These will be minimal-
risk campaigns, emphasizing aerospace power or ships at sea to threaten precision strikes
from long range, with small, stealthy unmanned vehicles to collect information and deliver
firepower, and they will be controlled by distant leaders using virtual command
technologies. Even better, if we have the capability, will be cyberwar to scramble an
enemy's military command or disrupt electricity systems without bloodshed.

But potential adversaries are carefully studying the lessons of Allied Force, and they will
no doubt reduce their vulnerabilities while seeking some advantage over us. Expect
improved air defenses and better air cover, camouflage and deception: more military
assets hidden in hotels and schoolhouses or buried deep for protection. And these states will
be seeking their own deterrent in the form of longer-range missiles with chemical or
biological warheads as well as terrorist and cyberattack capabilities. Pentagon strategists
call that kind of warfare asymmetric, which means that even small nations with the right
weapons and technologies will be able to pose very real security threats to big powers such
as the U.S.

In these circumstances, or when the vital interests of Western nations are at stake, the
restraints of Operation Allied Force may not apply. Despite their aversion to risks and
casualties, most Western nations will, if necessary, be politically capable of waging intense,
highly destructive warfare. Expect nations to use most of the weapons at their disposal,
including ground forces, and to take greater risks.

There may be deep helicopter-borne assaults, fierce night attacks and sweeping armored
maneuvers, all supported by highly lethal area weapons and long-range precision-kill
capabilities. Political constraints will be relaxed, and control of the actions will be
increasingly delegated to on-scene commanders. Unfortunately, significantly greater
casualty rates can be expected among both military and civilian populations.

There will be other requirements for ground troops too, such as the lengthy and difficult
peace-support missions in the Balkans. These may not seem like war, but the troops are
armed and ready. They prefer to use presence, maneuver and intimidation to accomplish
their purpose rather than active combat. But they can act, and they sometimes must defend
themselves.

And technology will give opportunities to the so-called non-state actors, the terrorists and
international organized-crime cartels. Equipped with highly sophisticated weaponry and
communications, they will be dangerous, small-scale adversaries for modern military
forces. This will be a long-term struggle in the shadows, marked occasionally by long-
range missile strikes or pre-emptive actions by elite special forces.

In the future we will seldom fight alone; we must be able to operate with our allies. Land
and naval forces will be needed as well as aerospace power, and all must be able to work
jointly. We will still require a nuclear deterrent. Our armed forces must have the latest in
technology and be agile enough to use it to achieve their assigned objectives within the
directed political constraints. But, above all, we will still need talented, resourceful and
courageous men and women to fight and direct our military actions.

Will Socialism Make a Comeback?
Khrushchev's furious claim that socialism would "bury" the West now looks like a joke.
Yet the rumbling of antiglobalists shows that egalitarian sentiment isn't dead, just diffused
By Francis Fukuyama

If socialism signifies a political and economic system in which the government controls a
large part of the economy and redistributes wealth to produce social equality, then I think it
is safe to say the likelihood of its making a comeback anytime in the next generation is
close to zero. But the egalitarian political impulse to constrain the power of the wealthy in
the interests of the weak and marginal remains strong and is already making a comeback.
There are good reasons for thinking this impulse will not lead to new radical groups'
achieving political power and implementing a coherent political agenda. In the process of
trying to influence the course of events, though, the global left may invent an entirely
new form of governance that will act as a strong brake on multinational corporations and
the governments that serve their interests.

Let's begin with the reasons why the economic system we called socialism back in the 20th
century is unlikely ever to return. Today it's a cliché to say that socialism didn't work, that it
produced a society in which, as the Soviets used to joke, they pretend to pay us and we
pretend to work. In fact, socialism did work at one period in history: during the 1930s, and
again in the '50s and '60s, socialist economies like that of the U.S.S.R. grew faster than
their capitalist counterparts. But they stopped working sometime during the 1970s and '80s,
just as Western capitalist societies were beginning to enter what we now call the
information age.

There is one basic explanation for this. As the libertarian economist Friedrich von Hayek
once pointed out, the bulk of information generated in any economy is local in nature. If
this local information has to be processed through a centralized hierarchy (whether a
government ministry or even an overly large corporate bureaucracy), it will inevitably be
delayed, distorted and manipulated in ways that would not happen in a more decentralized
economic-decision-making system. The U.S.S.R. used to have an office called the State
Committee on Prices, where a few hundred bureaucrats would sit around setting every price
in the Soviet economy. Imagine how well the U.S. economy would work if every price for
every product had to be determined in Washington, in an economy in which a single
Boeing 777 airliner can have as many as 3 million separate parts, each with its own price!

As an information economy becomes more complex, more technology intensive and
demanding of ever higher levels of skill, it is no surprise that decentralized decision
making (what we otherwise call a market economy) takes over from central planning. But
there is another factor at work as well: globalization, along with the information-technology
revolution that underpins it. A country that decides to opt for a heavy-handed, government-
controlled economy will find itself falling further and further behind countries that are
economically freer. Formerly, it was possible for socialist countries to close themselves off
from the rest of the world, content that they had achieved social justice even if their
economies appeared to be stagnating. But with more information, your citizens simply
know too much about the living standards, culture and alternative approaches of other
societies. Since the world is not likely to get less complex and technological in the future,
there is no reason to think that top-down, command-and-control methods are going to work
any better than in the past.

But the impulse toward social equality has not disappeared. Those who may have been
tempted to believe it has disappeared in our Everyman-is-a-stockholder age received a jolt
at the Seattle meeting of the World Trade Organization late last year, and at the World
Bank-IMF meetings in Washington in April. The left may have gone into momentary
hibernation after the fall of the Berlin Wall, but it never disappeared, and it is now re-
energized by an enemy called globalization.

There is plenty about our present globalized economic system that should trouble not just
aging radicals but ordinary people as well. A financial panic starting in distant money
centers can cause you, through no fault of your own, to lose your job, as happened to
millions of people during the Asian financial crisis of 1997. Modern capitalists can move
their money in and out of different countries around the world at the speed of a mouse
click. Democratic countries find that their options for political choice (whether in the
realm of social policy, economic regulation or culture) are curtailed by the increased
mobility of financial capital and information. Do you want to extend your social safety net
a bit further? The faceless bond market will zap your country's interest rates. Do you want
to prevent your airwaves from being taken over by Howard Stern or Baywatch? Can't do it,
because the world of information is inherently borderless. Do you want to pass a law to
protect endangered species in your own country? A group of faceless bureaucrats in the
WTO may declare it a barrier to trade. And all this is true in boom times like the present;
think of how people will regard global capitalism during the next economic downturn!

So the sources of grievance against the capitalist world order are still there and increasingly
powerful. The question is, What form will the backlash against globalization take?

It is clear that socialism cannot be rebuilt in a single country. Workers pushing too hard for
higher wages in Michigan will simply see their jobs disappear to Guadalajara or Penang.
Only if all workers around the world were unionized, pushing simultaneously for a global
rise in wages, would companies be unable to play off one group of workers against another.
Karl Marx's exhortation "Workers of the world, unite!" has never seemed more apt.

In theory, then, what the left needs today is a Fourth International uniting the poor and
dispossessed around the world in an organization that would be as global as the
multinational corporations and financial institutions they face. This Fourth International
could push for powerful new institutions to constrain global capitalism. One analogy is the
Progressive Era in the early 20th century, when labor unions began to mobilize and the U.S.
government developed regulatory powers to catch up with the reach of such powerful
corporations as Ford and Standard Oil.

The shortest route to quasi-world government based on socialist principles is for the left to
take over the WTO and use it to promote labor rights and the environment rather than free
trade. But the left in the developed world finds opposition to this project from poor
countries themselves. The WTO is a rather weak organization as it is, dependent upon
consensus among its members, and the effort to use it to promote political causes may mark
its demise.

Beyond the WTO, it is hard to see how the left will agree on, much less create, new
political institutions on a global scale, given the huge differences in interests and culture
separating the various groups involved. The coalition represented in Seattle and
Washington is very fragile and internally divided: the AFL-CIO will turn on dolphins or
sea turtles the moment one of these creatures threatens the job of a unionized worker. While
American unions pay lip service to the interests of workers in China, they actually feel
themselves in direct competition with the Chinese for the same low-skill jobs. The inability
to organize at an international level leads an important part of the left down the road toward
protectionism and the safeguarding of American wages and the environment through
actions like opposition to the North American Free Trade Agreement and to China's entry
into the WTO.

So where will the socialist impulse lead? Perhaps if it cannot create formal instruments of
power, it may invent an entirely new form of governance that might be called government
by NGO, or nongovernmental organization (contradictory as this may sound). In the
recent past, the giant multinational Royal Dutch Shell was forced to back down from
important projects in Nigeria and the North Sea as a result of pressure from environmental
groups like Greenpeace. NGOs, which are loose affiliations of people based on special
interests such as environmentalism, have shown that even if they cannot create institutions
that anyone would label socialist, they do have the power to constrain companies and
governments from taking actions that harm the interests of the poor and the environment.
There is a huge variety and density of such third-sector groups in the world today,
benefiting from the same inexpensive information technologies as global corporations.

Government-by-NGO is a long way from anything we recognize as socialism. But the
world has changed, and the requirements for effective political action are different today
than they were in the 20th century. So while classical socialism may never make a
comeback, the impulse underlying it is in the process of leading the world to unfamiliar
forms of interaction between left and right. In this respect, Seattle and Washington may be
harbingers of things to come.

Will the Dow Ever Hit 50,000?
It's totally logical, say economist Kevin Hassett and journalist James Glassman, who argue
in Dow 36,000 that stocks are both safe and undervalued. Not so fast, says Yale economist
Robert Shiller, whose Irrational Exuberance says the market is headed for decades of
trouble. The two sides had it out in a Time debate.

Glassman Stocks have returned an annual average of more than 11% for the past 70 years.
And, in fact, really for the past 200 years. I think that's enormously important for people to
understand. It is a tremendous return compared with the alternative, bonds.

We say emphatically that stocks are undervalued today and that people could hold them and
feel comfortable. You can use different assumptions and get a 55,000 Dow, but we are very
comfortable with 36,000. And, in fact, most people don't have enough stocks in their
portfolio.

You say exactly the opposite. You say people ought to dump the stocks that are in their
portfolio because we are headed for a terrible crash, which is something you have been
saying now for four years.

Shiller I didn't say "terrible crash." That is one plausible scenario, but other plausible
scenarios are that [the Dow] will go up for a while and then just kind of hover for many
years and give a low return. My book uses a 10-year horizon, which is viewed as the really
long term. Ten to 20 years, it looks like we will have poor runs, maybe in the negative for
20 years; there is a good chance of that. In periods of high P/Es [price-to-earnings ratios, a
measure of how expensive stocks are], stocks have done poorly subsequently. Your
argument is that people have learned something and that historical data are no longer
relevant. But that is what the historical data say.

Hassett There is no question that if you run through history, Bob is right: over some
periods following peaks in the P/E, there were some bad times. My problem with that
exercise is that if you took anything, any metric of how the market is doing, and calculated
the average over time, then, of course, when you are above it, you go down, and when
you are below it, you go up.

Shiller Not necessarily, not with the P/E ratio. You are suggesting there is something
spurious, some fallacy in that. There isn't a fallacy here. I look at past historical periods
when we had similar levels, and you know, what comes to mind is 1929. We have had a tripling of the
stock market to a record high level in the past five years, and there is only one other time
when that has happened, which was '24 to '29. So history doesn't encourage me to think
people have suddenly learned something.

Glassman I just think people should know that in 1990 the Dow was at 2,600. In 1995 it
[finished] at 5,100. Now, if you are Bob, you would say, "Whoa, we are about to have
another crash like 1929." What has it done since the end of 1995? We are now, as we sit
here, above 11,000 [the Dow closed at 10,609 last week], so I don't think it is predictive to
say simply that if the Dow triples, it is going to crash. In the past 18 years, the Dow has
risen by a factor of 14. What we're saying is that something profound is going on.

Shiller We agree on that.

Glassman O.K., something is changing. What is changing is that people are at last
beginning to act in a rational way. They are bidding up the prices of stocks because, my
gosh, stocks in the long term are no riskier than bonds ... They have learned something
about stocks, which I think Bob would admit and we certainly believe is true. For all these
years, they have acted in a kind of irrational way. Now they have learned something, which
happens to be true, which is that in the long term, stocks are not all that volatile, and they
return more than bonds.

Shiller People believe stocks are safe. But I don't think this represents learning. It suggests
this is what we call a speculative bubble. When stock prices go up (and they have been,
consistently, for a while), people get a feeling that they must go up. So to me it is very clear
what has been happening in recent years. It is not like people have just taken a course in
economics and were impressed with the data. That is not what happened. We have seen so
many cases in history of speculative bubbles. And we should learn now to recognize their
elements. The essential element is that there is a "new-era theory," which becomes
promulgated more if there are also stock-market price increases.

Hassett No. People have learned the right strategy. They have learned to buy and hold for
the long term.

Shiller See, [your] book title, if you look at it, it says, The New Strategy for Profiting from
the Coming Rise in the Stock Market. You are selling books by telling people the risk
premium is going to go to zero in three to five years. And you present no evidence for that.

Hassett The equity risk premium has declined. Let's define this, by the way. Equity risk
premium is the extra return that investors demand because they think stocks are riskier than
the benchmark [30-year Treasury] bond. O.K., it has declined roughly from 7% to 3%. [A
decline drives up prices.] Alan Greenspan says, "The question is, Is this decline temporary
or is it permanent?" We offer a third alternative that we think is reasonable, which
is, Will it continue [to decline] to what we believe to be its resting place, which
is around zero? If stocks and bonds are equally risky over the long term, there should be no
equity risk premium. It should be close to zero.
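
To see why a falling risk premium mechanically lifts prices, here is a sketch using a constant-growth perpetuity, the Gordon growth model; this is an illustration of the mechanism only, not math from either book, and all of the numbers in it are assumed:

    # Price a $1 cash flow growing 5% a year as the assumed equity risk
    # premium falls from 7% to 3% to zero over a 5.5% benchmark bond yield.
    cash_flow, growth, bond_yield = 1.0, 0.05, 0.055
    for premium in (0.07, 0.03, 0.0):
        required_return = bond_yield + premium
        price = cash_flow / (required_return - growth)
        print(f"premium {premium:.0%}: fair price about ${price:,.0f}")

The same $1 cash flow worth about $13 at a 7% premium is worth about $29 at 3% and about $200 at zero; that is the arithmetic engine behind a Dow 36,000-style revaluation.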

Shiller It is not reasonable that the risk premium should be zero. You can buy inflation-
indexed bonds at 4% [risk free]. That's why I am a big advocate of index bonds. It is one of
my campaigns.

Hassett I took a huge loss on my index bonds. I bought them like three or four years ago,
and the rate was down like 12%.

Shiller That's short term.

Hassett But what you are saying is that those index bonds are not risky if you hold them for
30 years. Well, that's true for stocks.

Shiller But consider one 20-year period, '29 to '49: [the return] was just about
zero. If you look at a broad indication of the world and history, it has been at times very
risky, and it certainly has the potential to be risky in the next century, which is the future.
Not the past.

Glassman We say stocks will become fully valued at roughly 36,000, and then, as Kevin
says, returns will trail off after that, when they are truly fully valued. So I think that is
important for people to remember. If we are wrong about the lump sum up front, then
maybe it will simply continue the way it has been over the past 70 years.
At any rate, 11% is a lot better return than 5.5%.

Who Will Top the Fortune 500?
New entrants can't be discounted, but look for a big retailer to be top dog for years

In a business world so turbulent it feels like one long spin on the Humunga Cowabunga at
the water park, an important fact has remained virtually unchanged for decades: who's
biggest. Not which company is this quarter's most valuable or most profitable, but who's
biggest, the one with more good-old-fashioned sales than any other. Nice to know some
things never change, isn't it? Well, it's all about to change.

The most famous thing we do at Fortune is publish the definitive list of America's largest
companies, the Fortune 500. And since the directory's beginning in 1955, the No. 1
company has almost always been the same: General Motors. Sure, a couple of times in the
late 1970s and early '80s, Exxon floated to the top when oil prices spiked. Oil and Exxon
receded; GM motored on. America's biggest is also the world's biggest: GM leads our
Global 500 too.

Despite GM's amazing record, we can be quite confident that the company will not lead the
Fortune 500 we publish next April. Barring calamity or another jump in oil prices, the new
champ will be Wal-Mart, already the world's largest retailer by a mile. Like some cyborg
athlete, Wal-Mart moves at a pace that would kill most of its competitors and somehow
keeps it up while growing bulkier.

Now let's think really big. In the next 10 or 20 years, what company could become so
mammoth we'd look back and chuckle at the notion that we actually once considered GM
or Wal-Mart a sizable outfit? It's possible to imagine a few candidates, but you have to
crank your imagination surprisingly hard. Here's why: as the world's largest company, Wal-
Mart this fiscal year will take in close to $200 billion in revenue. Think for a minute about
how much money that is. If you sold something expensive, like cars averaging $20,000,
you'd have to sell 10 million a year every year to reach it. If you could somehow sell
something to every man, woman and child on earth, that something would have to cost $33,
which is more than many of them could afford. And they'd have to buy it again every year.
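
Those comparisons are straightforward division, and they check out; a minimal sketch using the round figures above (about $200 billion in revenue, a world population of roughly 6 billion in 2000):

    # The scale arithmetic behind the examples above.
    revenue = 200e9             # about $200 billion in annual sales
    cars = revenue / 20_000     # cars at $20,000 apiece: 10,000,000 a year
    per_person = revenue / 6e9  # about $33 for every person on earth
    print(f"{cars:,.0f} cars a year, or ${per_person:.0f} per person")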

Surely tomorrow's giants will come from the sectors that are revolutionizing business, no?
They well may. But remember the stupendous scale we're talking about: combining IBM,
Microsoft, Intel and Cisco, for example, wouldn't even come close to hitting our $200
billion mark. That fact points up a hard truth about corporate size. Infotech (or
telecommunications or entertainment) may well be the world's largest industry in coming
decades, but that doesn't mean it will harbor the world's largest company.

The greatest wild card, the sector with the vastest potential and murkiest future, is biotech.
Developments that will cure cancer and extend human life beyond age 150 will arrive in
this century. If one enterprise were to commercialize these developments in some
proprietary way, then it's easy to imagine that firm's becoming the world's largest by far.
But these are matters of life and death, so it's just as easy to imagine political pressures
preventing biotech from spawning the globe's biggest company.

I hate to sound unimaginative, but you know who's easy to picture as the world's largest
business 10 years from now, maybe even 20? Wal-Mart. It's been growing around 20% a
year, and while extrapolation is always hazardous, if you're at $200 billion a year, growing
20% annually, or even 10%, you are extremely hard to catch. Wal-Mart is expanding
aggressively around the world, as it must. Most important, it owns by far the most
advanced back-end infotech system (for managing inventory, logistics, working capital,
customer data) in retailing. Most people wouldn't suspect it, but Wal-Mart is one of the
world's most advanced e-companies.
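
Extrapolation is hazardous, as noted, but the compounding at issue looks like this (a sketch assuming steady growth rates; it is arithmetic, not a forecast):

    # Project a $200 billion base forward ten years at 20% and at 10% growth.
    base = 200  # $ billions
    for rate in (0.20, 0.10):
        projected = base * (1 + rate) ** 10
        print(f"at {rate:.0%} a year: about ${projected:,.0f} billion in ten years")

Even the slower path lands above $500 billion in a decade; the 20% path lands above $1.2 trillion.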

Business will only get more insanely turbulent. But after this year, we won't be worrying
about who's biggest for a while.

What Will We Do For Work?
Drastic change is afoot. You'll have to be flexible and upgradable, but you may actually
enjoy what you're doing

I believe that 90% of white-collar jobs in the U.S. will be either destroyed or altered beyond
recognition in the next 10 to 15 years. That's a catastrophic prediction, given that 90% of us
are engaged in white-collar work of one sort or another. Even most manufacturing jobs
these days are connected to such white-collar services as finance, human resources and
engineering.

I talked to an old London dockhand some time back. He allowed as how in 1970 it took 108
guys about five days to unload a timber ship. Then came containerization. The comparable
task today takes eight folks one day. That is, a 98.5% reduction in man-days, from 540 total
to just eight.
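
The dockhand's arithmetic, spelled out in a short sketch using the figures just quoted:

    # Man-days to unload a timber ship, before and after containerization.
    before = 108 * 5  # 1970: 108 men for about five days = 540 man-days
    after = 8 * 1     # today: 8 people for one day = 8 man-days
    print(f"{before} -> {after} man-days: a {1 - after / before:.1%} reduction")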

This time the productivity tool kit aims, belatedly, to reconstruct (make that deconstruct)
the white-collar world. In fact, I see a five-sided pincer movement that will bring to fruition
my apparently bizarre "90% in 10 years" prognostication.

FIRST The destructive nature of the current flavor of competition, dotcoms. Sure, most will
fail. But the survivors will exert enormous pressure (fast!) on the Big Guys. When an
Amazon or a Charles Schwab moves into your neighborhood, you've got moments to react.
Or take king entrepreneur Jim Clark of Netscape fame. His latest venture,
Healtheon/WebMD, intends to squeeze hundreds of billions of dollars of waste out of the
health-care system. These new firms aim to create nothing less than havoc in the theaters in
which they operate.

SECOND Enterprise software. It's a jargony name for the tools that will hook up every
aspect of a business's innards (personnel, production, sales, accounting) and then hook up
all that hooked-up stuff to the rest of the "family" of suppliers and the suppliers' suppliers
and wholesalers and retailers and end users.

They are your nightmare, these "white-collar robots." The complex products from German
software giant SAP will do to your company's innards exactly what forklifts and robots and
containerization did to the blue-collar world circa 1960. Installing these tools is not easy.
The technical part is harrowing; the politics are horrendous. When the blue-collar robots
arrived, the unions raised hell. This time it's management bureaucrats who are turning
Luddite. Why? These tools threaten their cozy baronies, carefully crafted over several
generations.

But the robots did come. And they triumphed.

THIRD Outsourcing. M.I.T.'s No. 1 computer guru, Michael Dertouzos, said India could
easily boost its GDP by a trillion dollars in the next few years performing backroom white-
collar tasks for Western companies. He guessed that 50 million jobs from the white-collar
West could go south to India, whose population hit 1 billion last week. The average annual
salary for each of those 50 million new Indian workers: $20,000.

FOURTH The Web. Ford, GM and DaimlerChrysler announce a rare hookup. They will
link all their tens of thousands of suppliers into a single, Internet-based network. This entity
will encompass $250 billion annually of suppliers' products (and perhaps an additional
$500 billion of those suppliers' suppliers' products). In short, every penny of waste will be
wrung from the mammoth procurement system. The order cycle will speed up dramatically.
Medibuy aims for the same hat trick in medical supplies, DigitalThink in training,
CarStation in the auto-body-shop world. This is the white-hot world of B2B (business to
business) electronic commerce, which will soon encompass trillions upon trillions of
dollars in transactions.

FIFTH Time compression. It took 37 years for the radio to get to 50 million homes. The
Web got there in four. Hence my belief that while it took about a century to revolutionize
blue-collar job practices, this brave new white-collar regime will be mostly installed in a
tenth of that time: 10 years.

Each of these five forces is fact, not supposition. Each influences the others
multiplicatively. Therefore my unwillingness to back off my predictions about the power of
the white-collar tsunami bearing down on us. Unsettling madness is afoot. Especially if I'm
a 48-year-old white-collar staff member or middle manager entombed in a corporate tower
in Manhattan or Miami or Milan.

Yet these forces are liberating. Blue-collar robots took the grunt work out of factory and
warehouse and dockside. The same will happen to white-collar work. Just as workin' the
line at U.S. Steel was no walk in the park in 1946, passing papers in the tower is no great
joy. My dad did it for 41 years at the Baltimore Gas & Electric Co. He was, sad to say, a
white-collar indentured servant.

The world is going through more fundamental change than it has in hundreds, perhaps
thousands of years. The head economist at Sandia National Laboratories, Arnold Baker,
said it's the "biggest change since the cavemen began bartering." Do you want to be a
player, a full-scale participant who embraces change? Here is the opportunity to participate
in the lovely, messy playground called "Let's reinvent the world."

Here's a new role model I call Icon Woman:
She is turned on by her work!
The work matters!
The work is cool!
She is "in your face"!
She is an adventurer! She is the CEO of her life!
She is not God.
She is not the Bionic Woman.
She is determined to make a difference! (Dilbert would be appalled, no doubt.)
My Icon Woman, of course, embraces and exploits the Web.
She submits her resume on the Web and keeps it perpetually active there.
She is recruited and negotiates and is hired on the Web.
She is trained on the Web.
She creates and conducts scintillating projects on the Web via a far-flung "virtual" stable of
teammates (most of whom she's never met).
She manages her career and reputation-building efforts on the Web. And she has a fab
personal website!
But what, exactly, will she actually do?

Circa 2010. She will be at home. Working, for the next several months, for Ford on a
fiendishly difficult engineering problem. She won't be on Ford's payroll, though she will be
drawing full benefits, even as a contractor. (During President Hillary Rodham's second
term, health care, pensions and retraining will no longer be tied to a company but to the
individual.) Her 79-member project team, only one of whom she's met face-to-face (she
considers face-to-face a quaint idea that her mom suffered), comes from 14 nations. Her
fully wired home is her castle. After half a dozen virtual meetings this morning, she'll take
a so-called RETRB (ReTRaining Break) and attend a virtual class in engineering
(conducted from God knows where) as part of her virtual/online master's degree program.

She is deeply committed to her self-designed, do-it-from-anywhere-with-anybody "career"
path. She is relieved, by the white-collar robots, of 95% of the drudge work ... and is adding
value by being on the tippy top of her intellectual game. Her only security is her personal
commitment to constant growth and her global (virtual) rep for great work.

"Get a grip, Peters," you retort. Is this "be wild and crazy and Webby and ceo of your own
life" picture anything other than New Age/new economy/Palo Alto-consultantspeak b.s.?

I think it is relevant and real rather than wild and crazy, on at least two important scores.

One is that though my "house" is in Vermont, I've hung my professional shingle in Palo
Alto since 1981. All hell is breaking loose "out there/here." These folks may sound weird,
but they may also be redefining the world. And speaking as a 57-year-old, "they" don't look
or eat or taste or smell (or work) much like Frank Peters or George Babbitt or Dilbert.

Two is back to the future! I constantly remind my middle-aged seminar participants that
George Babbitt and Dilbert are not the quintessential Americans. Who are? Ben Franklin
(the father of self-help literature). Ralph Waldo Emerson (self-reliance was his shtick,
recall). Walt Whitman. And yes, motivational guru Tony Robbins. And yes, Donald
Trump. And ... Bentonville, Arkansas' Sam Walton ... and Bill Gates.

Hero Jim Clark, mentioned above, is no charmer, as revealed by Michael Lewis in The
New New Thing. In fact, to my reading, he comes off as about as delectable as Donald
Trump. But he's pure American bravado, a bravado that was lost in the Babbitt-Dilbert-Big
Bureaucracy-Cubicle Slave decades.

WHAT IF? Maybe the wild new-economy America is the old America. Truer to ourselves.
We came here to break free, to make our records in our awkward ways, as did my German
grandfather Jacob Ebert Peters. He arrived in the 1880s and was a wildly successful
Baltimore contractor 30 years later. Then he lost it all in the Great Depression. How
quintessentially American.

Like Grandpa, I am facing extinction, only by this new set of powerful forces. I make most
of my living giving live seminars and training programs and as a management consultant.
It's all gravitating to the Web. Gravitating, heck; it's moving at the speed of light. I am
scrambling to reinvent myself, to not just "cope" but to exploit the new communication and
connection media. Hey, there are young management gurus hot on my trail. Hot = Web
speed.

I'm completely fed up with Dilbert. He's funny. He's unerringly on the money. But he's a
hapless victim too. Damned if I'm going to be. In any event, it's going to be one hell of an
interesting ride.

What Will Be the 10 Hottest Jobs?
Looking for a career change? A decade ago, who would have guessed that Web designer
would be one of the hottest jobs of 2000? Here are some clues.

1 Tissue Engineers With man-made skin already on the market and artificial cartilage not
far behind, 25 years from now scientists expect to be pulling a pancreas out of a Petri dish.
Or trying, anyway. Researchers have successfully grown new intestines and bladders inside
animals' abdominal cavities, and work has begun on building liver, heart and kidney tissue.

2 Gene Programmers Digital genome maps will allow lab technicians to create customized
prescriptions, altering individual genes by rewriting lines of computer code. After scanning
your DNA for defects, doctors will use gene therapy and "smart" molecules to prevent a
variety of diseases, including certain cancers.

3 Pharmers New-age Old MacDonalds will raise crops and livestock that have been
genetically engineered to produce therapeutic proteins. Works in progress include a
vaccine-carrying tomato and drug-laden milk from cows, sheep and goats.

4 Frankenfood Monitors Not sure what's for dinner? With a little genetic tinkering, fast-
growing fish and freeze-resistant fruits will help feed an overpopulated planet, but such
hybrids could unwittingly wipe out the food chain. Eco-scouts will be on the lookout for
so-called Trojan gene effects, and bounty hunters will help the USDA eliminate transgenic
species that get out of hand.

5 Data Miners When Ask Jeeves just won't cut it, research gurus will be on hand to extract
useful tidbits from mountains of data, pinpointing behavior patterns for marketers and
epidemiologists alike.

6 Hot-line Handymen Still daunted by the thought of reprogramming your VCR, let alone
your newfangled DVD? Just wait until your 3-D holographic TV won't power up or your
talking toaster starts mouthing off. Remote diagnostics will take care of most of your home
electronics, but a few repairmen will still make house calls ... via video phone.

7 Virtual-reality Actors Pay-per-view will become pay-per-play, allowing these pros to
interact with you in cyberspace dramas. Scriptwriters will also be in high demand, as mouse
potatoes clamor for new story lines to escape from their droned-out existence.

8 Narrowcasters Today's broadcasting industry will become increasingly personalized,
working with advertisers to create content (read: product placement) just for you. Ambient
commercials will also hijack your attention by using tastes and smells, with the ultimate
goal of beaming buy-me messages directly into your brain.

9 Turing Testers Computer engineers will continue to measure their efforts to mimic human
intelligence, as British mathematician Alan Turing suggested 50 years ago, by asking you
whether you're talking to a person or a machine. By the time you can't tell the difference,
these human simulators will be used as unflappable customer service reps as well as
Internet attachés who can summarize your e-mails and even write back: "Hi, Mom, sorry I
missed your call ... "

10 Knowledge Engineers Artificial-intelligence brokers will translate your expertise into
software, then downsize you.

And what jobs will disappear?

1 Stockbrokers, Auto Dealers, Mail Carriers, Insurance and Real Estate Agents The Internet
will eradicate middlemen by the millions, with a hardy few remaining to service the
clueless. You'll cut us a deal, right, HAL?

2 Teachers Distance learning is becoming more popular, and through the miracle of online
classes and electronic grading, today's faculty lounge could become tomorrow's virtual help
desk. Though a complete conversion is unlikely, outsourcing our education system might
cost less than installing all those metal detectors.

3 Printers President Oprah may use her book club to rescue the printing press from
extinction when newspapers and magazines make the switch to digital paper. Xerox and
other visionaries are racing to produce a material that's as flexible as regular paper and as
versatile as a computer screen, with the end result keeping news junkies happy, not to
mention all those trees.

4 Stenographers Sophisticated voice-recognition software will replace court reporters and
lots of secretaries and executive assistants. Note to self: don't ditch the assistant just yet;
technology may cover the grunt work, but who'll cover for you when that report isn't ready
or get blamed for the snafu?

5 CEOs Top-down decision making will be too cumbersome, and golden parachutes too
obscene, for the blistering 24-hour business day. A global team of quick-thinking experts
will carry companies through the Internet age and beyond.

6 Orthodontists No more metal mouth, thanks to 3-D simulation programs that will crank
out a series of disposable, clear-plastic "aligners" to shift your teeth into position. Already
in clinical trials, this technology is geared for adults, so all you gap-toothed prepubes will
have something to look forward to.

7 Prison Guards Microscopic implants will restrain convicts from engaging in criminal
activity. The sensors will require lots of fine-tuning, though; we wouldn't want an
aggressive telemarketer getting zapped, would we?

8 Truckers Interstates will have "smart" lanes enabling computer-driven vehicles to travel
bumper to bumper at high speeds. Suburbia will decongest by using bottleneck sensors in
cars to suggest alternate routes, and while you can kiss those meter maids goodbye, expect
tickets to appear on your virtual dash.

9 Housekeepers If fridges today can decide to buy you more milk (online), then self-
motivated vacuums don't sound so far afield. Perhaps self-cleaning homes will use a central
vacuuming system or dust-eating nanobots. Either way, you can bet your retirement
community there are people working on it.

10 Fathers Between in-vitro fertilization and cloning, dads could become dinosaurs. Moms,
too, with the possibility looming of an artificial womb. Did somebody say George Orwell?

What Will Replace the Tech Economy?
Get ready for the bioeconomy, which will supplant our infotech economy. Bioec will give new
meaning to the smell of money
By Stan Davis and Christopher Meyer

We didn't realize we were no longer living in an industrial economy for about 20 years,
from the early 1950s to the early '70s. When we finally figured out the old economy had
exited, we didn't know what to call the new one. Postindustrial? Service? Shopping and
gathering? Information won the title. Get ready for déjà vu all over again. Like everything
else, all economies have beginnings and endings, and we can already see the end of this one
a few decades hence. Economies end not because they peter out but because a challenger
supplants them. That's what will happen around a quarter-century from now.

Hunting-and-gathering economies ruled for hundreds of thousands of years before they
were overshadowed by agrarian economies, which ruled for about 10,000 years. Next came
the industrial ones. The first began in Britain in the 1760s, and the first to finish started
unwinding in the U.S. in the early 1950s. We're halfway through the information economy,
and from start to finish, it will last 75 to 80 years, ending in the late 2020s. Then get ready
for the next one: the bioeconomy.

Life cycles for people and plants, for businesses, industries, economies and entire
civilizations have four distinct quarters: gestation, growth, maturity and decline. The
Internet is the main event of the information economy's mature quarter, the last phase of it
being marked by the widespread use of cheap chips and wireless technology that will let
everything connect to everything else. Life cycles overlap. So the information economy
will mature in the years ahead as the bioeconomy completes its gestation and finally takes
off into its growth quarter during the 2020s.

The bioeconomy opened for business in 1953, when Francis Crick and James Watson
identified the double-helix structure of DNA. The bioeconomy has been in its first quarter
ever since, and completion and publication of the decoded human genome marks the end of
this gestation period.

We're heading into the second, or growth, quarter, when hot new industries appear, much as
semiconductors and software did in the second quarter of the info economy. Thus biotech
will pave the way for the bioec era. During the next two decades, organic biotech will
overlap with inorganic silicon infotech and inorganic composite materials and
nanotechnologies.

During the overlap of infotech and biotech, we will be digitizing many biological
processes. Up until now, four kinds of information have dominated: numbers, words, sounds and
images. But information comes in many other forms, such as smell, taste, touch,
imagination and intuition. The problem is that our technologies for smell, taste and other
new information forms aren't yet developed enough to make them commercially viable. By
the 2020s, they will be.

Smell, for example, perhaps the most primal of senses, is being digitized the way sight and
sound have been. The basics of what makes a smell can be captured molecularly and
expressed digitally on a chip at a reasonable price. Companies like DigiScents of Oakland,
Calif., and Ambryx of La Jolla, Calif., have already developed digital odors. Cyrano
Sciences of Pasadena, Calif., is developing medical-diagnostics technology that can "smell"
diseases.

Imagine sending a greeting card that incorporates the smell of flowers with a written and
graphic message. By the 2020s, digital movies will have their own distinctive smell prints.
(You can watch Haley Joel Osment in a remake of The Beach and smell the coconut oil!)
Why stop there? How does a bank smell, and how does Chase smell different from
Citigroup? How about retailers? This is only a tiny example of what will come.

More fundamentally, the first four industries to be infused by the bioec era will be
pharmaceuticals, health care, agriculture and food. Best known are the dozens of
bioengineered drugs already on the market. Most of these save lives by treating existing
problems. One of the biggest shifts for biotech in the decades to come, then, will be the way
it transforms the health-care paradigm from treatment to prediction and prevention. Health
care today is really sick care. The sick-care business model made money by filling hospital
beds. Currently, we're in the managed-care model. It is transitional, lasting one to two
decades. Here, you make money by emptying beds. In the bioeconomy, health care will
work on a preventive model, making money by helping people avoid having to enter a
hospital in the first place.

Basic needs are met in every economy by using the latest technologies available. In the
bioec of the 2020s, the farm will be a super-bioengineered place with multimillion-dollar
manufacturing plants instead of fields.

Today bioengineered milk, meat and produce are already on our supermarket shelves.
Numerous varieties of corn are biogenetically altered, albeit not without challenge. One
study showed that pollen from some strains of altered corn killed the larva of the monarch
butterfly. Fears of Frankenfoods have caused enough of a furor to disrupt Monsanto's life-
sciences strategy and help topple its CEO. Such incidents will certainly multiply.

Beyond 2025, when we move into the mature bioec, the effects and applications of biotech
will spread into sectors seemingly unrelated to biology. In the 1950s and '60s it was
difficult to comprehend that computers would change every industry (from manufacturing
to hotels to insurance), just as it is now tough to see how biotech will alter nonbiological
businesses. By the third quarter of the next economy, somewhere in the mid-century, bio
applications will seep into many of the nooks and crannies of our nonbiological lives.

Problems will spread as much as benefits do. Each era produces its own dark side. The
industrial era was accompanied by pollution and environmental degradation. The major
problem of the information age is privacy. In the bioeconomy, the issue will be ethics.
Cloning, bioengineered foods, eugenics, genetic patenting and certainty about inherited
diseases are just a few of the many developments that are already creating a storm. And the
storm will intensify in the U.S.

All this will make baby boomers a unique generation. They will be the first in history to
span three distinct economies. Born at the end of the industrial period, they will spend their
entire careers in the information age and will end their days watching their grandchildren
negotiate the bioeconomy.

Generation Xers, born after 1964, will be different. During their working years, they will
experience two major economic shifts: first, from the crunching to the connecting halves of
this information economy and, second, from a microwave-based connected universe to the
cell-based world of biologic and bionomics. Those of you in Gen Y may have to go through
three!

However long you will spend in it, the bioeconomy is the next one to be born, and, of all
economies past, present and future, it will exert an impact that will make the infoeconomy
look like the runt of the litter.

Will There Be Any Hope for the Poor?
Poverty isn't defined merely by GDP. It has political and educational causes, and
multidimensional remedies
By Amartya Sen

Progress is more plausibly judged by the reduction of deprivation than by the further
enrichment of the opulent. We cannot really have an adequate understanding of the future
without some view about how well the lives of the poor can be expected to go. Is there,
then, hope for the poor? To answer this question, we need an understanding of who should
count as poor. Some types of poverty are easy enough to identify. There is no way of
escaping immediate diagnosis when faced with what King Lear called "loop'd and
window'd raggedness."

But as Lear also well knew, deprivation can take many different forms. Economic poverty
is not the only kind of poverty that impoverishes human lives.

In identifying the poor, we must take note, for example, of the deprivation of citizens of
authoritarian regimes, from Sudan to North Korea, who are denied political liberty and civil
rights. And we must try to understand the predicament of subjugated homemakers in male-
dominated societies, common in Asia and Africa, who lead a life of unquestioning docility;
of the illiterate children who are offered no opportunity of schooling; of minority groups
who have to keep their voices muffled for fear of the tyranny of the majority; and of
dissidents who are imprisoned and sometimes tortured by the guardians of "law and order."

Those who like to keep issues straight and narrow tend to resist broadening the definition of
poverty. Why not just look at incomes and ask a question like "How many people live on
less than, say, $1 or $2 a day?" This narrow analysis then takes the uncomplicated form of
predicting trends and counting the poor. It is a cheap way of telling "the future of the poor."
But human lives can be impoverished in many different ways. Politically unfree citizens,
whether rich or poor, are deprived of a basic constituent of good living. The same applies
to such social deprivations as illiteracy, lack of health care, unequal attention to the
elementary interests of women and of young girls and so on.

Nor can we ignore the linkages between economic, political and social deprivations.
Advocates of authoritarianism ask a misleading question, "Is political freedom conducive to
development?," overlooking the fact that political freedom itself is part of development. In
answer to the wrongly asked question, they respond with a wrongly given answer: "Growth
rates of GDP are higher in nondemocratic countries than in democratic ones." There is no
confirmation of this oft-repeated belief in extensive empirical studies. Sure, South Korea
might have grown fast enough before the re-establishment of democracy, but not so the less
democratic North Korea. And democratic Botswana certainly grew much faster than
authoritarian Ethiopia or Ghana.

Furthermore, the growth of GDP is not the only economic issue of importance. Reducing
political deprivation can indeed help diminish economic vulnerability. There is, for
example, considerable evidence that democracy as well as political and civil rights can help
generate economic security, by giving voice to the deprived and the vulnerable. The fact
that famines occur only under authoritarian rule and military dominance, and that no major
famine has ever occurred in an open, democratic country (even when the country is very
poor), merely illustrates the most elementary aspect of the protective power of political
liberty. Though Indian democracy has many imperfections, the political incentives
generated by it have nevertheless been adequate to eliminate major famines right from the
time of independence in 1947 (the last famine was four years before that, in 1943, which I
witnessed as a child).

In contrast, China, which did much better than India in several respects, such as the spread
of basic education and health care, had the largest famine in recorded history in 1959-62,
with a death toll that has been estimated at 30 million. Right now, the three countries with
continuing famines are also in the grip of authoritarian and military rule: North Korea,
Ethiopia and Sudan.

In fact, the protective power of democracy in providing security is much more extensive
than famine prevention. The poor in booming South Korea or Indonesia may not have
given much thought to democracy when the economic fortunes of all seemed to go up and
up together. But when the economic crises came (and divided they fell), political and civil
rights were desperately missed by those whose economic means and lives were unusually
battered. Democracy has become a central issue in these countries now: in South Korea,
Indonesia, Thailand and elsewhere.

Democracy, which is valuable in its own right, may not be especially effective
economically all the time, but it comes into its own when a crisis threatens and the
economically dispossessed need the voice that democracy gives them. Among the lessons
of the Asian economic crisis is the importance of social safety nets, democratic rights and
political voice. Political deprivation can reinforce economic destitution.

To look at a different type of interconnection, there is plenty of evidence from the positive
experience of East and Southeast Asia that the removal of social deprivation can be very
influential in stimulating economic growth and sharing the fruits of growth more evenly. If
India went wrong, the fault lay not only in the suppression of market opportunities but also
in the lack of attention to social poverty (for example, in the form of widespread illiteracy).
India has reaped as it has sown by cultivating higher education (its booming software
industry is only one effect of that), but the country has paid dearly for leaving nearly half
the people illiterate. Social poverty has helped perpetuate economic poverty as well.

If I am hopeful about the future, it is because I see the increasingly vocal demand for
democracy in the world and the growing understanding of the need for social justice.
Democracy is recovering some of its lost ground in Asia, Latin America and even Africa.
Gender equity and basic education are beginning to receive more attention in India,
Bangladesh and elsewhere. I am not unconditionally hopeful, but certainly conditionally so.
We must, however, take a sufficiently broad view of poverty to make sure the poor have
reason for hope.

Will Everyone Have the Bomb?
It's still tough to deliver a nuclear punch, which is why we really need to worry about other
weapons in the arsenal of destruction
By Gideon Rose

When Israel was working on a secret nuclear program in the 1960s, satirist Tom Lehrer
captured the rationale succinctly: "'The Lord's our shepherd,' says the psalm, but just in
case, we better get a bomb." Israel was following the U.S., Britain, the Soviet Union,
France and China. India joined the club too, followed eventually by Pakistan. And today
North Korea, Iran, Iraq and Libya are knocking at the door.

Some see these moves as steps toward a world full of nukes. But in retrospect what is
striking is less the number of countries that have gone nuclear than the number that have
not. And several, including South Africa, Brazil and Argentina, have even reversed course
and abandoned once active weapons programs.

Will such restraint continue for long in the anarchic, post-cold war era? American officials
hope so, but they aren't betting on it. Now as before, when it comes to proliferation, the
U.S. tries to have it both ways. It supports a wide range of arms-control efforts designed to
stop the spread of nuclear weapons abroad while hanging on tightly to a nuclear capability
of its own and sometimes even brandishing it to make a point. This attracts criticism from
others, but hypocrisy is par for the course in international relations. And besides, American
officials are correct to argue that in this case consistency--"Hey, everyone should have as
many bombs as we do!"--would not be a virtue.

Looking ahead, old-style proliferation--the acquisition of nuclear weapons by non-nuclear
states--will hardly be the worst problem. Most countries have little interest in getting the
bomb, either because they are not gravely threatened (such as Costa Rica) or because their
safety is guaranteed by another nuclear power such as the U.S. (like Germany and Japan).
The few countries that are interested, meanwhile, can be divided into what have been called
the orphans and the rogues.

Orphans such as Israel, India and Pakistan live in dangerous neighborhoods and have
legitimate security concerns. They will probably behave just as the original nuclear powers
did, which is to say they will use the weapons primarily for deterrence. The ultimate effect
of their joining the club should be to extend the cold war's great power stability--and
harrowing crises--to a few regional hot spots. The chief problem with the orphans is
getting them to understand the importance of proper safety measures, secure command and
control procedures, and other cold war lessons. Rogues such as Iraq, North Korea and
Libya are much more dangerous because they might be reckless or desperate enough to
threaten or use their capabilities for offensive and not merely defensive purposes. Keeping
weapons of mass destruction out of their hands is a critical challenge, which will have to be
met by constant bullying--and occasional bribing--along with better control over the
materials and expertise from the former Soviet nuclear program. Just as critical will be
maintaining a strong and credible nuclear and conventional deterrent so that even if rogues
should manage to get terrible weapons, they will think long and hard before using them.

The really difficult problem will be a new kind of proliferation involving the acquisition of
chemical, biological and cyberweapons by subnational actors such as terrorist groups, cults
or angry individuals. These weapons are easy to make, hard to track and hard to defend
against. This means that even if the U.S. does spend tens of billions of dollars on a system
to shoot down North Korean missiles, we will still have to deal with the equally pressing
problem of stopping doomsday cults or future Unabombers armed with deadly viruses. Just
as internecine conflicts have risen while interstate wars have declined, so subnational terror
may increase as traditional nuclear threats wane. Monitoring and deterring states are hardly
easy, but they pale in difficulty compared with the task of monitoring and deterring obscure
and fleeting groups and individuals. Not everyone in our future will have a bomb, of
course. If countries really want to, though, a lot of them will still be able to lay their hands
on the stuff of nightmares.

Gideon Rose is Olin Senior Fellow for National Security Studies at the Council on Foreign
Relations and senior editor of Foreign Affairs

Will China Be Number One?
Five hundred years ago, China dominated Asia as the world's most advanced superpower.
Now, with a growing economy and a young population, the nation is again rising to
superpower status--with the U.S. as nemesis
By Paul Bracken

To understand China's role in the world come 2025, it helps to look at the rise of Prussia in
the 1870s. It is nearly impossible to imagine a world not led by Western institutions and not
dominated by Western values. But China's growing power signifies just such a transition, in
much the same way Prussia stripped Britain and France of their leading roles in Europe.
The rise of Beijing does not automatically translate into a decline for Washington, but it is a
challenge to America's superpower position.

China changes the meaning of being a superpower. At present a superpower is a country
with a big economy and global military reach. The word was popularized in the 1960s to fit
the U.S., the only holder of the title today. China will have neither the material wealth of
the U.S., nor its global military reach. But looking at the rise of China through the narrow
framework of numbers of automobiles or Osprey helicopters doesn't come to grips with the
country's sources of power. Defining a superpower in terms of economic and military size
leaves out the power that comes from being able to upset the system, even when done
unintentionally. A new definition of superpower must take account of who can upend an
order that has lasted for centuries. On this score, China is a superpower.

With its vast scale, China can upset the global trading rules and security understandings
that underwrite the current Western-led system. China's problems with economy, energy
and the environment will be the world's problems, because if they are not taken care of,
terrible consequences will spill across the map far from China. Perversely, the gap in
material wealth and military technology separating China from the West is actually a
source of leverage for Beijing. In the West the gap makes China look weak. But the gap
actually makes China strong. China has the advantage--the underdog's advantage--that
comes from knowing that shutting the country out, trying to hold it down, provokes the
resentments the West wants to avoid: resentments that might make Chinese markets hostile
to U.S. exporters. When a U.S. bomber accidentally struck China's embassy in Belgrade
during the Kosovo war, protests were heard not just in Beijing but also in other cities
around the world. America's image as an overbearing superpower is something the Chinese
are only too willing to exploit.

People forget that 500 years ago, China was the world's sole superpower. When many
Europeans were living in mud huts and scratching the soil with sticks, China was the
greatest economic and military power on earth. A hundred years before Europe began its
mastery of Asia and America, China had the biggest and best navy in the world. But for an
accident of history, Europe would be speaking Chinese today. China discovered the
inventions that would pave the way to world mastery for those who put them to use: the
printing press, gunpowder and the magnetic compass. Given this economic, military and
technical head start, what happened? Europe, not China, became the world's colonizer and
mapmaker. Why did China do so badly in the modern era?

The question is called the Needham paradox, after Joseph Needham, the great British
scholar who raised it in his multivolume history of Chinese technology. Needham's answer
sheds light on China's ultimate condition, allowing us to sort through the buzz of short-term
problems that distract attention from the fundamental change now taking place. Yes, the
West mastered the technology that China first discovered. Yet much more important,
according to Needham, was that China lost its edge in the 15th century by suppressing
entrepreneurs whose power posed a threat to the Emperor. The empire was made safe from
within, but at the price of atrophy in the face of the Western challenge. This condition
continued under Mao Zedong.

China's current economic growth is brought about by decentralized business forces, not by
the state. The conditions of the Needham paradox are ending. As they do, there is reason to
believe Chinese capitalism may be especially dynamic. Even though modern capitalism is a
Western invention, Westerners are not the only ones who can master it. For cultural
reasons, capitalism is in many ways more natural to Asia than it ever was to the West. The
devotion to one's task as the hallmark of productive work had to be beaten into a mostly
balky peasantry in Europe, even while such dedication flourished under Confucianism.
Today the work ethic in China puts the U.S. to shame. Imagine what will happen when
technology and innovation join with what some U.S. experts in the 1960s contemptuously
called "ant labor" in China.

The West's belief is that globalization will fundamentally change China and minimize the
problems of the country's inclusion in the world system. This is the American myth of the
1990s. We Americans, for example, see a rigid trade-off between economic success in the
age of globalization and military power. We think other countries can have one or the other
but not both. This simplistic view, however, forgets even our own history, as well as the
history of other rising states. The rise of the U.S. to superpowerdom in the 1940s and 1950s
was accompanied by sharp growth in economic and military power. Each fueled the other.
We forget this because as a status quo power we want to freeze a global order that benefits
us. We'd much rather hold to the belief that new powers will be only too happy to continue
to be led by a Western club whose historical record of treating non-Westerners has left a
grudge we'd also rather forget.

China has no intention of forgetting the past. Indeed, its own complicity in its downfall as a
great power makes it useful to blame others. China faced famine repeatedly not because of
its enormous population but because of the actions of its leaders. Turning this painful
history into a positive asset by stoking smoldering resentments and nationalism can be an
effective way for China to get more from negotiations with Washington than it otherwise
would. It can also hold the country together as the Communist Party crumbles.

Another challenge to America is a military one. Beijing has started a significant expansion
of those parts of its military that will undercut the American presence in Asia unless we
spend large sums to counter it. China is turning out hundreds of missiles that threaten the
American bases on which U.S. military power is founded. With these bases under threat,
our military capacity to be a player in Asia drops sharply. The U.S. may have a "global"
military that can't play in the most important part of the world.

The U.S. is a rich and innovative country. China is unlikely to displace the U.S. by 2025.
But by changing what it means to be a superpower, Beijing undermines the one country that
now holds the title. China can upset the global order without deliberately trying to do so
and can check U.S. influence in parts of Asia where we have been for 50 years. One thing
can be said for sure: the costs of staying a superpower for the U.S. are about to go up,
sharply.

Paul Bracken is a professor at the Yale School of Management and author of Fire in the
East: The Rise of Asian Military Power and the Second Nuclear Age

What Will Peace Mean For the Middle East?
Not tranquillity. Oh, sure, that final White House signing ceremony that eventually ends
hostilities will open the way, unevenly and begrudgingly at first, for tourism across the
Golan Heights, quiet along the Galilee, coexistence in Jerusalem and joint ventures in
Jordan and the Gaza Strip. But a Middle East peace deal doesn't mean peace in the Middle
East.
By Robin Wright

The conflict's conclusion will be more memorable for the turmoil that follows. A couple
decades down the road, you may even find yourself musing, "Oh, for the good old days of
straightforward conflicts between Arabs and Israelis. Like the cold war--how simple it was
back then."

The comparison is appropriate, for the aftermath of the Arab-Israeli dispute will mirror the
aftermath of the superpower rivalry, writ small. Once again the dangers will shift from big
bloody wars between states or their surrogates to a bunch of smaller but messier and more
persistent conflicts within countries. But don't be fooled. The stakes will be just as high
over the next quarter-century as they were in the last, for the new disorder won't dissipate
until the political map of the Middle East has been redrawn.

Here's how peace will spark upheaval: Middle Eastern governments will no longer be able
to justify huge military expenditures (often made to secure their own rule) or defer public
demands for more freedoms and better living conditions by invoking border defense,
territorial mandates, nationalism or cultural honor. So they'll spend the next quarter-century
confronting--or being confronted by--the forces already changing the rest of the world.

No capital will be exempt, from the world's oldest in Damascus to its newest in Palestine,
from dusty Riyadh to scenic Rabat, from war-weary Beirut and Baghdad to sleepy Muscat
and Manama, from landlocked Amman to seafront Algiers. Oh, and Jerusalem too. Syria,
Libya and Iraq will witness the deepest transformations for the simple reason that their
eccentric ideologies are the most bankrupt--and the most out of synch with their people.
Their institutions are corrupt. And their economies are moribund.

Countries like Egypt and Algeria in the middle of the political spectrum are most
vulnerable short term. Both took tepid steps toward democracy in open, multiparty
elections in the 1980s, then marched backward in the 1990s. Both began the 21st century
facing unprecedented social pressures from soaring populations they can't feed, educate,
employ or house. Egypt, home to 67 million people (almost half the entire Arab world),
produces an additional 1 million mouths to feed every eight months. In both countries,
leaders have stalled on reforms. The result has been a decade of violence. More is to come.

Despite Western allies, the world's last bloc of true monarchies--Saudi Arabia, Kuwait, the
little gulf sheikdoms, Morocco and Jordan--isn't off the hook either. Monarchy went out of
political fashion in the 20th century. Most Arab dynasts have held on thanks to oil, isolation
or tribal and family loyalties. But petrodollars also educated a generation now eager to
connect with a globalizing world.

Then toss in technology. Courtesy of the borderless Internet, parties banned by Iraq, Saudi
Arabia, Libya and Tunisia have websites and use e-mail for secret communications.
Satellite receivers, nicknamed "couscous dishes" because they've become a household
necessity, have changed Morocco's skyline--and access to ideas in the outside world. Al-
Jazeera, a television station based in Qatar that is available throughout the Arab world, airs
news--real news--and political debates as heated as anything on CNN's Crossfire.

But the next quarter-century will be so unsettling because it will require change more
profound than anything witnessed during the overhaul of Eastern Europe's regimes or Latin
America's military dictatorships. The Arabs will have to find a way to cross the threshold
between tradition and modernity; they will have to find a formula that is true to both
historic religious cultures and 21st century pluralism. In the West, it was the Christian
Reformation that opened the way for the Age of Enlightenment and the introduction of
modern liberal democracy based on individual rights. In the Middle East, it would amount
to nothing less than an Islamic Reformation. And it's already well under way.

Islam will often be the idiom of political change because so many regimes have for so long
excluded secular opposition. It's the most widespread alternative that provides a legal forum
and a legitimate format, since Islam is the only major monotheistic religion that offers a set
of specific rules to govern society as well as a set of spiritual beliefs. That doesn't mean
more Iranian-style revolutions; Arabs are all too aware of the costs and repercussions of the
Persians' revolt against 2,500 years of dynastic rule. But they're taking note of ideas put
forward by Iran's energetic reformers and philosophers. Among the daring arguments: to be
a true believer, one must come to the faith freely. Thus freedom precedes faith--a quantum
leap for a religion whose name literally means "submission." The process will be divisive,
for it will demand answers to existential questions of identity, belief and even the role of
historic experience. In the end, Islam is more likely to be a vehicle for the transition, not
necessarily the finished product. At the same time, don't expect the emergence of a string of
liberal, Western-style democracies.

Israel will undergo its own confrontation over faith and identity. Politically, the country's
founding ideology in the 20th century was Zionism, a predominantly socialist, secular and
rather utilitarian ideology. In 21st century Israel, the two ends of the spectrum pit
democrats, who support intense capitalism and a hedonistic Americanized culture, against
fundamentalists, who cling to creating Greater Israel, as outlined in Abraham's covenant
with God, and who want to impose strict religious regulations on everyday life.

Ethnically, the country was launched by European Ashkenazi Jews who looked to the West
for inspiration. Because of demographics and immigration, Israeli society over the next
quarter-century will increasingly be Sephardic Jews from the East--and Israeli Arabs
whose families didn't flee in 1948. Without the imperative of security, Israel and the issue
of its mission will get wrapped up in robust debates.

For the first decade of this new century, the Old Guard throughout the region will resist
change. But by the time it's over, most of the big names--Syria's Hafez Assad, Egypt's
Hosni Mubarak, Libya's Muammar Gaddafi, Saudi Arabia's King Fahd--will be gone. Who
dares to speculate, however, about Saddam Hussein?

The leadership shift began in the late 1990s with the emergence of a new generation. Jordan
is ruled by young King Abdullah, who watches Dharma & Greg and runs around in
disguise to check out his government's performance. Syrian heir apparent Bashar Assad
plays Faith Hill on a Walkman and, as chairman of the Syrian Computer Society, is
bringing the information age to a controlled society, made so by his father. Qatar's Sheik
Hamad bin Khalifa al-Thani ousted his father, opened up a cloistered society and then gave
males--and females--the vote.

Each of them breaks the mold, but they are merely transition figures. A real Middle East
peace will begin only with the emergence from the educated middle class of ordinary
people who are allowed to vie freely for, win and be defeated in elective office--and thus
end the pattern of leaders and elite courtiers in power for life.

Will You Become Your Own Nation?
Scots, Croats, Chechens--everybody seems to want a country of his own. In the future, our
loyalties may be determined by what we believe rather than where we live
By Samuel P. Huntington

The nation-state is a rare and recent phenomenon in human affairs. Nation-states emerged in
the West with the invention of the printing press and the proliferation of publications in
vernacular languages in the 16th and 17th centuries. Slowly people in Western Europe
acquired the rudiments of national identity, defined at first largely in religious terms.

In the 19th century, national consciousness spread throughout European societies. In the
20th century, Third World students of Western nationalism returned home to lead national-
liberation movements. Meanwhile, the concept of the nation--an ethnic or cultural
community--had become linked to that of the state--a purely political organization. No
reason exists in logic or experience, however, why sources of identity and authority should
coincide, and through most of human history they have not.

But while the nation-state has been the pre-eminent institution of the modern world for
several centuries, it is now seen to be in a condition of decay. Throughout the world, people
are reconsidering what they have in common and what distinguishes them from others.
Modernization, economic development, urbanization and globalization have led people to
shrink their identity. People now identify with those who are most like them, those with
whom they share a common language, religion, tradition and history. Today Scots,
Kosovars, Catalonians, Chechens and others are all affirming their identity and seeking a
political voice.

In the 19th and 20th centuries, nationalism was promoted by elites who developed
sophisticated appeals to generate a sense of national identity among those whom they saw as
their compatriots and to rally them for nationalist causes. Now, however, the emergence of a
global economy, plus the arrival of transnational coalitions (on issues such as women's
rights or the environment), has led many elites to develop a more cosmopolitan identity. Yet
the average citizen in most countries remains strongly nationalistic and often strongly
opposes elite views.

Waking up to these developments means a number of things. First, it suggests the need to
question the linkage of identity and authority implied by nation-states. No reason exists
why--in addition to states--nationalities, diasporas, religious communities and other groups
should not be treated as legitimate actors in global affairs.

At the same time, it's worth recognizing that the efforts of the U.S. government and others
to get people to live in multinational and multiethnic communities are more often than not
exercises in futility. Instead, it is often wise to accommodate those pushing for ethnic
separation, segregation and homogenization--even if that means partitioning entire nations
to reduce violence.

Global politics is growing more complex. States will remain the principal actors in global
politics. But they are being joined by many other actors, including failed states such as
Sierra Leone, suprastate organizations like the European Union, interstate organizations like
the International Monetary Fund and INGOs (international nongovernment organizations)
such as Greenpeace.

Global politics is, in a sense, coming to have the pluralism and diversity typical of politics
in democratic countries--with one crucial difference. Democratic societies recognize and
accept the people as the ultimate source of sovereignty and some government institutions,
usually the legislature and courts, as the ultimate sources of authority. In the emerging
global politics, however, state sovereignty and authority are withering, and no alternative,
such as some system of world government, is about to fill the vacuum. The result is almost
certain to be chaos. The basic issue for the next quarter-century is whether statesmen will
have the patience and wisdom to manage this chaos in peaceful rather than violent fashion.

Are We Coming Apart Or Together?
If you like things that are new and different, our globalizing world is a dream. Plenty of
folks, though, want things to stay the same
By Pico Iyer

It is a truth all but universally acknowledged that the more internationalism there is in the
world, the more nationalism there will be: the more multinational companies, multicultural
beings and planetary networks are crossing and transcending borders, the more other forces
will, as if in response, fashion new divisions and aggravate old ones. Human nature abhors a
vacuum, and it is only natural, when people find themselves in a desert, without boundaries,
that they will try to assuage their vulnerability by settling into a community. Thus fewer and
fewer wars take place these days across borders, and more and more take place within them.

Many Americans, rejoicing in an unprecedented period of economic success and celebrating
the new horizons opened up by our latest technologies, are likely to embrace the future as a
dashing (if unknown) stranger who's appeared at our door to whisk us into a strange new
world. Those who travel, though, are more likely to see rising tribalism, widening divisions
and all the fissures that propel ever more of the world into what looks like anarchy. Fully
97% of the population growth that will bring our numbers up to 9 billion by the year 2050
will take place in developing countries, where conditions are scarcely better than they were
a hundred years ago. In many cases, in fact, history seems to be moving backward (in
modern Zimbabwe, to take but one example, the average life expectancy has dwindled from
70 to 38 in recent years because of AIDS). To travel today is to see a planet that looks more
and more like a too typical downtown on a global scale: a small huddle of shiny high-rises
reaching toward a multinational heaven, surrounded on every side by a wasteland of the
poor, living in a state of almost biblical desperation.

When people speak of a "digital divide," they are, in effect, putting into 21st century
technological terms what is an age-old cultural problem: that all the globalism in the world
does not erase (and may in fact intensify) the differences between us. Corporate bodies
stress connectedness, borderless economies, all the wired communities that make up our
worldwide webs; those in Chechnya, Kosovo or Rwanda remind us of much older forces.
And even as America exports its dotcom optimism around the world, many other countries
export their primal animosities to America. Get in a cab near the Capitol, say, or the World
Trade Center, and ask the wrong question, and you are likely to hear a tirade against the
Amhara or the Tigreans, Indians or Pakistanis. If all the world's a global village, that means
that the ancestral divisions of every place can play out in every other. And the very use of
that comforting word village tends to distract us from the fact that much of the world is
coming to resemble a global city (with all the gang warfare, fragmentation and generalized
estrangement that those centers of affluence promote). When the past century began, 13% of
humans lived in cities; by the time it ended, roughly 50% did.

The hope, in the face of these counterclockwise movements, is that we can be bound by
what unites us, which we have ever more occasion to see; that the stirring visions of Thomas
Paine or Martin Luther King Jr. have more resonance than ever, now that an American can
meet a Chinese counterpart--in Shanghai or San Francisco (or many places in between)--
and see how much they have in common. What Emerson called the Over-soul reminds us
that we are joined not only by our habits and our urges and our fears but also by our dreams
and that best part of us that intuits an identity larger than you or I. Look up, wherever you
are, and you can see what we have in common; look down--or inside--and you can see
something universal. It is only when you look around that you note divisions.

The fresher and more particular hope of the moment is that as more and more of us cross
borders, we can step out of, and beyond, the old categories. Every time a Palestinian man,
say, marries a Singhalese woman (and such unions are growing more common by the day)
and produces a half-Palestinian, half-Singhalese child (living in Paris or London, no doubt),
an Israeli or a Tamil is deprived of a tribal enemy. Even the Palestinian or Singhalese
grandparents may be eased out of longtime prejudices. Mongrelism--the human equivalent
of World Music and "fusion culture"--is the brightest child of fragmentation.

Yet the danger we face is that of celebrating too soon a global unity that only covers much
deeper divisions. Much of the world is linked, more than ever before, by common surfaces:
people on every continent may be watching Michael Jordan advertising Nike shoes on CNN.
But beneath the surface, inevitably, traditional differences remain. George Bernard Shaw
declared generations ago that England and America were two countries divided by a
common language. Now the world often resembles 200 countries divided by a common
frame of cultural reference. The number of countries on the planet, in the 20th century, has
more than tripled.

Beyond that, multinationals and machines tell us that we're all plugged into the same global
circuit, without considering very much what takes place off-screen. China and India, to cite
the two giants that comprise 1 in every 3 of the world's people, have recently begun to
embrace the opportunities of the global marketplace and the conveniences of e-reality (and,
of course, it is often engineers of Chinese and Indian origin who have made these new
wonders possible). Yet for all that connectedness on an individual level, the Chinese
government remains as reluctant as ever to play by the rules of the rest of the world, and
Indian leaders make nuclear gestures as if Dr. Strangelove had just landed in Delhi. And as
some of us are able to fly across continents for business or pleasure, others are propelled out
of their homelands by poverty and necessity and war, in record numbers: the number of
refugees in the world has gone up 1,000% since 1970.

It seems a safe bet, as we move toward the year 2025, that governments will become no
more idealistic than they have ever been--they will always represent a community of
interests. And corporations cannot afford to stress conscience or sacrifice before profit. It
therefore falls to the individual, on her own initiative, to look beyond the divisions of her
parents' time and find a common ground with strangers--to apply the all-purpose adjective
"global" to "identity" and "loyalty." Never before in history have so many people, whether
in Manhattan or in Tuva, been surrounded by so much that is alien (in customs, languages
and neighborhoods). How we orient ourselves in the midst of all this foreignness and in the
absence of the old certainties will determine how much our nations are disunited and how
much we are bound by what Augustine called "things loved in common."


IV. Health & Environment
1. How green will the world look in the future?
Far less polluted; major efforts will go into giving up polluting inputs.

2. How hot will the world get?
A lot: the polar ice will melt and temperatures will rise, but human beings will adapt to the
new conditions.

3. Will there be better drugs?
Without a doubt, and it will be the body's own internal controls that diagnose illness and
schedule the dose.

4. What will happen to alternative medicine?
Homeopathic and similar remedies will all disappear in favor of more accurate medications
tailored to each patient.

5. Will Christopher Reeve walk again?
Not for the moment, but reconstruction of the spinal cord is advancing very quickly, and
Mr. Superman may well walk again.

6. Can we grow a new brain?
NO, but partial transplants will be possible; the best role, it seems, will be recipient rather
than donor.

7. Will any human beings remain in the wild?
NO; within 50 years the whole world will be civilized.

8. Will we still eat meat?
Meat consumption will drop precipitously in the coming years as more and more deaths are
recorded from excess fat and cholesterol.

9. Will we be able to replace body parts?
Of course we will: eyes, ears, organs and more will let us develop a new Hollywood-style
look.

10. What will the new deadly threats be?
New bacteria and viruses will appear and scourge humanity, claiming a great number of
lives while scientists develop the cures.

11. Can we make garbage disappear?
YES, to the extent that we stop producing it. In the future, new technologies will allow the
reuse of materials, helping to keep the generation of garbage in check.

12. What will be the catch of the day?
If we keep fishing at today's pace, the oceans will be empty of fish within 25 years.
Fortunately, fishing has lately been restricted by seasons, and aquaculture is flourishing, a
new activity that promises to sustain marine species.

13. Will we keep getting fatter?
Good news for the plump: new treatments and metabolic manipulations are already being
tested in mice, and trials in humans will surely follow soon.

14. Will we live to be 125?
Maybe YES and maybe NO. What is certain is that quality of life will improve, and that will
support longevity; now we just need scientists to develop better medicines to keep our
organs working in excellent condition for 125 years or more.

15. Will we still need sex?
Need it, perhaps not; want it... what do you think? What is certain is that sex will be
oriented more toward pleasure than procreation.

16. Will we find a cure for cancer?
Apparently NOT, but scientists will develop better treatments that offer patients longer and
less painful lives.

17. Will we run out of gasoline?
Everything suggests that human beings will have to develop new ways of living in the face
of the inevitable scarcity of hydrocarbons.

18. Will we have robots for household chores?
Everything suggests YES: fully automated, computerized homes will provide robots for
cleaning, security, entertainment and companionship.

19. Will Thomas Malthus be proved right?
The universal poverty and famine Malthus predicted seem to be more a problem of
distribution than anything else; even so, over the next 50 years the world's population will
grow by 3 billion people, bringing humanity close to the limits the earth can support.

What Would a Green Future Look Like?
Cities from Copenhagen to San Francisco are already exploring the possibilities for a
cleaner, healthier world with everything from water-harvesting roofs to wind turbines.
Here's TIME's sneak peek at what a green world might look like by the year 2025.
By MEGHAN MURPHY

1, 2, 3. Energy in the future will derive from pollution-free, renewable sources such as solar
power, hydrogen, hydroelectricity and wind. Moving toward this goal, the North Carolina
Solar Commission has developed two solar-powered houses that each have a total monthly
bill of only $25 for heating, ventilation and air-conditioning. In Iowa, the Midwest Wind
Energy Program generates more than 325,000 kilowatt-hours of energy with wind turbines,
and saved consumers $950 in equivalent coal use in May 1996.

4. Thanks to the Internet, telecommuting from home will be both environmentally sound
and feasible. Redmond, Wash., has already made big gains in improving air quality with a
telecommuting program that reduces highway travel by 23,400 miles per year.

5. Inexpensive, efficient mass transit, plus bikes and electric and fuel-cell-powered vehicles,
will alleviate traffic congestion and space shortages and enhance air quality. Honda
Corporation is testing a system in Japan that allows communities to share electric cars and
bicycles. In Copenhagen, public bikes are financed with advertisements on the wheels and
frames.

6. An environmentally friendly future will not know the meaning of waste: everything will
be integrated into recycling programs. Hot water left over from power generation will heat
buildings, as it does already for 70 percent of structures in Copenhagen. Other waste water
can be treated by fish, plants and snails in solar aquatic water treatment systems, a system
already in place in Bear River, Nova Scotia.

7. While e-commerce provides a no-emission purchase with the click of a mouse,
environmentalists worry that the increased consumption it brings about will prove
inconsistent with a green future. Malls, on the other hand, will be a two-way operation:
when you're through using any product you buy there, the stores will be required to take it
back for recycling. Compact cities will have retail centers within walking distance that use
local products. The city of Civano, Ariz., already has a retail space that is integrated with
offices, a school and parks.

8, 9. Rooftop and backyard gardening will increase local food production and provide
healthy alternatives for city residents. To reduce pesticide use and cut down on fossil fuel
pollution from transporting food long distances, San Francisco activists and policy makers
are supporting local organic farms. Following their lead, we will use natural pesticides and
fertilizers to cultivate crops instead of chemicals. To reduce irrigation and water waste,
agriculture will also focus on indigenous plant species.

10, 11. Green communities of the future will dramatically conserve energy and water.
Homes, such as those constructed for the sustainable community of Civano, Ariz., will be
built with eco-friendly materials such as straw and adobe and will use water-harvesting
roofs to collect rain for use in air-conditioning systems. Furnishings such as carpeting will
come from recycled materials. Builders will leave mature trees and native plants intact or
replant them.

How Hot Will It Get?
No one knows for sure, but the potential perils of climate change make it unwise for us to
ignore the greenhouse effect
by JAMES TREFIL

Not so long ago, people talked about global warming in apocalyptic terms--imagining the
Statue of Liberty up to its chin in water or an onslaught of tropical diseases in Oslo.
Recently, however, advances in our understanding of climate have moved global warming
from a subject for a summer disaster movie to a serious but manageable scientific and
policy issue.

Here's what we know. Since sunlight is always falling on the earth, the laws of physics
decree that the planet has to radiate the same amount of energy back into space to keep the
books balanced. The earth does this by sending infrared radiation out through the
atmosphere, where an array of molecules (the best known is carbon dioxide) form a kind of
blanket, holding outgoing radiation for a while and warming the surface. The molecules are
similar to the glass in a greenhouse, which is why the warming process is called the
greenhouse effect.
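
To see how that balance pins down the planet's temperature, here is a back-of-the-envelope radiative-balance estimate (a standard textbook illustration, not a calculation from the article). Setting absorbed sunlight equal to emitted infrared,

$$(1 - A)\,\frac{S}{4} = \sigma T^{4},$$

with solar constant $S \approx 1366\ \mathrm{W/m^2}$, albedo $A \approx 0.3$ and Stefan-Boltzmann constant $\sigma = 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$, gives $T \approx 255\ \mathrm{K}$, or about -18°C: close to the frigid no-greenhouse figure quoted below, and some 33°C colder than the roughly 15°C average the blanket actually provides.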

The greenhouse effect is nothing new; it has been operating ever since the earth formed.
Without it, the surface of the globe would be a frigid -20°C (-4°F), the oceans would have
frozen, and no life would have developed. So the issue we face in the next millennium is
not whether there will be a greenhouse effect, but whether humans, by burning fossil fuels,
are adding enough carbon dioxide to the atmosphere to change it (and our climate) in
significant ways.

You might think that, knowing what causes greenhouse warming, it would be an easy
matter to predict how hot the world will be in the next century. Unfortunately, things aren't
that simple. The world is a complex place, and reducing it to the climatologist's tool of
choice--the computer model--isn't easy. Around almost every statement in the greenhouse
debate is a penumbra of uncertainty that results from our current inability to capture the full
complexity of the planet in our models.

There is one fact, though, that everyone agrees on: the amount of carbon dioxide in the
atmosphere is increasing steadily. It is near 360 parts per million today, vs. 315 p.p.m. in
1958 (when modern measurements started) and 270 p.p.m. in preindustrial times (as
measured by air bubbles trapped in the Greenland ice sheet).
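
A quick bit of arithmetic on those figures (mine, not the article's): the rise from 315 p.p.m. in 1958 to about 360 p.p.m. today works out to

$$\frac{360 - 315}{1999 - 1958} \approx 1.1\ \text{p.p.m. per year},$$

assuming "today" means roughly 1999, and it puts the atmosphere about a third above its preindustrial level, since $(360 - 270)/270 \approx 0.33$.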

An analysis of admittedly spotty temperature records indicates that the world's average
temperature has gone up about 0.5°C (1°F) in the past century, with the '90s being the
hottest decade in recent history. This fact is quoted widely in the scientific community,
although there are nagging doubts even among researchers. Recent satellite records, using
different kinds of instrumentation, fail to show a warming trend.

If we accept that there has been moderate warming, we turn to computer models to see if
humans are to blame and what will happen to the earth's climate in the future. These models
are complex because climate depends on thousands of things, from Antarctic sea ice to sub-
Saharan soil conditions. While the electronic simulations are monuments to the ingenuity
and perseverance of their creators, they provide us with, at best, a fuzzy view of the future.
They have difficulty handling factors like clouds and ocean currents (two major influences
on climate), and if you fed the climate of 1900 into any of them, they couldn't predict the
climatic history of the 20th century. Like everything else in this frustrating field, the
models' limitations force us to make important decisions in the face of imperfect
knowledge.

The most authoritative predictions about future warming come from the Intergovernmental
Panel on Climate Change, a worldwide consortium of more than 2,000 climate scientists.
The current forecast is that by 2100 the earth's temperature will go up 1°C to 3.5°C (2°F to
7°F), with the best guess being an increase of 2°C (4°F).

At the lower end of this predicted warming range, the temperature rise would take us back
to the conditions that existed between A.D. 950 and 1350, when the climate was 1°C (2°F)
warmer than it is now. This time period is regarded as one of the most benign weather
regimes in history. To find temperature swings at the upper end, you have to go back
10,000 years, to when the earth was exiting the last Ice Age. Temperatures during the Ice
Age were 5°C (10°F) cooler than they are now, and there was a series of incidents during
which global temperatures changed as much as 10°F in a matter of decades. If that were to
happen now, expanding oceans might flood coastlines and generate fiercer storms. And as
weather patterns changed, some places could get wetter and some drier, and the ranges of
diseases could expand. Civilization has seen--and endured--such changes in the past, but
they may come much more swiftly this time, making it harder to withstand the jolts.

The main reason for the spread in the IPCC predictions is uncertainty about how much
carbon dioxide will be added to the atmosphere by human activity, because how we will
respond to the threat of climate warming is the greatest imponderable of all. We can
probably develop technologies to deal with excess carbon--some scientists talk about
removing it from smokestacks and stashing it underground--but the most direct way to
control carbon dioxide in the atmosphere is not to put it there in the first place. This is the
point of the 1997 Kyoto Protocol--signed by 84 nations but not ratified by the U.S. Senate--
which would limit developed countries' carbon emissions from cars, power plants and other
major users of fossil fuels.

It makes no sense to overreact to the prospect of global warming, but it makes no sense to
ignore it either. A prudent policy that stresses conservation and alternate energy sources
seems to me to be wise insurance in an uncertain age. After all, our grandchildren will
thank us for developing high-mileage cars, energy-efficient appliances and cheap solar
energy, no matter how the future of global warming plays out.

James Trefil is a George Mason University physics professor and author of 101 Things You
Don't Know About Science and No One Else Does Either.

Got Any Good Drugs?
Gene-spliced medicines will replace what surgeons do and fit like a tailored suit
by FREDERIC GOLDEN

It was a blast of a New Year's party, but now that 2024 is history and 2025 is here, you're
feeling terrible. It's not just a hangover. You're sweating. You're listless. You're aching all
over. The doctor nods sympathetically while she pokes around here and there as physicians
have done ever since Hippocrates. Then she goes high tech: "Your gene card, please?" she
asks.

The computer digests the rectangle of plastic you hand her. "Yes, you've got a bad case of
flu," she says reassuringly. "I'm having the pharmacy create a drug for you. It'll be ready
before you're dressed, and you should be as good as new by tomorrow."

Maybe that's not exactly the way pills will be dispensed 25 years from now, but you can be
sure that at molecular biology's current pace, it will be something like that. By then
scientists will have decoded the entire human genome--all 140,000 or so genes that largely
say who we are and which of 4,000 diseases our flesh is heir to. They will also have found
exactly where common disease-causing errors lie along the genome's long, interlocked
chains of DNA.

That will have enormous practical consequences. Your genetic profile, recorded on a chip,
will let doctors--or, more likely, their computerized diagnostic tools--determine your exact
level of risk for a particular disease and which proteins and enzymes your body lacks.
There will be no more wasteful trial and error, with costly pills winding up in the trash
because they produced unwelcome reactions or didn't work for you. Instead you'll get
customized prescriptions, created to "fit" on the very first try, like a Savile Row suit.
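
To make the idea concrete, here is a toy sketch in Python of how a gene-card lookup might steer a prescription. Every marker, variant and drug name below is invented for illustration; the article doesn't specify how such a card would be encoded.

# Toy illustration of "gene card" prescribing; all names are invented.
gene_card = {
    "MARKER_A": "slow_metabolizer",   # hypothetical drug-metabolism gene
    "MARKER_B": "typical",
}

# Hypothetical rules: a (marker, variant) pair rules certain drugs out.
contraindications = {
    ("MARKER_A", "slow_metabolizer"): {"drug_x"},
}

def customize(candidates, card):
    """Keep only the candidate drugs the card does not rule out."""
    blocked = set()
    for (marker, variant), drugs in contraindications.items():
        if card.get(marker) == variant:
            blocked |= drugs
    return [d for d in candidates if d not in blocked]

print(customize(["drug_x", "drug_y"], gene_card))  # -> ['drug_y']

The point of the sketch is only the data flow--profile in, tailored short list out--in place of trial and error.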

And that's just for starters. In 2025's genetically based pharmacology, you'll not only have
your pick of the old standbys--tranquilizers, antihistamines, painkillers and antibiotics, all
compounded to your personal specs--but you'll see all sorts of new capsules and tablets for
virtually every ailment and condition. These will range from mood and pleasure enhancers-
-legal and otherwise--for the pill poppers of the future to new medications for diseases
likely to be much more common in an aging population, like Alzheimer's, cardiovascular
problems and cancer.

"It will be a geriatric world, at least in wealthy countries, with at least 20% of the people 60
years or older," says Stanford chemist Carl Djerassi, synthesizer of the birth-control pill.
For that reason, he predicts, drug companies will turn from contraception to conception in
an effort to help older women have babies. As for aging men, they'll have at their disposal
libido and sex-performance boosters that will make Viagra seem like baby aspirin.

Meanwhile, genetically engineered drugs will increasingly replace the scalpel for removal
of tumors or cosmetic surgery like hair transplants. Indeed, after much hype and few
results, gene therapy is finally making major strides--although not the way doctors thought
it would. Once they hoped to cure diseases by repairing defective genes. Now it seems a lot
easier to determine what proteins the broken genes should be making and replace them
instead. Dr. Jeffrey Isner at St. Elizabeth's Medical Center in Boston has achieved
remarkable results with a protein called vascular endothelial growth factor (VEGF2) in
restoring circulation in the legs of diabetics and, more impressively, stimulating new vessel
growth in patients with severe heart disease. Says former Eli Lilly chairman Randall
Tobias: "The day will come when we regard all surgeries, except [treatment of] trauma, as
failures of the pharmaceutical industry."

Unlike traditional medications, the brave new drugs will be designed "rationally" on
computer screens, using gene information as a blueprint. VEGF2, for example, is a
synthetic gene that makes a protein that in turn stimulates new vessel growth. In a few
years, predicts William Haseltine, the biotech industry's champion optimist and CEO of
Human Genome Sciences, based in Rockville, Md., we will have genetically based drugs
for almost every serious ailment--"things we couldn't really work on well before, whether
it's osteoporosis or Alzheimer's." Nor will these drugs simply attack symptoms, as aspirin
does. "That's a chemical crutch," he says. In the new genomics, as Haseltine calls it, "it's the
human gene, the human protein, the human cell--and not the chemical--that is used as the
medicine."

The genomics may be new, but not the economics. When you take your gene card to the
pharmacy in 2025 for flu pills, bring a credit card too. Made-to-fit drugs won't be cheap.
Some of us may have to make do with two aspirins and all the liquids we can drink.

Frederic Golden, a former assistant manager of DISCOVER, is a TIME contributor who
has followed the rise of molecular biology.

What Will Happen To Alternative Medicine?
Though some therapies will become mainstream, most will go the way of snake oil and
orgone booths
by LEON JAROFF

Ginseng, ginkgo biloba and homeopathic potions have become as American as apple pie,
but will anyone still be taking them in 2025? Advocates of alternative medicine, buoyed--
and enriched--by the $30 billion Americans spend annually on unconventional therapies,
confidently predict that herbal remedies and homeopathic potions will not only flourish in
the coming decades but will also take their rightful place alongside vaccines, antibiotics,
gene therapy and the other tools of modern medicine.

Baloney. "Alternative medicine" is merely a politically correct term for what used to be
called quackery. Any alternative therapy that can be proved valid will swiftly be
incorporated into mainstream medicine. Any "medicine" that is based on myth, irrationality
and deception will eventually be rejected. "Once the public finds out what homeopathy is,"
predicts Dr. John Renner, head of the National Council for Reliable Health Information,
"once they find out that chlorophyll is necessary for plant life but not human life, they're
going to turn on these alternative groups."

Public disenchantment with homeopathy, for example, will grow when consumers of
homeopathic potions finally wise up to the fact that in many cases they are paying big
bucks for a highly diluted mixture that is essentially pure water, and that homeopathy is
based on primitive and false 19th century beliefs.
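
The arithmetic behind "essentially pure water" is worth spelling out (a standard calculation, not from the article): a popular homeopathic potency labeled 30C means thirty successive 1:100 dilutions, for a total dilution factor of

$$\left(10^{-2}\right)^{30} = 10^{-60}.$$

Since a mole of water contains only about $6 \times 10^{23}$ molecules (Avogadro's number), the chance that even a single molecule of the original ingredient survives in a typical dose is effectively zero.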

When patients discover that their "therapeutic touch" practitioner has not been
manipulating their "human energy field"--a nonexistent entity--but merely making useless
hand motions in the vicinity of their bodies, they will reject mysticism and move toward
more rational therapy. And when herbal medicine devotees become aware that any useful
ingredient in their unregulated leaves, stem and root mixtures can be isolated and made
available as regulated drugs, labeled with full information about content and proper dosage,
they will begin making fewer trips to the health-food store.

Cost is also an issue. Managed-care providers, eager to cash in on the alternative boom, are
luring subscribers by offering to cover some of these dubious treatments. But most
consumers of alternate products use conventional medicine too, and when it becomes
evident that the alternatives are not cost effective and at best produce only a placebo effect,
the HMOs will drop them in a heartbeat. Says William Jarvis, a professor of public health
at California's Loma Linda University: "Useless procedures don't add to the outcome, just
to the overhead."

While the purveyors of this voodoo medicine today point with pride to the fact that most
U.S. medical schools, influenced by research grants and public opinion, have launched
courses in alternative medicine, the result will not be what they expect. Legitimate medical
schools--and most of them are--will dispassionately dissect the alternatives and evaluate
their effectiveness. In so doing, they will breed new generations of doctors who will urge
patients to be skeptical about false claims and bogus science.

Public skepticism, in turn, will spike the guns of the friends of alternative medicine in the
U.S. Congress who have, through legislation and intimidation, harassed and weakened the
Food and Drug Administration. New laws will restore the power of the FDA not only to
ban dangerous therapies pre-emptively but also to remove patently worthless products from
health-food-store and drugstore shelves.

The coming years will also see the demise of the quack-laden Office of Alternative
Medicine, which seven years ago was foisted on the reluctant National Institutes of Health,
largely at the insistence of Tom Harkin, the otherwise sensible Senator from Iowa who
believes in the curative powers of bee pollen. Talk about getting stung. Taxpayers will be
incredulous when they become aware that after spending millions of dollars in its first
seven years "investigating" highly questionable alternative therapies, the OAM failed to
validate or--more significant--invalidate any of them (with the possible exception of
acupuncture). And they'll be furious when they recall years from now that in 1998, as a
reward from Congress for its failures, the agency was quietly elevated to a full-fledged NIH
Center and given a budgetary raise: to $50 million annually.

Charades can't persist forever. In the years to come, as conventional medicine continues to
make rapid advances and as the public becomes better informed about the deception and
outright medical ignorance of many of these hucksters, alternative medicine will be
consigned to what indeed is its rightful place: alongside snake oil, orgone booths and
laetrile in the dustbin of medical history.

TIME contributor Leon Jaroff was founding editor of DISCOVER, in which his "Skeptical
Eye" column skewered bogus science

Will Christopher Reeve Walk Again?
Spinal injury once meant a lifetime in a wheelchair. Down the road, many of the paralyzed
will be on their feet
by JEFFREY KLUGER

You can't see the tiny injury between the first and second vertebrae of Christopher Reeve's
neck, and even if you could, it wouldn't look like much. But Reeve is always aware of the
little wound. Ever since he sustained it in a 1995 riding accident, the actor best known for
playing Superman has had virtually no movement or sensation below the neck and has been
largely dependent on a ventilator to breathe.

Reeve, however, doesn't plan to stay that way. On Sept. 25, 2002, his 50th birthday, he
hopes to rise to his feet, lift a glass and toast the people who have helped him through the
past few years. "I wouldn't bet the farm on it," he says. "But there's a chance it might
happen."

Remarkably, there are other people--sober, scientific people--who agree. For centuries,
doctors have considered the spinal cord an impossible thing to heal. Choked by proteins
that block regeneration, denied other proteins that foster growth, dammed up by scar tissue
at the site of an injury, a spinal cord that gets hurt tends to stay hurt. But for more than a
decade, researchers have been learning to overcome these problems, figuring out ways to
heal damaged cords and switch the power back on in spines long since gone dead. Even if
Reeve and others don't walk by 2002, there is no limit to what may happen in the decades
that follow. Says clinical neurologist Ira Black of the Robert Wood Johnson Medical
School in Piscataway, N.J.: "There's been a revolution in our view of the spinal cord and its
potential for recovery."

Much of what is behind the new hope is a better understanding of why the cord doesn't heal
itself. In 1988 neuroscientist Martin Schwab of the University of Zurich isolated substances
in the central nervous system whose sole purpose appears to be to block growth. In a
healthy spine, the chemicals establish boundaries that regulate cell growth. After an injury,
they do little but harm. In recent years, however, Schwab has developed antibodies that
neutralize the growth blockers, allowing regeneration to occur.

Elsewhere, researchers are looking at ways to hasten the healing permitted by these
antibodies. Peripheral nerves outside the cord heal themselves all the time, thanks to
regenerative bodies called Schwann cells. Scientists at the Salk Institute in San Diego and
at the Miami Project to Cure Paralysis at the University of Miami are experimenting with
harvesting Schwann cells and transplanting them to the site of a spinal injury, where they
can serve as a bridge across the wound.

Whether growing nerves will reconnect properly--ensuring that a signal sent to a leg doesn't
wind up at an arm--has always been a cause for concern. But there may be little reason to
worry. Researchers now believe that advancing nerve endings carry chemical markers that
guide them straight to receptors at their destination. "It's as if the body wants to be whole,"
says Reeve.

Skeptics warn against too much giddy hope that damaged spines will become whole
anytime soon. Treatments may be many years off, they caution, and only incrementally
helpful--restoring wrist motion to a person who has none, for example. Most researchers,
though, are more optimistic. Over the course of 10 years, they say, the riddles of the cord
have been solved. The question now is not what the treatments for an injured spine should
be, but how best to implement them. At hospitals such as the Karolinska Institute in
Sweden and the University of Florida, human trials are already getting under way. Studies
at other hospitals are sure to follow. Says Black: "The astounding progress over the past
decade dwarfs the progress of the past 5,000 years." Reeve may not stand up the day he
turns 50, but the real possibility does exist that he will spend a future birthday on his feet.

Senior writer Jeffrey Kluger's latest book, about moons, is called "Journey Beyond Selene"

Can I Grow a New Brain?
Probably not, but the latest research in neural stem cells suggests new ways to repair
damage and even re-create whole parts of the brain
by PAUL HOFFMAN

Long before neuroscientists took the first tentative steps toward brain-tissue transplants (let
alone dared to think about whole-brain transplants), mischievous philosophers were
plumbing the consequences of such 21st century surgery. "In a brain-transplant operation,
is it better to be the donor or the recipient?" these wags asked. To put it another way, if you
and Tom manage to swap brains, who is now the real you? The man with your brain
attached to Tom's body or the man with Tom's brain joined to your body?

The real you, it can be argued, is the man with Tom's body; he's the one who knows the
most intimate and embarrassing details of your life. The man with your former body may
now have a bum knee, but he won't know why (that misguided dive you took playing touch
football to impress your girlfriend in 1971). Summing up his own theoretical musings about
the wisdom of a brain swap, Tufts University philosopher Daniel Dennett concluded that it
was not an even exchange. "It was clear that my current body and I could part company, but
not likely that I could be separated from my brain," he wrote. "The rule of thumb [is] that in
a brain-transplant operation, one want[s] to be the donor, not the recipient."

Whole-brain transplants are still science fiction. "I never like to say that something's
impossible," says Dr. Evan Snyder, a neuroscientist at Harvard Medical School and
Children's Hospital in Boston. "I've been burned too many times by categorically ruling
something out. And yet I can't imagine that 20 years from now human-brain transplants will
be possible. The connections required are just too complex; they number in the millions.
But the future of brain-cell transplants--that's another matter."

So far, medical science has had only mixed results with brain-cell transplants. Take the
treatment of Parkinson's disease, for example, a condition that is gradually depriving more
than 1 million Americans of their ability to move and speak. The disease is caused by the
slow deterioration of brain cells that produce dopamine, a chemical essential for the
transmission of messages from the brain to the rest of the body. A decade ago, Swedish
researchers started implanting dopamine-producing cells from human fetuses into the brains
of Parkinson's patients. The treatment improved the mobility of many of the patients but
usually only partly and in some cases not at all. Even if the treatment becomes more
successful (and the ethically charged issue of mining aborted fetuses is overcome), it can
hardly become routine. For each patient, cells from as many as 15 fetuses must be
harvested and transplanted almost immediately.

Yet as the Decade of the Brain proclaimed by President George Bush draws to a close,
neuroscientists are increasingly sanguine that in George Jr.'s lifetime, brain-cell transplants
may reverse, if not cure, a host of neurological diseases such as Parkinson's and
Alzheimer's, as well as brain damage caused by strokes and head injuries. Even a year ago,
such a sweeping claim might have been dismissed as nonsense. But that was before last
fall's discovery that the fetal human brain contains master cells (called neural stem cells)
that can grow into any kind of brain cell. Snyder extracted these cells and "mass-produced"
them in the lab. His hope is that the cells, when injected into a damaged adult brain, will
turn themselves into replacements for cells that are dead or diseased.

When most physicians got their training, they were taught that the adult brain is rigid, that
its nerve cells, or neurons, could never regenerate themselves. If you nick your finger with
a knife, the cut will heal in a few days because your skin has the ability to generate new
cells. But when something bad happens to the brain, it doesn't repair itself. Why's that?
"The brain is not plastic," says Snyder. "It doesn't make new cells. You are born with more
brain cells than you need, and you lose them progressively and get dumber and dumber as
you get older--or so went the conventional wisdom."

The path to overturning the dogma of the rigid brain was circuitous. In the early 1960s
biologists discovered that new cells were being made in two areas of the adult rat brain, but
the discovery was regarded as an unimportant peculiarity of the rodent brain and quickly
forgotten. In the mid-1980s, Fernando Nottebohm of Rockefeller University brought new
respect to the term birdbrain by demonstrating that the brain of an adult canary has the
astonishing ability to regenerate new nerve cells at a rate of up to 20,000 a day. Other
researchers reported similar regenerative ability in fish and reptiles, but there was still no
evidence that evolution had passed on this ability to the human brain. Indeed, most
neuroscientists wouldn't even entertain the possibility of new cell growth in the human
brain on the grounds that any additional cells would disrupt the brain's complex wiring.

Snyder was not so sure. "I'm an optimist. Why would evolution have been parsimonious in
depriving the human brain of the power of self-healing? I was a pediatrician before I
became a neuroscientist. As a pediatrician, I was impressed by how much plasticity there
really must be in the human brain. Pediatricians know that damage to the infant brain
doesn't have the same outcome as damage to the adult brain. If a newborn has a stroke,
even in the cortex [an area important to higher intellectual functions], he or she may sustain
it and develop quite normally. The exact same injury would put an adult in a wheelchair. I
wondered if the source of the brain's apparent plasticity was at the level of the single cell."

Different kinds of blood cells, red and white, come from a single kind of stem cell in bone
marrow. These chameleon-like stem cells transform themselves into whatever kind of blood
cells the body needs. The skin and liver have their own stem cells. "Maybe there is a brain
stem cell, a mother cell that gives rise to all types of brain cell," Snyder says he wondered.
"I wanted to find this cell and harness it to repair injured brains."

In 1992 Snyder announced in print that his lab had removed stemlike cells from mouse
brains and had grown them in a culture. Snyder then teamed up with Dr. Jeff Macklis, a
colleague at Harvard Medical School who had engineered a strain of mouse whose neurons
died off in a tiny region of the cortex where cells were not known to regenerate. Snyder
injected the stem cells into the mice. Like heat-seeking missiles, the cells rapidly sought out
the injured part of the cortex and transformed themselves into healthy neurons. "That's the
beauty of stem cells," says Snyder. "You don't have to find the injury--the stem cells do it
for you. They instinctively home in on the damage even from great distances." In another
experiment, Snyder used stem cells to cure mice of a disease that resembled multiple
sclerosis. And in his latest, unpublished work, Snyder introduced massive brain injuries in
mice--including strokes to the cortex--and cured them with stem cells.

"Where was this all leading?" Snyder says he asked himself many times. "In 20 years
would I have done nothing more than create a thriving colony of healthy, smart mice that
are free of brain disease? You can't take it for granted that every medical advance in mice
will also benefit people." But the evidence started mounting. Over the past three years,
researchers have discovered that brain cells regenerate in primate-like tree shrews,
marmoset monkeys and rhesus monkeys, all of which are closer to us on the evolutionary
scale than are mice (except in Kansas). The real payoff came late last year, when Fred Gage
at the Salk Institute and his colleagues in Sweden reported that nerve cells are regenerated
in the human hippocampus (a portion of the brain related to memory and learning).

Gage's finding--coupled with Snyder's report that same month of stem cells in the fetal
human brain--has stood neuroscience on its head, so to speak. As has the latest finding,
announced last month by researchers at Princeton, that adult macaque monkeys are
constantly growing new cells in the highest and most complex area of the brain, the
cerebral cortex. Snyder is now flush with confidence that neuroscience will ultimately cure
many, if not all, diseases of the human brain. "By the year 2020 I hope we will have an
active way of treating damaged brains. If we can further understand brain-cell regeneration
and harness the process intelligently, then re-creating the brain, or at least parts of the brain,
may lie within our grasp. Obviously there are lots of hurdles to overcome. But if we can
capture and bottle the brain's now recognized plasticity, we can cure all sorts of things,
maybe even damaged psyches."

The idea of implanting brain stem cells, while not as dramatic as swapping whole brains,
also raises intriguing philosophical questions. "Sometimes at seminars when I talk about
my work," says Snyder, "somebody will ask me whether the introduction of these stem
cells will alter memory." Do the newly generated cells distort or erase old memories? Or
will the transplanted stem cells bring with them memories of their upbringing in a Petri
dish?

"All this is meta-neuroscience," says Snyder, laughing. "But I tend to think that the cells
will take their cue from the host that houses them" rather than remembering their past lives
like so many cellular Shirley MacLaines. So, in the case of brain-cell implants, it would
seem, it is better to be the recipient than the donor.

Paul Hoffman is president of Encyclopaedia Britannica and author, most recently, of "The
Man Who Loved Only Numbers"

Will There Be Any Wilderness Left?
Yes, there will be a few untamed spots, and if we're lucky, very few people will go and visit
them
by JON KRAKAUER

Bent like an arthritic thumb high above the antarctic ice cap, the mountain's uppermost
point was so small and precarious that it could accommodate only one person at a time. So I
shivered on a ledge in the subzero breeze and waited for my partners, first Alex and then
Conrad, to climb the final 20 ft. to the summit. We'd been on the move for 14 hours. My
back hurt, and I had lost all feeling in my toes. But as my eyes wandered across the frozen
vastness of Queen Maud Land, a sense of profound contentment radiated from somewhere
beneath my solar plexus. There was nowhere on earth that I would have preferred to be.

The mountain, called the Troll Castle, is an unearthly fin of weathered granite that pokes a
vertical mile from its icebound surroundings. Only a handful of people knew, or cared, that
it existed; fewer still had actually laid eyes on the peak. Alex, Conrad and I were the first
who had gone to the trouble to climb it, and the view from the top was ample reward.
Countless other rock towers, equally strange and beautiful, rise from the ice in all
directions, resembling gargantuan sailboats plying a chalk-white sea.

For a month we'd been climbing and exploring in this corner of Antarctica. To visit such a
wilderness in the waning moments of the 20th century struck us as a rare and fleeting
privilege--an incredible stroke of good luck. Keeping this firmly in mind, we went to
extraordinary lengths to minimize our impact on the place so that others would find it in a
similarly pristine condition. When we departed, we even packed out our accumulated feces.
I couldn't help thinking, however, that 100 years in the future, or even 50, a genuine
wilderness experience will probably be hard to come by in Queen Maud Land. Or
anywhere else, for that matter.

A few remote places like Antarctica still exist as true wilderness: the Queen Elizabeth
Islands in the Canadian Arctic, pockets of the Mato Grosso bush in central Brazil, bits of
the Tibetan Plateau. Much of this wilderness is so huge and empty and emphatically
inhospitable that it is difficult to picture its ever succumbing to the crush of civilization.
But the same could have been said of the Grand Canyon in 1869, when John Wesley
Powell braved murderous rapids and myriad other hazards to become the first man to
navigate the Colorado River.

Powell could scarcely have imagined that a century after his feat, more than 2 million
tourists would visit the Grand Canyon annually--among them families with small children
who would float down the once fearsome Colorado as a summer lark. During the past 30
years, annual visitation to the Grand Canyon has ballooned from 2 million to more than 5
million. If you want to paddle down the Grand Canyon on your own, without hiring a
commercial outfitter, the waiting list for boating permits is now so long that you won't be
able to launch your raft until 2012 at the soonest.

The destiny of wild places in the coming century can be read in the numbers. The 6 billion
people living on the planet are projected to swell to 9 billion by 2050. The pressure to
exploit the world's remaining wilderness for natural resources, food and human habitation
will become overwhelming. But bulldozers and chain saws aren't the only threats. A new
menace has emerged from the least likely quarter; in many cases, the very people who care
most passionately about empty places are hastening their demise.

Not so many decades ago, those advocating the preservation of wilderness constituted a
small minority and were considered to be on the radical fringe. Such pastimes as
mountaineering and backpacking were thought to be the exclusive domain of outcasts,
anarchists and social misfits. But there has been a broad-based shift in public opinion of
late. Polls show that a majority of Americans now place a high priority on protecting the
environment.

Backcountry activities have become extremely trendy, a fad that has been eagerly abetted
by Madison Avenue. These days it's impossible to turn on a television or open a magazine
without being assaulted by a barrage of ads that use skillfully packaged images of
wilderness activities to rev the engine of consumerism. In 1851, when Henry David
Thoreau declared, "In wildness is the preservation of the world," he could not have foreseen
that wilderness, as an idea, would one day be used to sell everything from SUVs to soda pop.
Disconcerting though this development may be, it happens to come with a substantial
upside; because wilderness is now esteemed as something precious and/or fashionable, wild
places are more often being rescued from commercial exploitation. But if the wilderness
fad has made it easier to protect wild country from development, it has made it harder to
protect wild country from the exploding ranks of wilderness enthusiasts. Increasingly,
places once considered enduringly back of beyond are now crowded with solitude seekers.

As more and more people flock to the backcountry, habitat for native plants and wildlife is
inevitably compromised. To safeguard natural habitat, it becomes necessary for government
agencies to exercise intervention and control. Inevitably, and justifiably, strict limits are
placed on backcountry use. Camping, hiking, boating, hunting, fishing and climbing are
restricted. Campfires are forbidden. Dogs must be leashed or are simply banished
altogether. In the mountains above the Colorado city where I make my home, dog owners
are now required by law to collect their pets' excrement and carry it out. In a growing
number of places, as previously mentioned, responsible behavior now dictates that humans
carry out their own excrement as well.

Of course, when wilderness is so intensely managed, it ceases to be wild. It becomes a
toothless simulacrum. It becomes a park. On an increasingly crowded planet there is
probably no alternative. It is simply an unhappy fact of life on the cusp of the 21st century.

As wilderness dwindles and disappears, more is at stake than the fate of endangered
species. Other, less tangible things stand to be lost as well. Empty places have long served
as a repository for a host of complicated yearnings and desires. As an antidote to the
alienation and pervasive softness that plague modern society, there is no substitute for a trip
to an untrammeled patch of backcountry, with its attendant wonders, privation and physical
trials.

In 1945 the influential forester Aldo Leopold wrote, "I am glad I shall never be young
without wild country to be young in. Of what avail are 40 freedoms without a blank spot on
the map?" Fifty-four years later, it's not so easy to find wild country to be young in. For
now, however, a few tracts of wilderness still endure. We should be grateful for this and
appreciate them as long as we can.

Jon Krakauer is the author of "Into Thin Air" and "Into the Wild."

Will We Still Eat Meat?
Maybe not, if we wake up to what the mass production of animal flesh is doing to our
health--and the planet's
by ED AYRES

When Julius Caesar made his triumphal entrance into Rome in 45 B.C., he celebrated by
giving a feast at which thousands of guests gorged on poultry, seafood and game. Similar
celebrations featuring exorbitant consumption of animal flesh have marked human
victories--in war, sport, politics and commerce--since our species learned to control fire.
Throughout the developing world today, one of the first things people do as they climb out
of poverty is to shift from their peasant diet of mainly grains and beans to one that is rich in
pork or beef. Since 1950, per capita consumption of meat around the globe has more than
doubled.

Meat, it seems, is not just food but reward as well. But in the coming century, that will
change. Much as we have awakened to the full economic and social costs of cigarettes, we
will find we can no longer subsidize or ignore the costs of mass-producing cattle, poultry,
pigs, sheep and fish to feed our growing population. These costs include hugely inefficient
use of freshwater and land, heavy pollution from livestock feces, rising rates of heart
disease and other degenerative illnesses, and spreading destruction of the forests on which
much of our planet's life depends.

First, consider the impact on supplies of freshwater. To produce 1 lb. of feedlot beef
requires 7 lbs. of feed grain, which takes 7,000 lbs. of water to grow. Pass up one
hamburger, and you'll save as much water as you would by taking 40 showers with a low-
flow nozzle. Yet in the U.S., 70% of all the wheat, corn and other grain produced goes to
feeding herds of livestock. Around the world, as more water is diverted to raising pigs and
chickens instead of producing crops for direct consumption, millions of wells are going dry.
India, China, North Africa and the U.S. are all running freshwater deficits, pumping more
from their aquifers than rain can replenish. As populations in water-scarce regions continue
to expand, governments will inevitably act to cut these deficits by shifting water to grow
food, not feed. The new policies will raise the price of meat to levels unaffordable for any
but the rich.
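
The arithmetic behind the hamburger comparison checks out. A back-of-the-envelope sketch, assuming a quarter-pound patty (the article doesn't specify the size) and taking water at about 8.34 lbs. per gallon:

\[
0.25\ \text{lb beef}\times\frac{7\ \text{lb grain}}{1\ \text{lb beef}}\times\frac{1{,}000\ \text{lb water}}{1\ \text{lb grain}}=1{,}750\ \text{lb water}\approx 210\ \text{gallons}
\]

Spread across 40 showers, that comes to roughly 5 gallons apiece--about what a low-flow nozzle saves per shower compared with a standard one--so the figures hang together.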

That prospect will doubtless provoke protests that direct consumption of grain can't provide
the same protein that meat provides. Indeed, it can't. But nutritionists will attest that most
people in the richest countries don't need nearly as much protein as we're currently getting
from meat, and there are plenty of vegetable sources--including the grains now squandered
on feed--that can provide the protein we need.

Unfortunately, this isn't just a matter of productive capacity. Mass production of meat has
also become a staggering source of pollution. Maybe cow pies were once just a pastoral
joke, but in recent years livestock waste has been implicated in massive fish kills and
outbreaks of such diseases as Pfiesteria, which causes memory loss, confusion and acute
skin burning in people exposed to contaminated water. In the U.S., livestock now produce
130 times as much waste as people do. Just one hog farm in Utah, for example, produces
more sewage than the city of Los Angeles. These megafarms are proliferating, and in
populous areas their waste is tainting drinking water. In more pristine regions, from
Indonesia to the Amazon, tropical rain forest is being burned down to make room for more
and more cattle. Agriculture is the world's biggest cause of deforestation, and increasing
demand for meat is the biggest force in the expansion of agriculture.

What has proved an unsustainable burden to the life of the planet is also proving
unsustainable for the planet's dominant species. In China a recent shift to meat-heavy diets
has been linked to increases in obesity, cardiovascular disease, breast cancer and colorectal
cancer. U.S. and World Health Organization researchers have announced similar findings
for other parts of the world. And then there are the growing concerns about what happens to
people who eat the flesh of animals that have been pumped full of genetically modified
organisms, hormones and antibiotics.

These concerns may seem counterintuitive. We evolved as hunter-gatherers and ate meat
for a hundred millenniums before modern times. It's natural for us to eat meat, one might
say. But today's factory-raised, transgenic, chemical-laden livestock are a far cry from the
wild animals our ancestors hunted. When we cleverly shifted from wildland hunting and
gathering to systematic herding and farming, we changed the natural balances irrevocably.
The shift enabled us to produce food surpluses, but the surpluses also allowed us to
reproduce prodigiously. When we did, it became only a matter of time before we could no
longer have the large area of wildland, per individual, that is necessary to sustain a top-
predator species.

By covering more and more of the planet with our cities, farms and waste, we have
jeopardized other top predators that need space as well. Tigers and panthers are being
squeezed out and may not last the coming century. We, at least, have the flexibility--the
omnivorous stomach and creative brain--to adapt. We can do it by moving down the food
chain: eating foods that use less water and land, and that pollute far less, than cows and pigs
do. In the long run, we can lose our memory of eating animals, and we will discover the
intrinsic satisfactions of a diverse plant-based diet, as millions of people already have.

I'm not predicting the end of all meat eating. Decades from now, cattle will still be raised,
perhaps in patches of natural rangeland, for people inclined to eat and able to afford a
porterhouse, while others will make exceptions in ceremonial meals on special days like
Thanksgiving, which link us ritually to our evolutionary and cultural past. But the era of
mass-produced animal flesh, and its unsustainable costs to human and environmental
health, should be over before the next century is out.

Ed Ayres is editorial director of the Worldwatch Institute and author of "God's Last Offer:
Negotiating for a Sustainable Future."

Can I Replace My Body?
Bone and Cartilage
TODAY: Injection of bone growth factors into jaw and other fracture areas. Researchers
can also grow cartilage in the lab in thin sheets, but it's too weak to be functional in the
body
TOMORROW: Coaxing the body to grow bone and cartilage on biodegradable scaffolds
infused with a mix of stem cells and growth factors
Blood Vessels
TODAY: Grown in the lab from pig cells and synthetic-polymer matrix
TOMORROW: Grown in the lab from stem or precursor cells to avoid rejection by the
immune system
Skin
TODAY: Sheets grown in the lab from human cells and synthetic-polymer matrix
TOMORROW: Grown by the body from stem or precursor cells and growth factors
Penis
TODAY: Penile implants and medication to maintain erection. Surgery to reattach a
severed penis; skin grafts to recover urinary, but not sexual, function if penis is not
recovered
TOMORROW: Genetically engineered tissue grown in the lab and attached for final
growth to form fully functional penis
Limbs
TODAY: Prosthetics wired to peripheral nervous system
TOMORROW: Prosthetics wired directly to motor portions of the brain to improve control
and simulate the sensations of touch, pain, etc.
Nerves
TODAY: Grown in the lab from pig cells and synthetic-polymer matrix
TOMORROW: Regenerated from stem or precursor cells in the body
Organs
TODAY: Small slivers of liver tissue can be grown in the lab from one of the many types
of liver cells, but they are not yet ready for transplant
TOMORROW: Heart, liver, kidneys grown from stem cells in vitro and transplanted into
the body
Heart
TODAY: Bypasses, angioplasty and transplants to keep blood flowing to the heart muscle.
Doctors are beginning to use gene therapy to grow new blood vessels
TOMORROW: Growing functional patches of heart muscle or coaxing existing heart-
muscle cells to repair themselves
Ears
TODAY: Cochlear implants to replace damaged inner ear
TOMORROW: Implants that can be adjusted to pick up a wider range of frequencies at
longer distances
Breast
TODAY: Breasts are reconstructed with saline sacs or with living tissue, using fat and
muscle from the back, buttocks or abdomen
TOMORROW: Breasts may be grown in the lab from a patient's own fat cells and infused
back through keyhole slits in the chest
Eyes
TODAY: Laser surgery or implants to correct near- and farsightedness
TOMORROW: Permanent lens implants to correct vision while leaving the cornea intact

What New Things Are Going to Kill Me?
As we make headway against the old diseases, the ticking time bomb in the next century
will be the new microbes--natural and man-made
by RICHARD PRESTON

Remember in the movie Aliens when Hudson asked, "Is this gonna be a stand-up fight, sir,
or another bug hunt?" Well, the 21st century is going to be one hell of a bug hunt. There's
no doubt that eerie new infectious diseases will appear, and the struggles against some of
them will make the fight against the AIDS virus look like the opening battle of a war. Of
course, by then there will probably be a vaccine for AIDS, and the shot will cost a few
dollars or be given for free.

Today new viruses are coming out of nature and "discovering" the human species, while in
hospitals and in jungle clinics exceedingly powerful mutant bacteria are emerging that can't
be treated with antibiotics. In the past decade, at least 50 new viruses have appeared,
including Ebola Ivory Coast, Andes virus, hepatitis G, Fakeeh, Pirital, Whitewater Arroyo,
Hendra virus, Black Lagoon virus, Nipah and Oscar virus. This summer West Nile virus
showed up for the first time in the western hemisphere, when it was discovered in New
York City.

Viruses are moving into the human species because there are more of us all the time. From
a virus' point of view, we look like a free lunch that's getting bigger. My grandfather was
born in 1899, on the eve of a new century, when there were 1.5 billion people on earth. He
died in 1995, and by then there were almost 6 billion people. Thus in one lifetime the
population quadrupled, and it's heading for 9 or 10 billion. In nature, when populations
soar--and become densely packed--viral diseases tend to break out; then the population
drops. This is nature's population-control mechanism. It happens with rodents, insects and
even plants. There is no reason to think the human race is exempt from the laws of nature.

Giving these laws an extra push will be the rise of tropical megacities--huge, densely
packed cities in less developed nations. A U.N. study predicts that by the year 2015, there
will be 26 extremely big cities on the planet, and 22 of them will be in less developed
regions. The megacities will include Bombay (26 million people by 2015), Lagos (24
million), Dhaka (19 million) and Karachi (19 million). By 2030, almost 60% of the world's
people will live in urban areas. By then, some megacities could have 30 million or more
people. The population of California today is 35 million. Take all of California, cram those
people into one city, remove most doctors and medical care, take away basic sanitation and
hygiene, and what you have is a ticking biological time bomb. Now make eight or 10 such
bombs and plant them around the world.

Now wire the bombs together. People travel rapidly by airplane, carrying diseases with
them as they fly. The human species has become a biological Internet with fast connections.
The bionet will only get faster in the next century--that is, more people will travel by air
more often, increasing the speed at which diseases move. If a tropical megacity gets hit
with a new virus, New York City and Los Angeles will see it days or weeks later.

Then there are the biological weapons. The 20th century saw the creation of great weapons
based on the principles of nuclear physics; the 21st century will see great weapons based on
the knowledge of DNA and the genetic code. During the 1980s, the Soviet Union used
rudimentary genetic engineering to create incurable strains of Black Death (bubonic
plague) that were resistant to drugs. This biotech Black Death was loaded into missile
warheads aimed at the U.S. As biotechnology becomes more supple and powerful and as
the genetic code of more organisms is unraveled, biologists will learn how to mix genes of
different bugs to create unnatural strains that can be turned into deadly, effective
weapons.

Scientists have found a type of bacterium that is virtually indestructible. It's called
Deinococcus radiodurans ("terrible berry that survives radiation"). This bug can live in a
blast of gamma rays that is the equivalent of thousands of lethal human doses--radiation so
strong it cracks glass. Scientists have found "dead" radiodurans cells in Antarctica that
have baked in UV light for 100 years. Yet when placed in a nutrient bath, the bug's DNA
reassembles itself and the organism proliferates. If radiodurans genes could be put into anthrax, they
might produce an anthrax that's virtually impossible to kill. From a bioweaponeer's point of
view, the future is bright.

Biological weapons are a disgrace to biology. Most biologists haven't wanted to talk or
even think about them. For years leading U.S. biologists were assuring themselves and the
public that bioweapons don't work and aren't anything to worry about. It was a naive dream
from the childhood of biology. The physicists lost their innocence when the first nuclear
bomb went off in 1945. The biologists will lose their innocence when the first biological
weapon spreads through the human species.

Yet the 20th century lived with the nuclear bomb, and there was great economic and
scientific progress and much human happiness. The same can be true in the next century.
Our tools for defending against new diseases are improving all the time. Vaccines are
getting better. Drugs to fight bugs are advancing. And new devices are coming that will
identify an infectious agent in seconds.

Our greatest weapon against the bugs will always be our mind. Dr. Jeffrey Koplan, director
of the Centers for Disease Control and Prevention, predicts that in the end, the fight will
come down to the same old sleuthing methods that disease hunters have always used to find
bugs and stop them. "Shoe-leather epidemiology" is what Koplan calls it. "You wear out
your shoes investigating an outbreak," he says. "You go around identifying the source of
the disease and figuring out how it's being spread, and then you remove the source. Even if
it's Vibram-soled epidemiology, we'll do it."

No matter how great our technology, we'll still have to go mano a mano with the microbes.
We may not completely win the 21st century bug hunt, but I am confident that we won't
lose it.

Richard Preston, best-selling author of "The Hot Zone" and "The Cobra Event," is working on a
book about microscopic life forms.

Can We Make Garbage Disappear?
Through the magic of recycling and modern alchemy, we will move swiftly toward a world
without waste
by IVAN AMATO

Whoever said "waste not, want not" hasn't had much influence on 276 million Americans.
In 1997 we gave a collective heave-ho to more than 430 billion lbs. of garbage. That means
each man, woman and child tossed out an average of nearly 1,600 lbs. of banana peels,
Cheerios boxes, gum wrappers, Coke cans, ratty sofas, TIME magazines, car batteries,
disposable diapers, yard trimmings, junk mail, worn-out Nikes--plus whatever else goes
into your trash cans. An equivalent weight of water could fill 68,000 Olympic-size pools.
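
The per-person figure is simple division of the article's own totals--a quick check with no outside assumptions:

\[
\frac{430\times10^{9}\ \text{lbs}}{276\times10^{6}\ \text{people}}\approx 1{,}560\ \text{lbs per person per year,}
\]

which rounds to the "nearly 1,600 lbs." cited above.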

And that's just the relatively benign municipal solid waste. Each year American industries
belch, pump and dump more than 2.5 billion lbs. of really nasty stuff--like lead compounds,
chromium, ammonia and organic solvents--into the air, water and ground. That's about 400
Olympic poolfuls of toxic waste.

The really bad news is that most of the planet's 6 billion people are just beginning to follow
in the trash-filled footsteps of the U.S. and the rest of the developed world. "Either we need
to control ourselves or nature will," says Gary Liss of Loomis, Calif., a veteran of recycling
and solid-waste programs who advises clients aiming to reduce landfill deposits. As he sees
it, garbage--maybe every last pound of it--needs to become a vile thing of the past.

That may seem impossible, but it's not unprecedented. In nature, Liss points out, there is no
such thing as waste. What dies or is discarded from one part of an ecosystem nourishes
another part. Liss says humanity can emulate nature's garbage-free ways, but it will require
innovative technology and a big change in attitude.

We can get a glimpse of a less profligate future in Kalundborg, Denmark. There, an unusual
place called an "eco-industrial park" shows how much can be gained by recycling and
resource sharing. Within the park, a power company, a pharmaceuticals firm, a wallboard
producer and an oil refinery share in the production and use of steam, gas and cooling
water. Excess heat warms nearby homes and agricultural greenhouses. One company's
waste becomes another's resource. The power plant, for example, sells the sulfur dioxide it
scrubs from its smokestacks to the wallboard company, which uses the compound as a raw
material. Dozens of these eco-industrial parks are being developed all over the world.

Biotechnology is giving us additional tools to cope with waste--and turn it to our
advantage. We now have microbes that can take toxic substances in contaminated soil or
sludge--including organic solvents and industrial oils--and convert them into harmless by-
products. Soon we may be using genetic engineering to create what Reid Lifset, editor of
the Journal of Industrial Ecology, calls "designer waste streams." Consider all that stalk, or
stover, that every corn plant grows along with its kernels. Scientists at Monsanto and
Heartland Fiber are working toward engineering corn plants with the kind of fiber content
that paper companies would find attractive. So long as the genetic tinkering poses no
ecological threat, that approach could tap into a huge stream of agricultural waste, turning
some of it into an industrial ingredient.

In consumer markets, recycling has already spawned an army of alchemists. Jackets are
being made from discarded plastic bottles, briefcases from worn-out tires and belts from
beer-bottle caps. Even though the U.S. has barely begun to get serious about recycling,
about 25% of its 430 billion lbs. of municipal garbage is now salvaged, at least temporarily,
for some sort of second life.

Recycling will gain momentum as we develop materials that are easier to reuse. For
example, Jesse Ausubel, director of the Program for the Human Environment at Rockefeller
University, predicts that architects will increasingly rely on new types of foamed glass that
can be made unusually strong but still lightweight. Glass is a very recyclable material made
from sand, and it can be crushed back essentially into sand. Ausubel thinks we could see
foamed glass replace much of the concrete in today's buildings.

There are limits, of course, to how many lives you can give a pile of debris. In the long run,
we have to reduce the amount of material we use in the first place. Some progress is being
made--aluminum cans and plastic soda bottles have become thinner over the years, for
example--but more sweeping reductions will require a whole new kind of manufacturing
process.

That, says Lifset, is where nanotechnology plays a role. In this emerging field, which
employs just about every kind of scientific and engineering discipline, researchers expect to
create products by building them from scratch, atom by atom, molecule by molecule. This
bottom-up nanotechnological way of making things differs from the traditional drilling,
sawing, etching, milling and other fabrication methods that create so much waste along the
way.

Researchers have made headway toward molecule-size transistors and wires and even
batteries thousands of times as small as the period at the end of this sentence. These
laboratory feats make talk of sugar cube-size computers less speculative than it was a few
years ago. Says Lifset: "A lot of the consumer goods and industrial equipment could
become dramatically smaller when nanotechnology comes online. That, plus more efficient
recovery of the discarded goods, ought to translate into huge reductions in waste."

But technology is not enough. Just as critical are changes in attitudes and lifestyles. Brad
Allenby, AT&T's vice president for environment, safety and health, believes our move
from the industrial age to the information age could help enormously. At last count, he
says, 29% of AT&T's management force telecommuted, meaning less reliance on cars.
This, Allenby speculates, could be part of something bigger--a shift in our view of what
enhances our quality of life. Maybe we'll put less value on things that use lots of materials--
like three cars in the family driveway--and more on things that don't swallow up resources--
like telecommuting and surfing the Internet. Maybe downloading collections of music from
the Web will reduce the demand for CD cases. And while visions of a "paperless office"
have proved wildly wrong so far, we still have an opportunity to use computers to cut
consumption of paper and the trees it comes from.

Allenby thinks of such trends as "dematerialization." The deeper dematerialization goes in
society, the less stuff there will be to discard. What's more, as society becomes more
information-rich, the easier it will be to find uses for the diminishing amount of discarded
materials. Maybe, with the help of brokering services on the Internet, we can generalize the
principle that governs garage sales: One person's garbage is another's treasure. When that
attitude goes global, the human beings of the third millennium may be able to look back on
their former garbage-producing ways as a forgivable error of their youth as a species.

Ivan Amato, a free-lance magazine and radio reporter, is the author of "Stuff: The Materials
the World Is Made Of."

What Will Be the Catch of the Day?
Human appetites have devastated the marine world, but at last we're searching for ways to
protect the oceans' bounty
by PETER BENCHLEY

If we continue, at our present rate, to strip-mine the sea of its living resources, 25 years
from now we'll be lucky to find a seafood menu that offers a rock sandwich with a side
order of kelp. Consider the swordfish: angler's prize, gourmet's delight, fisherman's
livelihood. In the mid-'60s, when I was in my mid-20s, I caught a swordfish off Long
Island. I wasn't trying to; it took bait meant for sharks. The fish was weirdly, atypically
lethargic. It didn't struggle much, didn't leap at all, just tugged for a while, then gave up.

It died quietly, and I watched (with some, but not enough, regret) its gleaming gun-metal
skin fade swiftly to death's dull gray. It wasn't a particularly big swordfish; it weighed only
247 lbs. A big swordfish would weigh more than half a ton.

Had I been able to know back then that what I had just caught was one of the last stragglers
of a vanishing species--that within 35 years a 247-lb. Atlantic broadbill swordfish would be
as rare as a tyrannosaur--I would have set it free, administered CPR or, if all attempts at
resuscitation had failed, I would at least have had the carcass of the mighty fish gilded and
sent to the Smithsonian.

Today the average Atlantic swordfish caught weighs 90 lbs., and the figure drops by a
pound or two every year. Because swordfish don't breed until the female is five years old
and weighs 150 lbs., we're killing and eating the teenagers before they can reproduce. And
though the U.S. is trying, at last, to lead a campaign to stop the slaughter, the effort is too
little, too late. Swordfish, like tuna and the other pelagic (open-ocean) fish, roam far from
American jurisdiction. There have been reliable reports of commercial fishermen in the
Mediterranean routinely landing swordfish weighing between 10 and 15 lbs.--the babies,
less than a year old.

Granted, swordfish are an especially vulnerable target, being prized as both game fish and
food fish. But they're hardly the only victims of the current global lunacy, of which the
motto seems to be: if it swims, hook it, stab it, poison it or blow it up.

Consider too the sharks: apex predators, lords of the food chain, inspiration for scary
stories. A few years ago, I dived off the coast of Costa Rica in a marine preserve where,
supposedly, all life was protected. Every day, looking down, I saw the sea bottom carpeted
with the corpses of whitetip reef sharks, grotesquely stripped of their fins by poachers who
had slashed them off to sell to the soup markets of Asia and had cast the living animals
back into the sea to die. Around the world, the numbers of some shark species have
declined as much as 80%. Some may already be practically extinct; the survivors in the
current generation may be too few to replace themselves.

Modern technology has given us the tools to extinguish entire fish populations, and because
man is a can-do critter, that's what we're doing. After climbing steadily for the past 50
years, the worldwide catch of seafood has begun to drop. We're fishing out the oceans, at
the same time that the need for seafood is soaring. Of the 6 billion of us on the planet, 1
billion rely on fish as their primary source of protein.

Daily, weapons of mass destruction are deployed in seas the world over: long lines
spanning up to 80 miles, dangling scores of thousands of baited hooks; enormous nets,
nearly invisible in water. These indiscriminate killers drown everything, including birds
and mammals, that takes the bait or blunders into the mesh. The unwanted--a quarter of
everything caught--is discarded, left to rot or, sometimes, taken aboard to be ground into
meal and fertilizer.

We know we have already wiped out--and by that I mean driven nearly to commercial
extinction or, in a few cases, the brink of biological extinction--more than 100 popular
species of food fish, including Nassau groupers, Chilean sea bass, orange roughy and cod.
What we don't know, what we'll never know, is how many undiscovered species have been
eradicated along the way. What creatures, great and small, might have contained genetic or
chemical secrets that could have saved lives or improved them, conquered diseases or
averted them?

The seas make up 95% of the planet's biosphere--the realm where all living things exist--
and we are stripping and poisoning it, depriving it of its ability to sustain life. Jacques-Yves
Cousteau once predicted that unless we--not the editorial or royal we but the universal we--
changed our ways and stopped treating the oceans as an infinite resource and a bottomless
dump, there would someday come a moment of no recovery. Overwhelmed at last, the
resilient seas would no longer be able to cleanse or restock themselves. From that moment
on, the oceans--and with them nearly all life on earth--would embark upon a slow,
irreversible descent into the darkness.

Is that it, then? Are we there? Have we slipped, silently and unaware, into our death spiral?

No one can know. Perhaps our grandchildren, or their grandchildren, will know. But I, for
one, decline to accept the end of the oceans, for to do so would be to accept the end of
humanity. I see signs that we are starting to alter our course--laboriously, yes, barely
perceptibly, like a supertanker beginning a slow turn in a heavy sea, but changing direction
nevertheless.

More and more nations are establishing marine reserves, where sea creatures of all sorts
and sizes can mate and bear their young free from the menace of man. Just as important,
funds are being found for enforcement of limits, restrictions and bans. Personnel are being
hired and trained; boats and planes are being deployed to monitor compliance.

As rivers are cleaned up, dams removed, pollutants flushed away, salmon are returning to
waters everywhere from California to Germany, where no salmon had been caught since
1947.

Aquaculture--fish farming--has established beachheads from Maine to the tropics, from the
South Pacific to the North Sea. Raising fish in enclosed pens is a complex and controversial
process that can pose enormous environmental problems, but if done right, it holds great
promise for feeding millions of people and providing vast numbers of jobs.

Where fishing in the wild has been banned outright, fish stocks are starting to come back.
Where "street-sweeper" trawls that devastate the seabed have been prohibited, nurseries and
habitats are beginning to recover.

Still, the days of abundance are gone. The image of cheap and wholesome seafood
available to everyone is fading into memory and myth. Already a single tuna can cost more
than most automobiles. Soon some oysters may be as rare and costly as pearls.

I often wish that back in the halcyon '60s, I had had the wit to release my swordfish. Its
kind will not come our way again.

Known for the novels and screenplays that have spawned such movies as Jaws and the TV
series Peter Benchley's Amazon, the author has narrated dozens of films on ocean
conservation.

Will We Keep Getting Fatter?
That's what we're programmed to do--unless we find some genes that will switch off fat
metabolism
by MICHAEL D. LEMONICK

If the past 2 million or 3 million years of human history are any guide, obesity is our
unfortunate but inevitable fate. That's not to say there's any special secret to weight control.
All it takes, as we've heard over and over, is a sensible diet and plenty of exercise.

But knowing and doing are two very different things, as hundreds of thousands of lapsed
weight watchers have learned to their despair. The trouble, according to one theory, is that
our best intentions about weight control go up against several million years of human
evolution. Our hunter-gatherer ancestors literally didn't know where their next meal was
coming from. So evolution favored those who craved energy-rich, fatty foods--and whose
metabolism stored excess calories against times of famine. Love handles, potbellies, thick
thighs are all part of Mother Nature's grand design.

From about 2.5 million B.C. to, say, 100 years ago, the system worked fine. Only a tiny
percentage of humans had unlimited access to food and no need to lift a finger on their own
behalf. What happened to them? Picture Henry VIII. But over the past century or so, most
Americans have been living like kings. Thanks to increasingly high-tech farming methods,
the fatty foods we crave have become plentiful and cheap in the U.S. and other developed
nations. At the same time (thanks again to technology), physical exertion is no longer a part
of most people's lives; most of us have to drag ourselves away from our computer or TV to
burn off the excess calories. The result is inevitable. In 1950 one-quarter of Americans
were classified as overweight; today half are. And despite the harangues of medical experts,
who constantly point out that obesity can lead to diabetes, heart disease and high blood
pressure, that's not likely to change. We'll keep getting fatter and fatter, with no real
prospect of reversing the trend.

Unless medical science provides a quick fix, that is. So far, the record on diet pills has been
pretty dismal. Amphetamines, which speeded metabolism and suppressed appetite, looked
promising in the 1950s and '60s but turned out to be physically harmful and powerfully
addictive. Drugs like fen-phen and Redux, which alter the brain's chemistry, had scary side
effects. Newer drugs like orlistat and food substitutes like olestra keep fat from entering the
body, but they cause serious bowel discomfort.

Researchers are learning more every day about how the body processes fat. One clue
involves the hormone leptin, which is pumped out by fat cells and signals lab mice, at least,
not to eat. Unfortunately, as reported last week in the Journal of the American Medical
Association, it doesn't seem to work in humans. Researchers are still trying to figure out
why not--and how to get around the problem. Another natural substance, called pro-
opiomelanocortin (POMC), seems to signal that it's time to stop eating. Mice treated with
POMC boosters shed 40% of their excess body weight in just two weeks. Again, it's not clear
that this will work in humans, but it's conceivable that POMC therapy--perhaps in shots--
could someday be standard.

Scientists are also focusing on the differences between two types of fat cells, known as
brown and white. The former, active in young mammals (including humans), convert fat
into heat rather than storing it. That's crucial in newborns, whose temperature-regulation
systems aren't fully formed. As we age, the brown cells become inactive and the white,
which convert dietary fat to body fat, take over. Several research teams have found that by
reactivating the brown cells in an adult animal with medication, they can burn off fat
dramatically. Now the doctors are looking for a genetic switch that can do the same for
humans.

What's becoming clear to scientists in the obesity business is that the body's energy-
processing system involves not one or two but a maze of metabolic pathways. POMC, leptin
and brown fat cells are part of the story. But nerve cells have also been implicated in weight
regulation, and it's not clear how these different pathways relate to one another. "Not a
month goes by," says Dr. Eric Ravussin, director of endocrine research at Eli Lilly,
"without publication of a new pathway that regulates feeding behavior, giving us new
potential targets."

Untangling this metabolic mess will probably take decades. But given the immense profits
waiting for whoever can invent a safe, effective weight-control substance, drug companies
aren't waiting. With the clues they have in hand, pharmaceutical firms are now
investigating about 60 compounds, most of them based on some of the 130 genes that have
so far been implicated in weight control.

So it seems likely that for a while at least we'll keep getting fatter. We can't undo evolution,
and we haven't found a way to fool Mother Nature--yet. But before the 21st century is half
over, with the body's fat-centric metabolism laid bare and the ability to manipulate genes
part of medicine's standard tool kit, the trend may finally stop. Chubbiness may not
disappear, but it could become optional. A future without Richard Simmons' commercials
would be a wonderful future indeed.
Reported by Alice Park/New York

Can I Live to Be 125?
If you don't, your children may, thanks to scientists looking for longevity genes. But who
will want to live that long?
by JONATHAN WEINER

Walking and talking get more difficult for my mother every day, and when I phoned to tell
her the headline of this story, there was a long pause before she found the words to reply: "I
don't recommend it."

At 75, she is fighting one of the innumerable syndromes that elderly flesh is heir to. For
reasons that no neurologist can explain, many of her brain cells are filling with debris called
Lewy bodies. Her symptoms resemble those of Alzheimer's, and like Alzheimer's, the
condition is sometimes genetic. Do we have to grow old so sadly? Before we go, do we
have to lose most of the natural gifts that make life worth living? We are the first people in
human history for whom this is a primary concern. For every generation before ours, the
first concerns were Can I grow old? Will my baby reach a ripe old age? Please let us grow
older! Now the average life expectancy in the U.S. has advanced from 47 in 1900 to better
than 76 in 1999. During the next century, new biological discoveries should ensure that
even more of us will live to see old age and will encourage us to dream, in wild or wistful
moments, that we might not have to grow old at all.

Whenever I want to feel optimistic, I think about work in progress in the laboratory of
Seymour Benzer of the California Institute of Technology. Benzer made the first detailed
map of a gene's interior, and he and his student Ronald Konopka discovered the first so-
called clock gene, which ticks away inside virtually every living cell, helping tell our
bodies where we are in the daily sweep from morning to night. Now, at 77, Benzer is
searching through our genes for a sort of clock of clocks that tells us where we are in the
sweep from the cradle to the grave and decides how fast we age. Recently he discovered a
mutant fruit fly that lives more than 100 days, about one-third longer than the rest of the
madding crowd in a fly bottle. What makes the difference is a single gene, which Benzer
calls Methuselah.

If one gene can do that much for flies (or worms or mice--genetic engineering has created a
growing zoo of Methuselahs), then what can our genes do for us? Maybe there really is a
clock of clocks, and maybe, just maybe, 21st century biologists will figure out how to
twiddle and reset the hands. They might concoct Methuselah pills or inject Methuselah
genes into fertilized eggs and fool our mortal bodies into believing that we are forever
young. "Perhaps," Benzer muses, "aging can be better described not as a clock but as a
scenario, which we can hope to edit." If we died in old age at the same rate we die between
ages 10 and 15, then most of us in the U.S. would live 1,200 years. We would outdo the
first Methuselah, whose years were 969.
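
A sketch of the actuarial logic behind that claim: if deaths arrived at a constant annual rate q, the way they roughly do in early adolescence, the fraction of a cohort alive at age t would be e^{-qt}, and the expected lifespan would be 1/q. The article doesn't state its inputs, but the 1,200-year figure can be reverse-engineered to imply a rate of about one death per 1,200 people per year:

\[
\bar{t}=\frac{1}{q}=1{,}200\ \text{years}\quad\Longleftrightarrow\quad q=\frac{1}{1{,}200}\approx 83\ \text{deaths per 100,000 per year,}
\]

which is indeed the order of magnitude of mortality among American teenagers in the late 1990s.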

We are already doing better at preventive medicine and at repairing old bodies--dealing
with abdominal fat, atherosclerosis, blood pressure, blood sugar, cataracts and so on. U.S.
pharmaceutical companies have nearly two dozen Alzheimer's drugs in the works. In the
next century, molecular biologists are likely to tinker with more and more of our genetic
machinery, in what may be either mankind's worst folly or the most significant software
upgrade of the 21st century. (Caveat emptor, users of version 1.0!) Just last month,
biologists announced the discovery of mutations that accumulate in aging mitochondria,
which are our cells' batteries; maybe someday they will learn how to keep our batteries
from winding down. Scientists may also learn to repair our telomeres, the tiny ties at the
ends of each chromosome that help hold our genetic bundles together but fray with age.
Researchers may even learn to grow whole new hearts and livers from stem cells, a
prospect I find slightly dispiriting. Will we walk off the stage at last elaborately disguised, a
living prosthesis--false teeth, false eyes, false taste buds, false everything?

Of course, on this question of old age, science is still a baby. There are plenty of biologists
who believe that aging and death are as inevitable as taxes. No one really knows if human
longevity will come up against a fixed barrier somewhere or if, like the sound barrier, it is
there only to be broken. Some gerontologists say the limit of the average life-span is 85
years; others, 95, 100, 150 and beyond. No one understands the economic barriers either.
Ronald Lee, a demographer at the University of California, Berkeley, calculates that for
each year we add to the average life-span, the economy will have to grow 1% to pay for our
care.

After more than 50 years in the laboratory, Benzer has too much respect for life's
complexities to believe in quick cures or fountains of youth. He often works through the
night on his mutant Methuselah. He feels that aging should now be studied as a disease, and
he would love to spend his next career, he says, "unraveling the facts." But he hates to see
the study of longevity being overblown by the press. "I hope the hype will not result in the
same letdown as Nixon's all-out war on cancer." Even if there is a central clock, it may be
harder to control than cancer.

Chances are that my generation will consume all manner of antiaging drugs and nostrums--
antioxidants, growth hormone, vitamin D, garlic, red wine, melatonin, blueberries--and in
the end we'll still live only a little longer than our parents. Today in Japan a clothing
company is cashing in with "antistink" underwear for middle-aged men, who (according to
the company) begin to emit odors. But by the time we die, or shortly thereafter, the
expansion of youth and the postponement of old age may become one of the greatest
enterprises of the 21st century. "I see it as inevitable," says evolutionary biologist Michael
Rose, who breeds strains of long-lived flies in his laboratory at the University of California
at Irvine. "I'm confident that Benzer's work--and the worm people's and maybe my work--
will someday be used by a bunch of avaricious corporations who'll make billions of dollars
a la Microsoft by giving people what they've always wanted."

I wouldn't want to live as long as Methuselah, myself. But I would like to reach old age
alive and kicking. My hope is that the science of life will mature fast enough so that 30
years from now, when my sons begin to ask those eternal questions about growing old, I
can look at them and say, "I recommend it."

Jonathan Weiner is the Pulitzer-prizewinning author of The Beak of the Finch. His most
recent book is Time, Love, Memory

Will We Still Need to Have Sex?
Birds do it. Bees do it. But we may no longer have to do it - except for the fun of it
by MATT RIDLEY

First, the good news: people will still be trying to get each other into bed in 2025, though
one can only hope the pickup lines will be different by then. Now here's the revolutionary
(or should I say evolutionary) news: sex will seem a lot less necessary than it does today.
Having sex is too much fun for us to stop, but religious convictions aside, it will be more
for recreation than procreation. Many human beings, especially those who are rich, vain
and ambitious, will be using test tubes - not just to get around infertility and the lack of
suitable partners, but to clone themselves and tinker with their genes.

Lots of creatures already reproduce without sex: whiptail lizards, aphids, dandelions,
microscopic rotifers. And, of course, human beings. Since the birth of Louise Brown, the
first test-tube baby, in 1978, hundreds of thousands of human beings have been conceived
in laboratory glassware rather than in bed.

If human cloning becomes possible--and since the birth of a sheep called Dolly, few doubt
that it will be feasible to clone a person by 2025--even the link between sex organs and
reproduction will be broken. You will then be able to take a cutting from your body and
grow a new person, as if you were a willow tree. And if it becomes possible to screen or
genetically engineer embryos to "improve" them, then in-vitro fertilization and cloning may
become the rule rather than the exception among those who can afford it.

In a sense, we have already divorced sex from reproduction. In the 1960s, the contraceptive
pill freed women to enjoy sex for its own sake. At the same time, greater tolerance of
homosexuality signaled society's acceptance of nonreproductive sex of another sort. These
changes are only continuations of a trend that started perhaps a million years ago. As
Richard Wrangham, professor of anthropology at Harvard, points out, "Most mammals lose
interest in sex outside a restricted mating period. For a female chimpanzee, copulation is
confined to the times when she has a pink swelling on her rump. Outside those lusty
periods, she would never think of trying to seduce a male, and he'd be horrified at the
thought. But humans have taken a much more persistent view of sexual possibilities,
probably since they first evolved as a species."

We share this interest in infertile, social sex with a few other species: dolphins, bonobo
apes and some birds. But even if sex is too good for human beings to give up, more and
more people will abandon it as a means of reproduction. Many people born from in-vitro
techniques are themselves infertile--they inherit the infertility from their genetic parents. So
infertility is bound to increase, and with it the demand for IVF. Add to this the demand
from gay men and women and from those with private eugenic motives - ranging from not
wanting to pass on inherited disease to wanting taller or smarter or prettier children - and
sexless reproduction is bound to spread.

In the modern world, you can even have sex and parenthood without suffering the bit in
between. Some Hollywood actresses may have satisfied the urge for mothering by electing
to adopt children rather than spoil their figures (as they see it) by childbearing. For people
as beautiful as this, the temptation to adopt a clone (reared in a surrogate womb) could one
day be irresistible.

Once cloning loses its stigma, the urge to tinker with the genes of offspring may not be far
behind. As Cambridge molecular biologist Graeme Mitchison says, "We can all be
beautiful--no baldness, no wimps with glasses, no knobby knees." Olivia Judson, author of
a forthcoming book called Dr. Tatiana's Sex Advice for All Creation, begs to differ: "If
there is such hostility to genetically modified soya, it doesn't bode well for genetically
modified people."

Human cloning and designer babies are probably not imminent. Even assuming that the
procedures are judged safe and efficient in farm animals, still a long way off, they will be
heavily discouraged, if not banned, by many governments for human beings.

It is worth noting, however, that in much of the biological world, cloning is old hat. There
are some species, such as those dandelions and whiptail lizards, that reproduce no other
way, and there are many, such as aphids and strawberries, that switch effortlessly between
sex and cloning. There are fierce arguments in biological circles about why such species
have not taken over the world, since they reproduce so efficiently and do not waste energy
producing futile creatures called males.

Two possible answers have been suggested. One is that males are necessary to combat
disease: without sexual reproduction, a clonal species is vulnerable to increasing parasitic
attack. The other theory holds that sex helps purge the species of genetic mutations by
shuffling the genes in each generation.

Neither of these explanations need trouble us. We are not going to use cloning to make the
whole of the next generation from one individual (though in the 1930s several eminent
geneticists thought that when IVF became available, lots of people would rush out to
choose prominent men such as Lenin as a father--which just goes to show how wrong
geneticists can be about the future). Also, genetic mutations accumulate much too slowly to
worry us.

And even if sex proved to be genetically unnecessary, it still wouldn't be a total waste of
energy. It is to sex, after all, that we owe most of the things we consider aesthetically
appealing in nature. If it were not for sex, there would be no blossoms and no birdsong. A
flower-filled meadow resounding with the dawn chorus of songbirds is actually a scene of
frenzied sexual competition. Geoffrey Miller, an evolutionary psychologist at University
College London, has pointed out that everything extravagant about human life, from poetry
to fast cars, is rooted in sexual one-upmanship.

"If women didn't exist, all the money in the world would have no meaning," said Aristotle
Onassis, who should know. Or, as Henry Kissinger put it, "Power is the great aphrodisiac."
So where would humans - and human civilization - be without sex? Probably back with the
aphids and dandelions, I suspect, procreating effortlessly but building neither empires nor
cathedrals.

Matt Ridley is the author of "The Red Queen: Sex and the Evolution of Human Nature."
His newest book, "Genome: The Autobiography of a Species in 23 Chapters," is due out in
February

When Will We Cure Cancer?
Sooner rather than later, for a surprising number of malignancies. Others, we may just have
to live with
by SHANNON BROWNLEE

Talk about wishful thinking. One might as well ask if there will be a war that will end all
wars, or a pill that will make us all good looking. It is also a perfectly understandable
question, given that half a million Americans will die this year of a disorder that is often
discussed in terms that make it seem less like a disease than an implacable enemy. What
tuberculosis was to the 19th century, cancer is to the 20th: an insidious, malevolent force
that frightens people beyond all reason--far more than, say, diabetes or high blood pressure.

The problem is, the "cure" for cancer is not going to show up anytime soon--almost
certainly not in the next decade. In fact, there may never be a single cure, one drug that will
bring every cancer patient back to glowing good health, in part because every type of
cancer, from brain to breast to bowel, is different.

Now for the good news: during the next 10 years, doctors will be given tools for detecting
the earliest stages of many cancers--in some cases when they are only a few cells strong--
and suppressing them before they have a chance to progress to malignancy. Beyond that,
nobody can make predictions with any accuracy, but there is reason to hope that within the
next 25 years new drugs will be able to ameliorate most if not all cancers and maybe even
cure some of them. "We are in the midst of a complete and profound change in our
development of cancer treatments," says Richard Klausner, director of the National Cancer
Institute. The main upshot of this change is the sheer number of drugs in development--so
many that they threaten to swamp clinical researchers' capacity to test them all.

This welcome boom in cancer drugs owes its beginnings to one of this century's greatest
scientific insights: that cancer is caused not by depression or miasmas or sexual repression,
as people at various times have believed, but by faulty genes. Every tumor begins with just
one errant cell that has been unlucky enough to suffer at least two, but sometimes several,
genetic mutations. Those mutations prod the cell into replicating wildly, allowing it to
escape the control that genes normally maintain over the growth of new tissue.

This realization has transformed cancer, in little more than a decade, from an utterly
mysterious disease into a disorder whose molecular machinery is largely understood. Now
cancer biologists are in the midst of their second epiphany: the recognition that tumors
evolve, in Darwinian fashion, as each succeeding generation of cancer cells accumulates
genetic mutations. "Survival of the fittest applies to cancer cells," says Richard Schilsky,
associate dean for clinical research at the University of Chicago. "We now think of cancer
not as a disease but as a genetic process."

This new view has sparked innovations that will manage the process and keep it from
killing large numbers of people. "We are going to see a real shift from diagnosis and
treatment to prediction and prevention," declares California surgeon Susan Love, author of
Dr. Susan Love's Breast Book. Indeed, if all goes well with current clinical trials, women at
high risk for breast cancer will soon be able to be screened with a device that removes a
sample of breast cells through the nipple. If any cells show signs of the early mutations that
lead to cancer, doctors can suggest the drug tamoxifen, which is believed to reduce the risk
of breast cancer by suppressing precancerous cells. Drugs with fewer side effects that can
also prevent breast cancer are already in the pipeline.

Within five years, early detection will be available for many other types of cancer as well.
A stool sample will be all that is needed to search for colon-cancer cells on their way to
becoming tumors, and drugs like the new COX-2 inhibitors, which are improved versions
of pain killers, can prevent those precancerous cells from progressing. By the end of the
next decade, a simple blood test could alert doctors to a wide variety of cancer precursors.

Treatments for more advanced cancers, however, are farther over the horizon than anybody
can see. What is clear is that oncologists must take a page from AIDS treatment and use a
cocktail of drugs with very different modes of action to outsmart tumors that have already
begun to spread or metastasize.

That's because a tumor is made up of a hodgepodge of cells containing different genetic
mutations, each of which allows it to wreak a different brand of havoc. Some mutations
spur rapid growth; others prod nearby blood vessels into sprouting new capillaries; still
others send cancer cells out into the bloodstream, where they can seed new tumors. Within
10 years, predicts Robert Weinberg, a cancer biologist at the Whitehead Institute in
Cambridge, Mass., "we will analyze the mutant genes and then tailor-make a treatment
[for] that particular tumor."

One day there will be drugs to trip up a cell at each of the steps it takes on the path to
malignancy. A patient with lung cancer, say, might undergo gene therapy, breathing in
genetically altered cold viruses that don't cause infection but instead act as miniature
delivery vans carrying copies of the p53 gene. Good copies of this gene, which is mutated
in many cancers, can force some cancer cells to commit suicide. The effects of p53 could
be bolstered with antibodies that slow tumors by attaching to the surface of cancer cells and
gumming up their ability to take over the body's growth factors, the specialized proteins
that promote cell reproduction.

If a tumor has acquired the mutations for spreading, the doctor of the future may call on
matrix metalloproteinase inhibitors, a new kind of drug that can be taken orally to block the
enzymes a tumor uses to break down the cells of surrounding tissue and invade it. Vaccines
cobbled together from whole cancer cells or bits and pieces of those cells have been shown
to boost the body's immune system, helping it recognize and kill tumors on its own. "This
was all a dream five years ago," marvels John Minna, director of the Hamon Center for
Therapeutic Oncology Research at the University of Texas Southwestern Medical Center in
Dallas.

Also close to reality are the so-called antiangiogenic factors, relatively nontoxic compounds
that inhibit the growth of new capillaries. The idea behind this new class of drugs is that
tumors cannot grow bigger than a few hundred thousand cells--about the size of a
peppercorn--without growing their own blood-supply system. Researchers and patients, not
to mention the owners of stock in half a dozen biotech companies, are eagerly awaiting
results of clinical trials of antiangiogenic factors, which might be used in combination with
chemotherapy to knock down big tumors and then prevent any surviving tumors from
growing enough to do further damage.

The assumption behind many of these futuristic scenarios is an idea that most researchers
have begun to embrace but that many patients will undoubtedly find difficult to accept.
That is the prediction that certain cancers may require treatment for the rest of a patient's
long life. Coming out of a century that declared war on the disease, a century that felt the
only reasonable response to a tumor was to annihilate it, this may be hard to imagine. But
turning cancer into a controllable condition is not so different from treating high blood
pressure or diabetes. "I don't think curing cancer is the goal," says Ellen Stovall, executive
director of the National Coalition for Cancer Survivorship. Instead, she says, "it should be
helping people live as long and as well as they can."

No, we probably won't cure all forms of cancer in the 21st century. But we may very well
learn to live with them.

Science writer Shannon Brownlee's work has appeared in the New York Times, the New
Republic and the Atlantic Monthly

Will Robots Make House Calls?
Smart technology is revolutionizing health care, but not quite that fast
by CHRISTINE GORMAN

July 1, 2030. Like just about everyone she knows, Angela Jefferson, 36, wakes up to the
insistent patter of a HealthWatch Model 9000 alarm clock. "Today is Monday, and the time
is 6 a.m.," the little box chirps. Angela stares at its smooth, blue face long enough for the
embedded microlaser to scan the back of her eye. "Ocular pressure, blood pressure and
carbon-dioxide levels normal," the alarm clock reports. "But you are dehydrated. I'll signal
the refrigerator to fix you an electrolyte cocktail."

Angela pads off to the kitchen to pick up her drink, then heads to the bathroom. Before the
toilet finishes flushing, its sensors have completed a urinalysis and stool test. The
information is automatically patched through to a secure website that contains all her
medical records. If anything alarming, such as a spot of blood or some defective DNA,
shows up, both she and her physician will receive a health-care alert. By the same token, if
she ever falls ill while traveling, doctors can instantly punch up her records, using her
medical ID card to gain access.
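
Strip away the gadgetry and the scenario's plumbing is plain threshold logic: compare each reading with its normal range and raise an alert when it falls outside. A minimal sketch of that logic; the field names, readings and ranges are all invented for illustration:

```python
# Toy version of the alert logic in the scenario above: each sensor
# reading is compared with a normal range, and anything outside it
# triggers a health-care alert. All names, values and ranges here are
# invented for illustration.
NORMAL_RANGES = {
    "ocular_pressure_mmHg": (10, 21),
    "systolic_bp_mmHg": (90, 120),
    "hydration_pct": (55, 65),
}

def health_alerts(readings):
    """Return an alert string for every reading outside its normal range."""
    alerts = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if not low <= value <= high:
            alerts.append(f"ALERT: {name} = {value} (normal {low}-{high})")
    return alerts

morning_scan = {"ocular_pressure_mmHg": 15, "systolic_bp_mmHg": 110,
                "hydration_pct": 48}  # dehydrated, like Angela
for alert in health_alerts(morning_scan):
    print(alert)  # ALERT: hydration_pct = 48 (normal 55-65)
```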

Back in her bathroom, Angela turns to splay her fingers under the hand sanitizer. Next she
picks up her DentiGuard toothbrush, which checks for signs of gum disease and measures
her bone density while it brushes her teeth. During the course of her morning routine, a
total of 85 microscopic sensors, in everything from her hairbrush to the medicine cabinet,
will keep tabs on her health. Most days she doesn't even notice their presence.

If talking toilets and wired toothbrushes sound more like an Orwellian nightmare than a
dream come true, you might want to skip the rest of this story. But can you really afford not
to read on?

Predicting the future is easy; doing it accurately is a whole different matter. But current
trends suggest that the most dramatic changes in medical care in the next 20 or 30 years
will spring from a growing reliance on "smart" technology. Computer chips will become
ever faster, smaller and less expensive. Medical instruments and sensors will continue to
shrink. (One that already has is the formerly big, lumbering machine needed for radiation
treatment; today mobile electron accelerators are portable enough to be used during some
cancer operations, reducing the number of healthy cells that are damaged.)

We are witnessing the early days of a wired revolution in medicine. The Web has shattered
the physician's tightly held monopoly on information. Specialists are starting to provide
consultations via the Internet. Some doctors are experimenting with computer programs
that monitor how often an asthmatic refills a prescription, alerting them when the pattern
indicates that stronger medicines are needed to head off a more serious attack.

All these automated checkups would be a prescription for information gridlock if we
humans tried to track it all. But it is likely that we will leave the bulk of data collection and
processing to increasingly sophisticated computer programs.

So will robots be taking over for doctors? Probably not. Computers that today can describe
every disease known to man still can't navigate a hospital corridor. And even artificial
intelligence, or AI, diagnosis has its limitations. You're probably going to want a flesh-and-
blood practitioner--not just a computer--to diagnose your aches and pains for at least
another decade or two.

Still, computer technology can dramatically extend the physician's ability to treat diseases,
and nowhere is this more apparent than in the operating room. Already, information from
CAT scans is routinely used to reproduce detailed views of human anatomy in three
dimensions. Soon engineers will perfect the tools that allow surgeons to simulate an
operation realistically--down to the resistance of skin against scalpel.

But technology will never be a cure-all. Accidents and plagues won't disappear. The AIDS
epidemic is so entrenched in Africa and parts of Asia that it could overshadow much of the
21st century. Nor will everyone be able to afford the latest treatments for cancer or
Alzheimer's disease. For millions of people alive today, though, the ability to monitor their
health more closely and start treatments at the earliest stages of disease means that many
may live long enough to enjoy the blessings of the 22nd century.

TIME senior writer and health columnist Christine Gorman built her first robot out of
Legos when she was 10 years old

Will We Run Out of Gas?
No, we'll have plenty of carbon-based fuel to see us through the next century. That's the
problem
by MARK HERTSGAARD

The metaphorical answer to this question is more important than the literal, but the literal is
irresistibly short: No, unfortunately not. Humans will have at our disposal as much gasoline
as we can burn in the 21st century. Nor are we likely to run out of heating oil, coal or
natural gas, the other carbon-based fuels that have powered industrial civilization for 200
years.

Why won't we run out? And why is that unfortunate? After all, these fuels provide nearly
80% of the energy humans use to keep warm, to light buildings and run computers, to
power the cars that get us around, the tractors that plant food, the hospitals that serve our
sick. If these fuels were to vanish tomorrow, worldwide chaos would follow and humans
would die in the hundreds of millions.

So why not rejoice at having lots of fuel to burn? Let me try to answer that by telling you
about my friend Zhenbing.

I met Zhenbing in China in 1996, near the end of a six-year journey around the world to
write a book about humanity's environmental future. A 30-year-old economics professor
who was liked on sight by virtually everyone he met, Zhenbing was my interpreter during
five weeks of travel throughout China. A born storyteller, he often recalled his childhood in
a tiny village northwest of Beijing. Like most Chinese peasants of that era, Zhenbing's
parents were too poor to buy coal. Instead, in a climate like Boston's, where winter
temperatures often plunged below zero, they burned dried leaves to heat their mud hut.
Their home's inside walls were often white with frost from November to April.

In 1980, China's economic reforms began putting enough money in people's pockets to
enable even peasants like Zhenbing's parents to buy coal. Today coal supplies 73% of
China's energy, and there is enough beneath the country to last an additional 300 years at
current consumption rates. Plainly, that is good news in one respect. Burning coal has made
the Chinese people (somewhat) warm in winter for the first time in their history. But
multiply Zhenbing's story by China's huge population, and you understand why 9 of the
world's 10 most air-polluted cities are found in China and why nearly 1 of every 3 deaths
there is linked to the horrific condition of the air and water.

Equally alarming is what China's coal burning is doing to the planet as a whole. China has
become the world's second largest producer of the greenhouse gases that cause global
warming, and it will be No. 1 by 2020 if it triples coal consumption as planned. But the
U.S., the other environmental superpower, has no right to point a finger. Americans lead
the world in greenhouse-gas production, mainly because of their ever tightening addiction
to the car, the source of almost 40% of U.S. emissions.

Which returns us to gasoline and its source, petroleum. The earth's underground stores of
petroleum are not quite as ample as those of coal or natural gas, but there is enough to
supply humanity for many decades, even with rising population and living standards.
Crippling shortages may still occur, of course. But they will arise from skulduggery or
incompetence on the part of corporations or governments, not from any physical scarcity.

"Will we run out of gas?" a question we began asking during the oil shocks of the 1970s
is now the wrong question. The earth's supply of carbon-based fuels will last a long time.
But if humans burn anywhere near that much carbon, we'll burn up the planet, or at least
our place on it.

Change won't be easy. But how we respond will help answer the metaphorical meaning of
"Will we run out of gas?" That is, will our species fizzle out in the coming century, a victim
of its own appetites and lethargy? Or will we take action and earn a longer stay on this
beautiful planet?

The good news is, we know how to change course. Improving energy efficiency is the first
step and - surprise! - potentially a very profitable one, not just for consumers and
businesses but also for all of society. And better efficiency can buy us time to make a
global transition to solar power and other renewable energy.

China could use 50% less energy if it only installed more efficient electric lights, motors
and insulation, all technologies currently available on the world market. Americans could
trade in their notoriously gas-swilling SUVs for sporty new 80-m.p.g. hybrid-electric cars.
Better yet: hydrogen-powered fuel-cell cars, expected in showrooms by 2004. Since their
only exhaust is water vapor, fuel-cell cars produce neither smog nor global warming.
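
The arithmetic behind that trade-in is worth a glance. A minimal sketch comparing a 16-m.p.g. SUV with the 80-m.p.g. hybrid, assuming 12,000 miles driven a year and roughly 8.9 kg of CO2 per gallon of gasoline (both figures are my assumptions, not the essay's):

```python
# Rough comparison of a 16-m.p.g. SUV with the 80-m.p.g. hybrid the
# essay mentions. Annual mileage and CO2-per-gallon are my assumptions.
MILES_PER_YEAR = 12_000
KG_CO2_PER_GALLON = 8.9  # approximate figure for gasoline

def annual_footprint(mpg):
    """Return (gallons burned, kg of CO2 emitted) per year."""
    gallons = MILES_PER_YEAR / mpg
    return gallons, gallons * KG_CO2_PER_GALLON

for label, mpg in [("SUV (16 m.p.g.)", 16), ("Hybrid (80 m.p.g.)", 80)]:
    gallons, kg_co2 = annual_footprint(mpg)
    print(f"{label}: {gallons:,.0f} gal/yr, {kg_co2 / 1000:.1f} t CO2/yr")
# SUV (16 m.p.g.): 750 gal/yr, 6.7 t CO2/yr
# Hybrid (80 m.p.g.): 150 gal/yr, 1.3 t CO2/yr
```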

The best part is that we could make money by making peace with the planet. If
governments launched a program - call it a Global Green Deal - to environmentally retrofit
our civilization from top to bottom, they could create the biggest business enterprise of the
next 25 years, a huge source of jobs and profits.

Which is why I'm not entirely gloomy about our future. After all, what's more human than
pursuit of self-interest?

Mark Hertsgaard's most recent book is "Earth Odyssey: Around the World in Search of Our
Environmental Future"

Will Malthus Be Right?
His forecast was ahead of its time, but nature may still put a lid on humanity
by NILES ELDREDGE

"Malthus was right." So read a car bumper sticker on a busy New Jersey highway the other
day, and it got me thinking about the Rev. Thomas Malthus, the English political economist
who gave the "dismal science" its nickname. His "Essay on the Principle of Population,"
published in 1798, predicted a gloomy future for humanity: our population would grow
until it reached the limits of our food supply, ensuring that poverty and famine would
persistently rear their ugly faces to the world.
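
Malthus's logic was quantitative: population, unchecked, grows geometrically (by a fixed ratio each generation) while food production grows at best arithmetically (by a fixed increment), so the first curve must eventually cross the second. A minimal sketch of that divergence, with growth figures invented purely for illustration:

```python
# Malthus's core claim in miniature: geometric population growth
# eventually outruns arithmetic growth in the food supply, no matter
# how large the starting surplus. All figures are illustrative only.
population = 100       # arbitrary units
food_supply = 400      # starts with a fourfold surplus
GROWTH_RATE = 1.03     # population: +3% per generation (geometric)
FOOD_INCREMENT = 10    # food: +10 units per generation (arithmetic)

generation = 0
while population <= food_supply:
    population *= GROWTH_RATE
    food_supply += FOOD_INCREMENT
    generation += 1

print(f"Population overtakes the food supply in generation {generation}")
```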

The most casual cruise on the Internet shows how much debate Malthus still stirs today.
Basically, the Pollyannas of this world say that Malthus was wrong; the population has
continued to grow, economies remain robust - and famines in Biafra and Ethiopia are more
aberrations than signs of the future. Cassandras reply that Malthus was right, but techno-
fixes have postponed the day of reckoning. There are now 6 billion people on Earth. The
Pollyannas say the more the merrier; the Cassandras say that is already twice as many as
can be supported in middle-class comfort, and the world is running out of arable land and
fresh water. Despite a recent slowdown in the growth rate, the U.N. Population Division
expects the world population to reach 9.5 billion by the year 2100.

What's missing from the debate is an understanding of the changing relationship between
humanity and nature. For it is how humans fit into the natural world that will settle whether
Malthus was right or wrong. He was wrong in 1798. But if he had been writing 10,000
years earlier, before agriculture, he would have been right. And were his book being
published today, on the brink of the third millennium, he would be more right than wrong.
Let me explain.

Malthus cared about only one species: ours. And, ecologically speaking, ours is an unusual
species. With the invention of agriculture 10,000 years ago, we became the first species in
the 3.7 billion-year history of life not to be living as small populations off the natural fat of
the land. Taking food production into our own hands, we stepped outside the local
ecosystem. All but a few cultivated plants became weeds, and all but a few domesticated
herds, pets and game animals became pests and vermin.

In short, we declared open war on the very local ecosystems that had until then been our
home. As preagricultural hunter-gatherers, we humans held niches in ecosystems, and those
niches, resource-limited as they always were, had indeed kept our numbers down.
Estimates vary, but a figure of roughly 6 million people on Earth at the beginning of
agriculture is reasonable. By 1798 the population reached 900 million. Agriculture altered
how we related to the natural world and, in liberating us from the confines of the local
ecosystem, removed the Malthusian lid in one fell swoop.

So, when he wrote 200 years ago, Malthus was wrong. He did not see that nations are not
like ecosystems, that people could expand into new regions and, with the burgeoning
technology of the Industrial Revolution, become vastly more efficient at producing food
and wresting raw materials from Earth.

But something else is going on, and I think Malthus may have sensed it coming. As long
ago as 1679, Antoni van Leeuwenhoek (the Dutch inventor of the microscope) speculated
that the limit to the human population would be on the order of 13 billion - remarkably
close to many current estimates. For our position in the natural world is once again
undergoing a sea change. We are not the first nor are we the only species to spread around
the globe, but we are the first to do so as an integrated economic entity. Other species
maintain tenuous genetic connections, but no direct ecological connections, among their
far-flung members. We, in contrast, are exchanging more than $1 trillion of goods and
services among ourselves globally every day.

This means that in an economic - if not a political - sense, we have become a single,
enormous population. The system in which we are living, extracting our energy and other
supplies, is global: the totality of Earth's atmosphere, its waters, its soils and crust, and all
its living things. This is the sum total of all the world's local ecosystems - ecosystems we
have allowed to decay as we have chosen (quite successfully!) to live outside them.


V. Technology
1. How will the world be fed?
Everything suggests that genetically modified food in pill form will cover the food
shortage; to get there, scientists will first have to overcome the obstacles they face today.

2. Will we still drive our cars, or will they drive us?
Just around the corner are avenues lined with sensors that will let cars steer themselves to
the destination set by their owner.

3. Will cybercriminals abound?
YES, insofar as they get hold of the technology that encrypts information.

4. Will cybersex be better than real sex?
Virtual reality will produce garments whose technology allows virtual connections with
partners at a distance; all that will remain is to develop the instrumentation needed to keep
you from feeling dirty and to guarantee romance.

5. Will we still turn pages?
Texts are here to stay; in the future, paper pages will probably be combined with plastic
pages and digital ink, but publishers can rest easy.

6. Will smell-o-vision replace television?
NO, although the aromas of dishes cooked on screen will reach us thanks to technological
advances. Television and the Web will merge into one, radically changing how people
interact with the monitor. Ads will be tailored to the viewer, with a remote control that
allows online purchases.

7. Will we be addicted to video games?
We will be addicted to games, but not as we know them today; in the future, technology
will let us share pastimes with friends and family far away.

8. Will the computer be smarter than the person?
YES; within ten years the brain will be reproduced in a computer whose speed will make it
seem more intelligent than the human being, who in turn will have chips implanted in the
brain; the terms "human" and "computer" will be synonymous.

9. Will chips be implanted in the brain?
At first only for medical and military purposes, but little by little the technology will be
released, and human beings will no longer just chat; they will "chip" as well.

10. Will robots demand that their rights be respected?
Very probably; technological advances will raise them to human levels, and one day they
will rise up demanding treatment similar to that given humans.

11. Will everything be digital?
Not just digital but wireless. Computers will be installed everywhere, interconnected for
our convenience.

12. Will a robot be able to create diamonds with great speed and precision?
In the future robots will do everything, with great precision and speed, because they will
be built with hands and fingers thinner than a hair.

13. What will replace silicon?
Optical and genetic technologies based on molecules and quantum effects.

14. What will replace the Internet?
Nothing; on the contrary, almost everything will develop in parallel with the Internet.

15. Will AOL own everything?
There is a risk it will, if they keep turning themselves into the engine of cybernetic
development.

16. Is technology moving too fast?
Too fast; only a few will own it, and that could generate social and political friction in the
future.

17. Will Low tech replace High tech?
No; everything suggests they will keep complementing each other: while one develops
options at the macro level, the other will polish things at the micro level.

Will Frankenfood Feed the World?
Genetically modified food has met fierce opposition among well-fed Europeans, but it's the
poor and the hungry who need it most
BY BILL GATES

If you want to spark a heated debate at a dinner party, bring up the topic of genetically
modified foods. For many people, the concept of genetically altered, high-tech crop
production raises all kinds of environmental, health, safety and ethical questions.
Particularly in countries with long agrarian traditions--and vocal green lobbies--the idea
seems against nature.

In fact, genetically modified foods are already very much a part of our lives. A third of the
corn and more than half the soybeans and cotton grown in the U.S. last year were the
product of biotechnology, according to the Department of Agriculture. More than 65
million acres of genetically modified crops will be planted in the U.S. this year. The genetic
genie is out of the bottle.

Yet there are clearly some very real issues that need to be resolved. Like any new product
entering the food chain, genetically modified foods must be subjected to rigorous testing. In
wealthy countries, the debate about biotech is tempered by the fact that we have a rich array
of foods to choose from--and a supply that far exceeds our needs. In developing countries
desperate to feed fast-growing and underfed populations, the issue is simpler and much
more urgent: Do the benefits of biotech outweigh the risks?

The statistics on population growth and hunger are disturbing. Last year the world's
population reached 6 billion. And by 2050, the U.N. estimates, it will probably near 9
billion. Almost all that growth will occur in developing countries. At the same time, the
world's available cultivable land per person is declining. Arable land has declined steadily
since 1960 and will decrease by half over the next 50 years, according to the International
Service for the Acquisition of Agri-Biotech Applications (ISAAA).
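
Those two trends compound, as a quick calculation shows. A minimal sketch, using the article's population figures but an illustrative placeholder for today's arable-land total:

```python
# The two trends above, compounded: population heading from 6 billion
# toward ~9 billion while cultivable land halves. The current land
# total is a hypothetical placeholder, not a figure from the article.
ARABLE_LAND_HA = 1.4e9  # hectares today (illustrative assumption)

per_person_now = ARABLE_LAND_HA / 6e9
per_person_2050 = (ARABLE_LAND_HA / 2) / 9e9

print(f"Arable land per person today: {per_person_now:.2f} ha")   # 0.23 ha
print(f"Arable land per person 2050:  {per_person_2050:.2f} ha")  # 0.08 ha
print(f"That is {per_person_2050 / per_person_now:.0%} of today's share")  # 33%
```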

The U.N. estimates that nearly 800 million people around the world are undernourished.
The effects are devastating. About 400 million women of childbearing age are iron
deficient, which means their babies are exposed to various birth defects. As many as 100
million children suffer from vitamin A deficiency, a leading cause of blindness. Tens of
millions of people suffer from other major ailments and nutritional deficiencies caused by
lack of food.

How can biotech help? Biotechnologists have developed genetically modified rice that is
fortified with beta-carotene--which the body converts into vitamin A--and additional iron,
and they are working on other kinds of nutritionally improved crops. Biotech can also
improve farming productivity in places where food shortages are caused by crop damage
attributable to pests, drought, poor soil and crop viruses, bacteria or fungi.

Damage caused by pests is incredible. The European corn borer, for example, destroys 40
million tons of the world's corn crop annually, about 7% of the total. Incorporating pest-
resistant genes into seeds can help restore the balance. In trials of pest-resistant cotton in
Africa, yields have increased significantly. So far, fears that genetically modified, pest-
resistant crops might kill good insects as well as bad appear unfounded.

Viruses often cause massive failure in staple crops in developing countries. Two years ago,
Africa lost more than half its cassava crop--a key source of calories--to the mosaic virus.
Genetically modified, virus-resistant crops can reduce that damage, as can drought-tolerant
seeds in regions where water shortages limit the amount of land under cultivation. Biotech
can also help solve the problem of soil that contains excess aluminum, which can damage
roots and cause many staple-crop failures. A gene that helps neutralize aluminum toxicity
in rice has been identified.

Many scientists believe biotech could raise overall crop productivity in developing
countries as much as 25% and help prevent the loss of those crops after they are harvested.

Yet for all that promise, biotech is far from being the whole answer. In developing
countries, lost crops are only one cause of hunger. Poverty plays the largest role. Today
more than 1 billion people around the globe live on less than $1 a day. Making genetically
modified crops available will not reduce hunger if farmers cannot afford to grow them or if
the local population cannot afford to buy the food those farmers produce.

Nor can biotech overcome the challenge of distributing food in developing countries. Taken
as a whole, the world produces enough food to feed everyone--but much of it is simply in
the wrong place. Especially in countries with undeveloped transport infrastructures,
geography restricts food availability as dramatically as genetics promises to improve it.

Biotech has its own "distribution" problems. Private-sector biotech companies in the rich
countries carry out much of the leading-edge research on genetically modified crops. Their
products are often too costly for poor farmers in the developing world, and many of those
products won't even reach the regions where they are most needed. Biotech firms have a
strong financial incentive to target rich markets first in order to help them rapidly recoup
the high costs of product development. But some of these companies are responding to the
needs of poor countries. A London-based company, for example, has announced that it will
share with developing countries technology needed to produce vitamin-enriched "golden
rice."

More and more biotech research is being carried out in developing countries. But to
increase the impact of genetic research on the food production of those countries, there is a
need for better collaboration between government agencies--both local and in developed
countries--and private biotech firms. The ISAAA, for example, is successfully partnering
with the U.S. Agency for International Development, local researchers and private biotech
companies to find and deliver biotech solutions for farmers in developing countries.

Will "Frankenfoods" feed the world? Biotech is not a panacea, but it does promise to
transform agriculture in many developing countries. If that promise is not fulfilled, the real
losers will be their people, who could suffer for years to come.

Will We Still Drive Our Cars (Or Will Our Cars Drive Us)?
Tighten your fan belts, drivers, because the automotive future has tomorrow written all over
it!
By BRUCE MCCALL

Not that today is any slouch: We already have onboard navigation systems and infrared
night vision and in-car satellite links and antiskid brakes and other electronic Samaritans
poised to take control when we screw up behind the wheel. Jaguar's adaptive cruise control,
available today, tracks the speed and position of the car in the lane ahead and automatically
adjusts the speed to keep a safe time interval.
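
The control idea behind such a system is simple to state: the safe gap in meters is your speed times the desired time interval, and the car accelerates or brakes in proportion to the error. A minimal sketch of a time-gap controller in that spirit; the gain and limits are invented, and a production controller is of course far more elaborate:

```python
# Toy time-gap cruise controller in the spirit of the system described
# above: hold a fixed time interval behind the lead car by commanding
# acceleration in proportion to the gap error. Gain and limits are
# invented for illustration.
TIME_GAP_S = 2.0     # desired headway, seconds
GAIN = 0.5           # proportional gain, 1/s^2 (hypothetical)
MAX_ACCEL = 2.0      # m/s^2
MAX_BRAKE = -4.0     # m/s^2

def cruise_accel(own_speed_mps, gap_m):
    """Commanded acceleration given own speed (m/s) and gap to lead car (m)."""
    desired_gap = own_speed_mps * TIME_GAP_S
    accel = GAIN * (gap_m - desired_gap)
    return max(MAX_BRAKE, min(MAX_ACCEL, accel))

# At 30 m/s (about 67 m.p.h.) the desired gap is 60 m:
print(cruise_accel(30.0, 45.0))  # -4.0 -> brake hard, too close
print(cruise_accel(30.0, 60.0))  #  0.0 -> hold speed
print(cruise_accel(30.0, 70.0))  #  2.0 -> speed up to close the gap
```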

Just around the corner, say all the big automakers, are smart highways embedded with
millions of tiny sensors and even smarter cars that are constantly aware of the traffic that is
flowing around them. Drivers in the not-too-distant future, they say, will navigate from
their home to the nearest freeway entrance ramp, at which point the collision-detection
computer will take over. Commuters will barrel down the highway at 120 m.p.h., with only
a few inches between their car and the next. But will they worry? No, they'll be checking
the NASDAQ and gabbing on their cell phone and scouring eBay until they reach their
programmed exit--finally ushering in the age of fully automated motoring first promised in
GM's spectacular "Futurama" exhibit at the 1939 New York World's Fair.

So much for the near horizon. What about the longer view? The human imagination may
simply not be sufficiently stretchable to visualize what cars and driving will have come to
25 years from now.

On the other hand, one can hope. This one hopes most of all for cars you can fast-forward
through those boring long drives on the interstate. Zzzap! There goes Nebraska! Better
yet--say it's 2025 and you're driving to Atlantic City or your mother-in-law's or a Jim
Nabors concert. Just flip a switch, and the Zeno's Paradox-based, drive-by-wire, fuzzy-logic
antidestination override feature cuts in to make sure you never get there.

What about a car horn that sprays sedatives into the air to knock out the idiot driver ahead
and get him out of your way? On-the-run oil changes should be within reach of automotive
science by the year 2025. And nuisance-sensing, high-voltage "stun cushions" to silence
backseat drivers for the duration. Smart windshield wipers that shred any leaflet, handbill
or parking ticket stuck under them? By 2025, a cinch.

I dream of a sensor that sounds a buzzer when a child's liquid content nears full, homes in
on the nearest public bathroom and, when you get there, brakes the car to a stop in one-fifth
of a second and opens the door even faster. They could call it Bladdermatic. I long for
interior window surfaces chemically coated to atomize dog-slobber on contact.

Let's hope that the subtle interaction between driver and passenger is more closely studied
and that friction-reducing measures result. For example, CD systems programmed to play
three, four, even five discs simultaneously, ending those music lovers' spats that so often
end in compromise, Celine Dion and listening misery for one and all.

I see no reason why, within the next quarter-century, the industry can't at last conquer a
driving menace as old as the automobile itself. Think of it: a built-in car-sickness detector,
able to identify the motion-queasy before they get into the car and start whining, and
instantly bar their entry. Surely it's not asking too much to expect cars of the future to
incorporate a dainty but palpable electric shock that reminds front-seat passengers to keep
their stupid feet off the dashboard?

My personal dream machine will come complete with a push-button Rattlefinder, to locate
and erase those mysterious internal clonking noises--a hidden cause of road rage--and a
RadarScope Trez-R-Search, to burrow deep down into the seat cushions and retrieve coins,
tokens, pens and French fries our grandparents would have given up for lost.

Finally, I'd pay extra for an onboard supercomputer able to sense a parking spot 10 blocks
away and beam an electronic force field there to save it until I arrived. Whoops, this just in:
experts now forecast that by the year 2025 there won't be any parking spots. Drat! There
goes the age-old dream of a handheld, Star Wars-based, in-car parking-meter jamming
device.

Will Cybercriminals Run the World?
They will if they get their hands on the most powerful encryption technology, says the FBI.
The nerds who write the stuff beg to differ
BY BRUCE STERLING

Picture this scene from the near future:
organized crime gets hold of encryption technology so powerful even IRS supercomputers
can't crack it. An underground electronic economy emerges, invisible to the U.S. tax code. The
Federal Government, unable to replenish its coffers, let alone fund a standing army, shrinks
until it wields about as much power as a local zoning board. Militias and gangs take over,
setting up checkpoints at state borders and demanding tribute of all who pass.

This scenario, in which crypto-wielding cybercriminals take over the world, has become a
standard plot device in turn-of-the-century science fiction. I've even used it once or twice.
But there is good news on this front. Running the world turns out to be surprisingly
challenging. It isn't something an evil mastermind can do just by hitting return on his
keyboard.

Encryption algorithms--the mathematical rules by which secret codes are made and
broken--have been at the center of a simmering spy-vs.-nerd war since the early 1990s.
The anti-encryption forces, which control the technology through laws originally passed to
regulate munitions, are led by a handful of spooky U.S. government agencies (such as the
FBI and the National Security Agency) with support from the White House that rises and
falls from one election cycle to the next--more on that faction later.
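
For readers who have never seen one, an encryption algorithm really is just arithmetic on bits. Here is a deliberately toy example: a one-time-pad-style XOR cipher, unbreakable when the key is truly random, as long as the message, and never reused. It is a teaching sketch, not the industrial-strength crypto the two sides are fighting over:

```python
import secrets

# Toy one-time-pad cipher: XOR each message byte with a random key
# byte. With a truly random, message-length, single-use key this is
# provably unbreakable -- but key distribution is the hard part, which
# is exactly what real cryptosystems exist to solve.
def xor_cipher(data, key):
    assert len(key) == len(data), "one-time pad needs a key as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the drop point"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, key)
print(ciphertext.hex())             # looks like random noise
print(xor_cipher(ciphertext, key))  # b'meet at the drop point' (XOR undoes itself)
```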

The pro-encryption forces are the nerds; with a nod to the cyberpunk school of science-
fiction writers, they call themselves "cypherpunks." Though their numbers have always
been small, cypherpunks are brave, bold and highly motivated. And they have some
programming talent.

Being nerds, however, they are rather unworldly. They are similar in dress, zip code,
outlook and philosophy to the Berkeley free-speech activists of the early 1960s, except that
the cypherpunks have a bigger megaphone: the Internet. They can encrypt free speech and
software as well, so various uptight authority figures cannot stop their heroic data.

The cypherpunks, like the hippies, love to tilt against windmills. Their most glamorous
imaginary weapon is not free speech or free software or even free music. It is free money,
anonymous electronic cash and untraceable digital funds, free of all government oversight
and laundered over the Internet. Dotcom stocks have turned out to be surprisingly close to
this utopian vision. They are rather destabilizing.

Luckily for taxmen worldwide, however, money isn't "money" just because some hacker
says it is. We don't secretly print our own personal currency on pink paper at Kinko's--not
because it's impossible but because nobody would want it. If Alan Greenspan were a
masked Kleagle in a big white crypto hood, nobody would use dollars either.

Offshore "data havens" are another piece of classic cypherpunk vaporware. Here's the
pitch: just subvert one little Internet-hooked island country, say Tuvalu (.tv) or Tonga (.to),
let it pass a bunch of pirate-friendly laws, and you can store anything there that American
computer cops disapprove of. This might yet become a real business opportunity if the
Internet gets better policed.

But piracy only looks like the free-and-easy island life. It's actually hard work, because it
lacks efficient economies of scale. Once you start seriously churning out the product, you
quickly become very visible: warehouses, trucks, employee payrolls--it all adds up. The
sweet charm of piracy is free, daring little "us" vs. big nasty "them." But any "us" that gets
large enough is automatically a "them." Bill Gates was once a hippie programmer, a college
dropout from Seattle. But a hippie with a billion dollars is no longer a hippie; he's a
billionaire. A hippie with $50 billion is considered a trust.

It's not that a cybercriminal world of conspiratorial smugglers, scofflaws, crooked banks
and tax evaders is impossible. Such countries already exist. It's just that they're not anyone's
idea of high-tech paradise. They are places like Bulgaria.

It's amazing how clunky and unproductive an economy becomes once its people despise
and subvert all its big institutions. Members of the Russian mafia don't shoot people
because they like to. They shoot them because in Russia these days a bullet is the quickest
way to get things accomplished. In such places, industrial consumerism just curls up and
vanishes. Black markets take its place; there are no more fast-food chains, so everybody
eats lunch out of the trunk of a car.

If everything on the Net was encrypted and belonged to small groups of with-it hipsters,
you would never find a bargain there. You would never find much of anything. You'd have
to wait till some hacker in the know was willing to give you the power handshake and turn
you on to the cool stuff. That might not cost very much, but it doesn't feel very free.

It has taken some anxious years of real-world experience for people to figure out that
crypto turned loose in cyberspace will not make the world blow up. Crypto's more or less
around and available now, and no, it's not an explosive munition. The threats were
overblown, much like Y2K. The rhetoric of all sides has been crazily provocative.

One expects that of fringe people in Berkeley. The U.S. government, on the other hand,
should have been fairer and more honest. The crypto issue, which is still smoldering and
poisoning the atmosphere, could have been settled sensibly long ago. We would have found
out that some small forms of crypto were useful and practical and that most of the visionary
stuff was utter hogwash. It would have shaken out in a welter of disillusionment, just as
Flower Power did. But we never got to that point, thanks mostly to the obstreperous
attitudes of the anti-crypto forces, who are basically spies.

The FBI does most of the upfront P.R. in the anti-crypto effort. The FBI doesn't like the
prospect of losing some wiretaps. That's just the FBI; it would say the same thing about
telepathy if it had it. The true secret mavens of crypto are at the NSA. Spy-code breakers
such as Alan Turing invented electronic computers in the first place, so the NSA has a
long-held hegemony here. The NSA sets the U.S. government's agenda on crypto, and it
will not fairly or openly debate this subject, ever.

The NSA would rather die than come in from the cold. Frankly, for the NSA, coming clean
probably means a swift death. Heaven only knows what vast, embarrassing skulduggery the
agency is up to under Fort Meade with its 40,000 mathematicians, but whatever it is, the
NSA certainly doesn't want to stop just because computers belong to everybody now. So
the U.S. government has never been honest about crypto and what it means to people in real
life. The whole issue is densely shrouded in billowing dry ice and grotesque X-Files hooey.
And that's why crypto is still very scary.

Will Cybersex Be Better Than Real Sex?
That depends on what lights your diodes. But judging by the quality of today's
"teledildonics," some things (hooray!) will never change
BY JOEL STEIN

There are two fields in which I'm anxious to see technology improve: medicine and hard-
core pornography. And since I'm not sick yet, I'm pretty focused on the porn thing. Luckily
I am not alone in my stunted vision of utopia. The desire for newer, better smut has long
been a major impetus behind technological progress: VCRs, DVDs, Web development and,
I believe, X-ray glasses were all spurred by prurient desires.

The holy grail of pornography, though, has always been a machine that delivers a virtual
experience so real that it is indistinguishable from sex, other than the fact that it isn't at all
disappointing. Though prototypes have appeared in films (the Pleasure Organ in Barbarella,
the Orgasmatron in Sleeper, the fembots in Austin Powers), reality has remained painfully
elusive. In his 1991 book Virtual Reality, Howard Rheingold devoted an entire chapter to
"teledildonics," his not-so-clever name for devices that allow people to have sex without
being in the same area code. Rheingold imagines putting on a "diaphanous bodysuit,
something like a body stocking but with the kind of intimate snugness of a condom" and
having a virtual-reality sexperience over the Net. "You run your hand over your partner's
clavicle and, 6,000 miles away, an array of effectors are triggered, in just the right
sequence, at just the right frequency, to convey the touch exactly the way you wish it to be
conveyed."

Other than his fetish for Chinese clavicle, Rheingold is able to provide little that's useful in
the way of information or specs. And in the nine years since he published his personal
fantasies, there has been surprisingly little progress. Vivid, the world's largest producer of
adult entertainment, promised to deliver an interactive bodysuit last September but missed
its deadline. Sure, it had a $200,000 black neoprene suit with 36 electrodes stuck to the
chest, crotch and other special places, but the suit didn't look very appetizing. Nor did it do
anything. Vivid says it's waiting for FCC approval (interaction with pacemakers seems to be
a concern), but the real reason it is lying low on the sex suit is that Vivid is a proud
company, and it's not going to continue trumpeting a technology that is at best a long way
from happening.

But there are less proud pornographers. SafeSexPlus.com sells teledildonic devices that, it
turns out, look a lot like dildonic devices. The company promised that if I used these
gizmos in conjunction with their iFriends.net website, I could have a sexual experience
over the Net. I got SafeSexPlus to send me the equipment and figured I'd use it with my
girlfriend--until I realized that was the dumbest idea I'd ever had. Thinking more clearly,
I decided this might be my one chance to get a porn star to have sex with me.

Wicked Pictures, a major adult-entertainment company, set me up on a cyberdate with one
of its actresses, Alexa Rae, star of Porn-o-matic 2000 and Say Aaah. I had never seen
Alexa's work, but I was assured she was a complete professional. SafeSexPlus.com sent
both of us toys, and we made an e-date.

I cannot fully describe to you the absolute repulsiveness of the sexual aid I was given--
both because this is a family magazine and because the English language is not equipped
for the task. It was supposed to be a disembodied part of a woman, but it was more like part
of a really expensive Halloween outfit to which someone had haphazardly taped a lock of
Dweezil Zappa's hair. It felt like wet latex, smelled like wet latex and looked like
something Sigmund Freud might have used to make a very twisted point. I figured it was
designed for men without hands.

The device plugged into an electrical outlet and came with suction cups. This frightened me
even more than the Zappa hair until the people from SafeSexPlus explained that I was
supposed to stick the suction cups on my computer monitor once the "cyberdildonics box"
popped up. This box could be made darker or lighter by Alexa's controlling the box on her
screen and would make my latex gizmo vibrate at higher or lower frequencies depending on
how much light she decided to give me. I don't know what sexual experience was supposed
to be replicated by a vibrating disembodied female body part, but I didn't want any part of it.

I was to have the same sort of control over Alexa's marital aid, which I assumed would be
somewhat less terrifying.

I assumed wrong. "It's a little scary," Alexa confessed as we talked on the phone and I
squinted at a live picture of her on a tiny, fuzzy box on my screen. I'm pretty sure she's
pretty and possibly blond. "It looks like it might hurt me. And it's making these ramming
noises. Like a jackhammer." I had never prided myself on being a gentle and considerate
lover; "ramming noises" and "like a jackhammer," however, were not phrases I was used to
hearing.

Alexa, ever the playful one, told me she'd take off her top if I could make her light box
change colors, so I got one of the tech guys at work to help me. Soon I could see her
yawning on my monitor. This, I thought, was getting to be more like the sexual experiences
I was accustomed to.

After 20 minutes, I think I got the color to change and the scary jackhammer noise to
increase. "I get turned on by anything sexual," Alexa purred as she took off her top and
jeans. "But not this."

We talked some more, and she told me she'd named herself after Billy Joel's daughter,
which I thought was in bad taste. Then I realized, looking down at the giant latex
pudendum jumping around my desk, that I wasn't in a position to comment on matters of
taste.

Still, in the name of science I concentrated on the image of Alexa on the screen and tried to
act sexy. "You are driving me crazy," I told her.

"Really?" she responded.

"No."

"Damn."

This was the high point of our encounter--that and when I admitted I was incapable of
having phone sex. "Having good phone sex is just saying how you feel," she told me.

"I feel silly," I confessed.

"Not like that."

Eventually we decided to stop. "It has nothing to do with you," she said as she pulled her
jeans over her hips. "We're just asking each other technical questions, and it takes away the
sexiness." Virtual sex was indeed eerily like real sex for me.

Even if the technology vastly improves and if Alexa and I can one day consummate our
awkward phone conversation, I don't think teledildonics is the next generation of
pornography. Perhaps it might replace 900 numbers, with men paying to control the toys of
women they can see on their screens, but that's about it. Most people will still want to enjoy
their sexual fantasies alone, because even a programmable robot is going to be just an
annoying, unsuccessful intermediary--not to mention a very difficult thing to hide in an
underwear drawer.

And as far as real sex goes, no high-tech device can ever replace a living, breathing person.
Because even if a machine felt real and looked real, it could never reproduce the real thrill
of sex: knowing that another being is freely giving herself to you and that at least for a few
minutes, you're not alone.

Now, why couldn't I come up with something like that when I had Alexa on the phone?

Will We Still Turn Pages?
Will society go the way of the book, the way of the screen or, via the miracle of e-ink, both
ways at once?
BY KEVIN KELLY

Washington and Wall Street are bedeviled by a specter--the specter of dotcom start-ups and
the rise of the nerd class. The fantastic wealth of the new economy and the renegade
attitude of its Netizens are deeply upsetting to the old order. The result is a cultural battle
illustrated most dramatically by the Microsoft case. Surprisingly, the outcome of this
conflict has a lot to say about whether we will still turn pages as we read.

On one side of this clash we have People of the Book. These are the good people who make
newspapers, magazines, the doctrines of law, the offices of regulation and the rules of
finance. They live by the book, by the authority derived from authors, and are centered at
the power points of New York City and Washington. The foundation of this culture is
ultimately housed in texts. They are all on the same page, so to speak.

On the other side (and on an axis that runs through Hollywood and Redmond, Wash.) we
have the People of the Screen. The People of the Screen tend to ignore the classic logic of
books; they prefer the dynamic flux of the screen. Movie screens, TV screens, computer
screens, Game Boy screens, telephone screens, pager screens and Day-Glo megapixel
screens we can only imagine today, plastered on every surface.

Screen culture is a world of constant flux, of endless sound bites, quick cuts and half-baked
ideas. It is a flow of gossip tidbits, news headlines and floating first impressions. Notions
don't stand alone but are massively interlinked to everything else; truth is not delivered by
authors and authorities but is assembled by the audience. Screen culture is fast, like a 30-
sec. movie trailer, and as liquid and open-ended as a website.

In this world, code--as in computer code--is more important than law, which is fixed in
texts. Code displayed on a screen is endlessly tweakable by users, while law is not. Yet
code can shape behavior as much as, if not more than, law.

On a screen, words move, meld into pictures, change color and perhaps even meaning.
Sometimes there are no words at all, only pictures or diagrams or glyphs that may be
deciphered into multiple meanings. This is terribly unnerving to any civilization based on
text logic.

The People of the Book fear that the page (and reading and writing) will die. Who will
adhere to the linear rationality found in books, new and old? Who will obey rules if books
of laws are diminished or replaced by lines of code? Who will turn nicely bound pages
when everything is available (almost free) on flickering screens? Perhaps only the rich will
read books on paper. Perhaps only a few will pay attention to the wisdom on their pages.

They need not fear. The People of the Screen (working at places like E Ink and Xerox) are
creating thin films of paper and plastic that hold digital ink. A piece of paper then becomes
a paper screen: one minute it has a poem on it, the next it has the weather. Bind a hundred
of these digital pages between covers, and you have a book that can change its content yet
still be read like a book. You turn the pages (a way to navigate through text that is hard to
improve), and when you are finished, you slap it into a holster to fill it with another text.
The ordinary reader might have a collection of several dozen leatherbound and different-
size book containers, ones that mold themselves to the reader's hands and habits over many
readings. "This story is formatted to be viewed on an oversize L book," it says on the first
screen, and so you pick your favorite oversize Moroccan book shell and sit to read in
luxurious ease.

Or digital ink can be printed together with the circuits for wireless transmission onto a
generously sized tabloid sheet tough as Tyvek. The tabloid sits on the table all day, as news
articles come and go. All the typographic conventions of a newspaper or magazine are
obeyed, but the paper doesn't come and go. It stays.

The page will not die. It is too handy and highly evolved. The same flat sheet of enhanced
paper is so nimble, in fact, that there is no reason why a movie could not be played on it as
well. Drama, music videos, great epics in full color all dance across this new page. The
eternal sheaf becomes both book and TV screen. Indeed the resolution will be fine enough
to read words floating in, around and through cinematic images. We see the beginnings of
that already on some websites where image and text intermingle. Is this a movie or an
essay? We don't know.

In the end we will have TV that we read and books that we watch. The People of the Book
will keep turning their pages, and the People of the Screen will keep clicking their screens.
All on the same piece of paper. Long live the page!

Will Smell-O-Vision Replace Television?
Don't hold your breath, or your nose. But broadband and new gadgets promise to widen the
pipeline to your set (and your wallet)
BY JAMES PONIEWOZIK

If you watch leather-lunged TV megachef Emeril Lagasse, you've probably heard him
lament the limitations of his medium: "Oooh! I can't wait till we get Smell-o-Vision so you
can smell this at home!"

Well, bad news, Em. Despite the old Bugs Bunny cartoon (in which a futuristic headline
proclaims SMELL-O-VISION REPLACES TELEVISION!), scented TV is still unlikely to
be in our parlors 20 years from now. (Emeril, alas, very likely will.) The reason is less
technical than economic. Smell can theoretically be digitized, and there are researchers
working to do exactly that. But Smell-o-Vision was tried years ago--with varying degrees
of technical sophistication--in movie theaters. Now it's gone. Shockingly, audiences have
failed to protest.

"Visionaries," says TV analyst Josh Bernoff of Forrester Research, "think about what is
possible without thinking about what will actually make good business." Say you were
descrying the future TV of 2000, 20 years ago. You might have predicted a remarkable,
crisp, high-tech TV display. You might have called it, say, "HDTV." And you would have
been right. Except HDTV is probably not in your living room. As with the De Lorean car,
the mere existence of a $5,000 or $10,000 TV set isn't sufficient to persuade consumers to
go into hock to get a sharper look at Dennis Franz's butt. Instead, people have stuck with
pretty much the same box they had in 1980, with less wood paneling but more channels and
more whatsits plugged in.

It is in those whatsits that the future of TV lies. Cable and telecommunications companies
(among them Time's parent, Time Warner) are racing to wire homes with high-speed data
connections, similar to today's cable in capacity except--a big except--that they allow two-
way communication and, above all (this being America), commerce. Meanwhile, our more
adventurous neighbors are starting to install digital TV "set-top" peripherals, from WebTV
to ReplayTV and TiVo, that allow them to surf the Web onscreen, interact with
programming, store TV shows on hard discs or even--horrifying to broadcasters--skip all
those commercials.

Things get really interesting when fast connections and smart set-top devices mate, turning
your TV into what is essentially a cheap computer (or your computer into a pricey TV). In
the past, broadcasting always involved watching what the networks offered when they
offered it. In the future, you'll tell your TV to capture your favorites--49ers games, Happy
Days reruns--whenever and wherever they're on, to watch on your own schedule. Or maybe
your TV will tell you what to watch. Using the same sort of software Amazon uses to
custom-recommend books, your TV will offer a "channel" for you and each family
member. This could be disastrous for big networks: you may no more know, or care,
whether your favorite show is on NBC or A&E than you know
whether your favorite movie was made by Fox or Paramount. The nets could lose brand
recognition--and big money.
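
The "channel built just for you" rests on the same kind of similarity scoring that book
recommenders use. Here is a minimal sketch in Python, with invented shows and crude genre
vectors standing in for the far richer viewing data a real system would mine:

    # Hypothetical data: each show as a rough (comedy, sports, drama, cooking) vector.
    from math import sqrt

    shows = {
        "49ers game":      (0.0, 1.0, 0.0, 0.0),
        "Happy Days":      (1.0, 0.0, 0.3, 0.0),
        "Cajun cooking":   (0.3, 0.0, 0.0, 1.0),
        "Courtroom drama": (0.0, 0.0, 1.0, 0.0),
    }

    def cosine(a, b):
        # Similarity of two genre vectors, from 0 (unrelated) to 1 (identical).
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def recommend(liked):
        # Rank unseen shows by their best similarity to anything the viewer liked.
        scores = {name: max(cosine(vec, shows[l]) for l in liked)
                  for name, vec in shows.items() if name not in liked}
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend({"Happy Days"}))   # dramas rank ahead of sports for this viewer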

When we can bypass the networks and easily zap ads, traditional commercials will become
less and less profitable. This means you'll have to pay for TV with either money or
information. Pay-per-view--for movies and sports--will become more common, as will
subscriptions. (Blockbuster has already made a deal with TiVo to beam movies to set-top
boxes.) When your TV becomes a massive video archive, you'll even pay for reruns.

Fear not, though: ads won't disappear. Product placements will multiply--including digitally
created insertions that can be changed with every rerun. And as your TV becomes more of
a communications device than a broadcasting device, you'll subsidize your entertainment
bill (as you will your phone bill, your Internet service bill and maybe your car payment) by
sharing valuable demographic information and agreeing to receive precision-targeted ads.

So take a whiff of tomorrow's TV. You're unwinding with, say, a lively cooking show
hosted by a hyperactive Louisianian. As you chat online with other home cooks and
download a recipe (with click-to-buy ingredients list, compiled in consultation with your e-
frigerator), your TV points out that your host's handsome, copper saute pan is 25% off at
Williams-Sonoma. Meanwhile, as the dish comes to a tantalizing simmer--at about 6:45
p.m., just when your family of four usually gets serious about ordering takeout--the TV
suggests clicking to order etouffee (and nonspicy chicken fingers for the kids) from a local
restaurant. It's a creepy intrusion, sure. But that etouffee looks mighty tasty.

So buck up, Emeril. We may never get Smell-o-Vision to the viewers. Sell-o-Vision,
though, is coming with a vengeance.

Will I Still Be Addicted To Video Games?
As technology gets more advanced, so does playtime. Prepare to plug in your bioport and
enter a universe that's better than real life
BY CHRIS TAYLOR
Jeff Bridges got zapped into it in "TRON." Keanu Reeves reached it by means of a red pill
in "The Matrix." In Neal Stephenson's novel "Snow Crash" --a cult classic in Silicon
Valley--our hero, Hiro Protagonist, goes there wearing goggles and a pair of virtual-reality
gloves. It's where I expect to be spending my evenings in the twilight of my life, without
ever leaving the comfort of my sofa. And--who knows?--maybe I'll meet you there.

What is it? It's a clean, well-lighted universe of one's own, built by computer but
experienced through as many senses as you can afford. It's a perfectly legal mind-blowing
experience to rival Timothy Leary's best trips, and it makes today's PlayStations seem as
primitive a pastime as bobbing for apples in a barrel.

You could call it a game, although that word will cease to have any real meaning when this
alternative world is complex enough to contain its own baseball leagues and its own
population of children playing hopscotch on the streets. The people we meet there will look
and feel almost as real as the ones we encounter during our waking lives. If you've chosen a
multiplayer universe, they may even be those same working stiffs, except that they will
probably inhabit designer bodies that are a lot more interesting than their own. Our quests,
our goals will be of our choosing and will almost certainly not involve corporate mission
statements or our boss's action-agenda items. Here, our presence is of primary importance,
and the universe truly does revolve around us.

Impossible? No, inevitable. Three important trends are on a collision course: the growing
power and wealth of the game industry (the 21st century's answer to Hollywood),
exponential advances in silicon and biotechnology, and a demographic shift that will put
purchasing power in the hands of a generation that was brought up on video games and sees
no point in putting them away. Already, the majority of people who play on PCs and video
consoles are over 18. Tens of billions of dollars are being spent by the likes of Microsoft
and Sony to ensure that they'll still be customers at 81. The odds are in the video-games
makers' favor; even Big Tobacco doesn't have a product this addictive.

How will we travel to our alternative universes? The most exciting possibility is to use
some form of biologically engineered computer wired directly into our heads--an exobrain
programmed to provide a better, more mathematically intricate imagination. In David
Cronenberg's recent movie "eXistenZ," squidgy pink packages called bioports plug directly
into special jacks at the base of players' spines. The upshot is rather like what happens to
your TV when you connect it to a VCR and press PLAY. Visual and aural information
from the real world is overridden; your bioport provides all the sensory stimuli you need.
Technically, it's just a question of getting the right hookups. If there's anything we already
know from playing games, it's that our brains eagerly adapt our physical responses to the
onscreen action. Next time your six-year-old plays Pokemon on his Game Boy or your
teenagers blast away at their pals on Quake, watch what happens to their breathing and
blink rate. One steadily increases; the other drops away to almost nothing. Their bodies are
getting ready to fight.

Not that there's anything unusual about this; play is one of our most natural activities. Like
dreaming, it helps us prepare for situations we might be forced to face in real life. (The U.S.
Army already uses a Quake-style battle simulator to increase the weapons-firing
responsiveness of its troops.) Imagine how you--or your business--could use a universe that
mimicked the real one down to the slightest detail. Worried about asking your boss for a
raise? Plug in the bioport, and see how a character like him might react. Want to see how
well you could defend your home against an armed intruder? Enter your specs and have a
go. Wary of giving your teenager the car keys? Let him drive around a virtual version of
your hometown first.

"More and more, games are going to be about the player telling a story," says Will Wright,
creator of this year's hottest PC game, The Sims. "It's up to us, the designers, to give them
rich, open-ended environments." The Sims is very much in that do-anything vein; your aim
is to micromanage the happiness of a suburban household, right down to the color of the
roof tiles and the frequency of the bathroom breaks. In the future you may simply drop
yourself into the Sims' house and hang out with them for hours at a time--a life away from
life, a home away from home. Urban dwellers will escape their cramped confines by
building vast Sims mansions in the cybercountryside; rural folk will get over their city envy
by constructing a city of their own, brick by virtual brick.

What will make or break this scenario is the level of artificial intelligence found in the Sims
themselves. After all, our brains were built to enjoy levels of social interaction higher than
simply killing our opponents. We want to talk to them, to gossip, scheme and plot. Building
computer characters that can pass our personal Turing tests is no easy task--but if anyone
has the money and the motivation to fund neural network research, it's the game industry.
"True artificial intelligence will come out of games first," says veteran designer Peter
Molyneux, who should know. His latest epic, titled Black and White--to be released this
fall--features creatures so complex they can go out and build websites of their own free
will.

Personally, I'm planning to get my bioport operation just as soon as someone designs a total
sensory version of the classic empire-building game Civilization. The task of supervising
the entire span of human development--from cracking the whip at the construction of the
pyramids to spearheading the colonization of outer space--should be enough to keep me
occupied long past my 81st birthday. As Molyneux puts it, "What we're talking about is the
ultimate drug. If I can build a world where you can smell a rose and be a god, would you
ever want to come back?" Not me. In my dotage, I'll happily resign myself to the 21st
century equivalent of a crack den with a pink squidgy thing strapped to my spine. Move
over, Jeff, Keanu and Hiro--I'm coming in.

Will My PC Be Smarter Than I Am?
Once we learn how to map the brain and make computers fast enough to simulate it, all bets
are off
BY RAY KURZWEIL

For starters, you should realize that as soon as a computer achieves a level of intelligence
comparable to human intelligence, it will necessarily soar past it. A key advantage of
nonbiological intelligence is that machines can easily share their knowledge. If I learn
French, I can't readily download that learning to you. My knowledge, skills and memories
are embedded in a vast pattern of neurotransmitter concentrations and interneuronal
connections and cannot be quickly accessed or transmitted.

But when we construct the nonbiological equivalents of human neuron clusters, we will
almost certainly include built-in, quick-downloading ports. When one computer learns a
skill or gains an insight, it will be able to share that wisdom immediately with billions of
other machines.
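
A toy illustration of that advantage: once a skill is just data, "teaching" a thousand
machines is a copy operation. The vocabulary table below is an invented stand-in for
whatever a real machine might actually learn:

    import json

    # Machine A spends time "learning French" (here, a trivial lookup table).
    machine_a = {"skill": "french", "weights": {"chat": "cat", "chien": "dog"}}

    # Downloading the skill is one serialization step...
    payload = json.dumps(machine_a)

    # ...and every receiving machine acquires it instantly, study-free.
    fleet = [json.loads(payload) for _ in range(1000)]
    print(fleet[42]["weights"]["chat"])   # -> cat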

Let's consider the requirements for a computer to exhibit human-level intelligence, by
which I include all the diverse and subtle ways in which humans are intelligent--including
musical and artistic aptitude, creativity, the ability to physically move through the world
and even to respond to emotion. A necessary (but not sufficient) condition is the requisite
processing power, which I estimate at about 20 million billion calculations per sec. (we
have on the order of 100 billion neurons, each with some 1,000 connections to other
neurons, with each connection capable of performing about 200 calculations per sec.). As
Moore's law reaches its limit and computing power no longer doubles roughly every 12 to
18 months (by my reckoning, around 2019), conventional silicon chips may not be able to
deliver that kind of performance. But each time one computing technology has reached its
limit, a new approach has stepped in to continue exponential growth (see "What Will
Replace Silicon?" in this issue). Nanotubes, for example, which are already functioning in
laboratories, could be fashioned into three-dimensional circuits made of hexagonal arrays
of carbon atoms. One cubic inch of nanotube circuitry would be 1 million times more
powerful than the human brain, at least in raw processing power.
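
Kurzweil's estimate is easy to check with the figures he gives above:

    # Back-of-the-envelope brain capacity, using the essay's own numbers.
    neurons = 100e9      # ~100 billion neurons
    connections = 1000   # ~1,000 connections per neuron
    rate = 200           # ~200 calculations per second per connection

    total = neurons * connections * rate
    print(f"{total:.0e} calculations per second")   # 2e+16, i.e. 20 million billion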

More important, however, is the software of intelligence. The most compelling scenario for
mastering that software is to tap into the blueprint of the best example we can get our hands
on: the brain. There is no reason why we cannot reverse-engineer the human brain and copy
its design. We can peer inside someone's brain today with noninvasive scanners, which are
increasing their resolution with each new generation. To capture the salient neural details of
the human brain, the most practical approach would be to scan it from inside. By 2030,
"nanobot" technology should be available for brain scanning. Nanobots are robots that are
the size of human blood cells or even smaller (see "Will Tiny Robots Build Diamonds One
Atom at a Time?"). Billions of them could travel through every brain capillary and scan
neural details up close. Using high-speed wireless connections, the nanobots would
communicate with one another and with other computers that are compiling the brain-scan
database.

Armed with this information, we can design biologically inspired re-creations of the
methods used by the human brain. After the algorithms of a region are understood, they can
be refined and extended before being implemented in synthetic neural equivalents. For one
thing, they can be run on computational systems that are more than 10 million times faster
than the electrochemical processes used in the brain. We can also throw in the methods for
building intelligent machines that we already understand. The computationally relevant
aspects of individual neurons and neural structures are complicated but not beyond our
ability to model accurately. Scientists at several laboratories around the world have built
integrated circuits that match the digital and analog information-processing characteristics
of biological neurons, including clusters of hundreds of neurons.

By the third decade of the 21st century, we will be in a position to create highly detailed
maps of the pertinent features of neurons, neural connections and synapses in the human
brain--including all the neural details that play a role in the behavior and functionality of
the brain--and to re-create these designs in suitably advanced neural computers. By that
time, computers will greatly exceed the basic computational power of the human brain. The
result will be machines that combine the complex and rich skills of humans with the speed,
accuracy and knowledge-sharing ability that machines excel in.

How will we apply technology that is more intelligent than its creators? One might be
tempted to respond, "Carefully!" But let's take a look at some examples.

The same nanobots that will scan our brains will also be able to expand our thinking and
our experiences. Nanobot technology will provide fully immersive, totally convincing
virtual reality. By taking up positions in close physical proximity to every interneuronal
connection coming from all our sense organs (e.g., eyes, ears, skin), the nanobots can
suppress all the inputs coming from the real senses and replace them with the signals that
would be appropriate for a virtual environment. By 2030, "going to a website" will mean
entering a virtual-reality environment. The implant will generate the streams of sensory
input that would otherwise come from our real senses, thus creating an all-encompassing
virtual environment that will respond to the behavior of our own virtual body (and those of
others) in that environment.

This technology will enable us to have virtual-reality experiences with other people--or
simulated people--without requiring any equipment not already in our heads. Further, this
virtual reality will not be the crude experience one can sample in today's arcade games. It
will be as realistic and detailed as real reality. Instead of phoning a friend, you can meet in
a virtual cafe in Paris or take a walk on a virtual Mediterranean beach, and it will seem very
real. People will be able to have any type of experience with anyone--business, social,
romantic, sexual--without having to be in the same place.

Nanobot technology will be able to expand our minds in virtually any imaginable way. Our
brains today are relatively fixed in design. Although we do add patterns of interneuronal
connections and neurotransmitter concentrations as a normal part of the learning process,
the current overall capacity of the human brain is highly constrained, restricted to a mere
100 trillion connections. Since the nanobots will be communicating with one another over a
wireless local area network, they can create any set of neural connections, break existing
connections (by suppressing neural firing) and create new hybrid (i.e., combined biological
and nonbiological) networks, as well as add powerful new forms of nonbiological
intelligence. Brain implants based on distributed intelligent nanobots will massively expand
our memory and otherwise vastly improve all our sensory, pattern-recognition and
cognitive abilities.

We are already using surgically installed neural implants for conditions such as deafness
and Parkinson's disease. In 2030 nanobots could be introduced without surgery, essentially
by just injecting or even swallowing them. They could also be directed to leave, so the
process should be easily reversible. They will be programmable, in that they will be able to
provide virtual reality one minute and a variety of brain extensions the next. They will be
able to change their configuration and alter their software. Perhaps most important, they
will be massively distributed and therefore can take up billions or trillions of positions
throughout the brain.

So will computers be smarter than humans? It depends on what you consider to be a
computer and what you consider to be human. By the second half of the 21st century, there
will be no clear distinction between the two. On the one hand, we will have biological
brains greatly expanded through distributed nanobot-based implants. On the other, we will
have fully nonbiological brains that are copies of human brains but vastly extended. And
we will have a myriad of other varieties of intimate connection between human thinking
and the technology it has fostered.

Although some contemporary observers consider the prospect of merging with our
technology disconcerting, I believe that by the time we get there, most of us will find it
very natural to expand in this way our experiences, our minds and our possibilities.

Will we plug chips into our brain?
The writer who coined the word cyberspace contemplates a future stranger than his science
fiction
BY WILLIAM GIBSON

Maybe. But only once or twice, and probably not for very long.

With their sharp black suits and their surgically implanted silicon chips, the cyberpunk hard
guys of '80s science fiction (including the characters in my early novels and short stories)
already have a certain nostalgic romance about them. These information highwaymen were
so heroically attuned to the new technology that they laid themselves open to its very
cutting edge. They became it; they took it within themselves.

Meanwhile, in case you somehow haven't noticed, we are all becoming it; we seem to have
no choice but to take it within ourselves.

In hindsight, the most memorable images of science fiction often have more to do with our
anxieties in the past (that is to say, the writer's present) than with those singular and
ongoing scenarios that make up our life as a species--our real future, our ongoing present.

Many of us, even today, or most particularly today, must feel as though we already have
silicon chips embedded in our brains. Some of us, certainly, are not entirely happy with that
feeling. Some of us must wish that ubiquitous computation would simply go away and
leave us alone. But that seems increasingly unlikely.

That does not, however, mean that we will one day, as a species, submit to the indignity of
the chip--if only because the chip is likely to shortly be as quaint an object as the vacuum
tube or the slide rule.

From the viewpoint of bioengineering, a silicon chip is a large and rather complex shard of
glass. Inserting a silicon chip into the human brain involves a certain irreducible inelegance
of scale. It's scarcely more elegant, relatively, than inserting a steam engine into the same
tissue. It may be technically possible, but why should we even want to attempt such a
thing?

I suspect that mainstream medicine and the military will both find reasons for attempting
such a thing, at least in the short run, and that medicine's reasons may at least serve to
counter some disability, acquired or inherited. If I were to lose my eyes, I would quite
eagerly submit to some sort of surgery that promised a video link to the optic nerves. (And
once there, why not insist on full-channel cable and a Web browser?) The military's reasons
for chip insertion would probably have something to do with what I suspect is the
increasingly archaic job description of "fighter pilot," or with some other aspect of
telepresent combat, in which weapons in the field are remotely controlled by distant
operators. At least there's still a certain macho frisson to be had in the idea of embedding a
tactical shard of glass in your head, and crazier things, really, have been done in the name
of king and country.

But if we do it at all, I doubt we'll be doing it for very long, as various models of biological
and nanomolecular computing are looming rapidly in view. Rather than plug a piece of
hardware into our gray matter, how much more elegant to extract some brain cells, plop
them into a Petri dish and graft on various sorts of gelatinous computing goo. Slug it all
back into the skull and watch it run on blood sugar, the way a human brain's supposed to.
Get all the functions and features you want, without that clunky-junky 20th century
hardware thing. You really don't need complicated glass to crunch numbers, and computing
goo probably won't be all that difficult to build. (The trickier aspect here may be turning
data into something brain cells can understand. If you knew how to get brain cells to
manage pull-down menus, you'd probably know everything you needed to know about
brain cells.)

Our hardware, I think, is likely to turn into something like us a lot faster than we are likely
to turn into something like our hardware. Our hardware is evolving at the speed of light,
while we are still the product, for the most part, of unskilled labor.

But there is another argument against the need to implant computing devices, be they glass
or goo. It's a very simple one, so simple that some have difficulty grasping it. It has to do
with a certain archaic distinction we still tend to make, a distinction between computing
and "the world." Between, if you like, the virtual and the real.

I very much doubt that our grandchildren will understand the distinction between that
which is a computer and that which isn't.

Or to put it another way, they will not know "computers" as a distinct category of object or
function. This, I think, is the logical outcome of genuinely ubiquitous computing, of the
fully wired world. The wired world will consist, in effect, of a single unbroken interface.
The idea of a device that "only" computes will perhaps be the ultimate archaism in a world
in which the fridge or the toothbrush is potentially as smart as any other object, including
you, a world in which intelligent objects communicate, routinely and constantly, with one
another and with us.

In this world, there may be no need for the physical augmentation of the human brain, as
the most significant, and quite unthinkably powerful, augmentation will have taken place
beyond geographic boundaries, via distributed processing. You won't need smart goo in
your brain, because your fridge and your toothbrush will be very smart indeed, enormously
smart, and they will be there for you, constantly and always.

So it won't, I don't think, be a matter of computers crawling buglike into the most intimate
chasms of our being, but of humanity crawling buglike out into the mingling light and
shadow of the presence of that which we will have created, which we are creating now, and
which seems to me to be in the process of re-creating us.

Will Robots Rise Up and Demand Their Rights?
BY RODNEY BROOKS

Could a robot ever really want anything? The hard-core reductionists among us, myself
included, think that in principle this must be possible. Humans, after all, are machines made
up of organic molecules whose interactions can all be aped (we think) by sufficiently
powerful computers.

So how well are we doing at creating living, wanting robots? We are making progress, both
from the bottom up and from the top down. At one end, researchers are taking apart the
simplest living bacteria--mycoplasmas--whose genome can be stored in less than a
quarter of a megabyte, to better understand the process of life at the molecular level.
Meanwhile, computer programs that reproduce and evolve are starting to exhibit behaviors
we expect from simple living creatures, such as interaction with complex environments and
sexual reproduction. Artificial life forms that "live" inside computers have evolved to the
point where they can chase prey, evade predators and compete for limited resources.
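
The engine behind such digital creatures is a plain evolve-and-select loop. The sketch
below is a toy version: its "fitness" function is a made-up stand-in for competing over
limited resources, not a model of any real ecology:

    import random

    def fitness(genome):
        # Invented criterion: genomes score best when their values sum to 10.
        return -abs(sum(genome) - 10)

    population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:25]            # the fitter half persists
        children = []
        while len(children) < 25:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(8)
            child = a[:cut] + b[cut:]          # "sexual reproduction": crossover
            child[random.randrange(8)] += random.gauss(0, 0.1)   # mutation
            children.append(child)
        population = survivors + children

    print(round(sum(max(population, key=fitness)), 2))   # drifts toward 10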

At the other end, there has been a renaissance of interest in robots that walk like humans,
talk like humans, detect human faces and have the beginnings of human social responses.

Of course, the robots we are building now don't have the physical dexterity of a year-old
baby and don't yet recognize that they are the same robot today that they were yesterday.
At best they are zombies stuck in the present, surrounded by a sea of unrecognizable shapes
and colors.

Still, the direction is clear: robots are becoming more humanlike. Barring a complete failure
of the mechanistic view of life, these endeavors will eventually lead to robots to which we
will want to extend the same inalienable rights that humans enjoy.

We should not forget, however, that we will also want robots to man the factories and do
our chores. We do not have ethical concerns about our refrigerators working seven days a
week without a break or even a kind word. As we develop robots for the home, hospitals
and just about everywhere else, we will want them to be similarly free of ethical issues.

So expect to see multiple species of robots appearing over the next few decades. There will
be those that will be our appliances as well as those toward which we will feel more and
more empathy. One day the latter will call our bluff and ask us to treat them as well (or as
badly) as we treat one another. If they are unlucky, their prayers just may be answered.

Will Everything Be Digital?
What happens in a world in which atoms are replaced by bits? In which everything that was
wired becomes wireless, and vice versa?
BY NICHOLAS NEGROPONTE

The important thing to remember is that bits are bits. In the digital world there are no
movies or magazines or pieces of music. There are just 1s and 0s, for which we did not
even have a name until 1946 when Princeton statistician John Tukey concatenated the
words binary and digit into the term bit.

For the next 25 years, bits were of interest only to a few specialized members of the
scientific community. But of late, bits have become important to everybody, because we
can represent anything as bits--anything. In the not-too-distant future, we'll be able to
describe the human body with bits and try out new drugs on these models rather than on
living beings.

Books and magazines and newspapers are not the meaningful element. What matters is
words. And words are not going anywhere; they are one of the most powerful and efficient
forms of human communication. A few words--i.e., a few bits--can create religions, can
make war or peace. Those words when presented to the eye (vs. the ear) are presented as
text. In the past we could render text only by printing it on paper, carving it in stone,
writing it with smoke.

Today we can do something new. We can reduce the text to bits (which we cannot see or
hear), take this new representation and store it, manipulate it or transmit it, and then later
render it on a computer display or a piece of paper. The same is true of music, movies, still
photographs. While this is widely recognized, few people have a sense of the quantity of
bits needed to achieve one representation vs. another. For example, when you read a book,
you consume (if you read as fast as I do) about 3 million bits an hour. When you look at
television, you consume 3 million a second. Obviously, all bits are not created equal.
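
Making the comparison explicit, with Negroponte's own rough figures:

    # Reading versus television, in raw bits.
    book_bits_per_hour = 3_000_000   # ~3 million bits per hour of text
    tv_bits_per_second = 3_000_000   # ~3 million bits per second of video

    ratio = tv_bits_per_second * 3600 / book_bits_per_hour
    print(ratio)   # 3600.0: an hour of TV moves 3,600 times the bits of an hour of text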


Bandwidth is the ability to move bits. Broadband is the ability to move a lot of bits per
second. Though everybody seems to do it, likening bandwidth to the diameter of a pipe is
misleading, because our consumption of bits is not analogous to drinking from a garden or
fire hose. We don't necessarily consume bits in a continuous fashion (like water), and even
if we did, that does not perforce mean our computers have to receive them that way.

One of the most profound changes afforded by the digital world is the ability to be
asynchronous, in the smallest and largest time scales. In the smallest sense, this allows us to
use efficiently our channels of communications; for example, interleaving people's
conversations--packetizing them--so that many people share the same channel without
being aware that they are. In the larger sense, we can expand, contract and shift our
personal time in new ways, leaving and receiving messages at mutual convenience. On a
yet larger scale, social behavior will also become more asynchronous, with all of us moving
in much less lockstep rhythm and with more personal cadence than we do today. Our great-
great-grandchildren will find it very odd that their ancestors commuted in traffic or huddled
around a TV set at a particular time.
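
Packetizing is easier to show than to describe. In the toy sketch below, two
conversations share one channel, yet each receiver recovers its own unbroken stream;
the senders and messages are, of course, invented:

    from itertools import zip_longest

    alice = ["A1", "A2", "A3"]
    bob = ["B1", "B2", "B3", "B4"]

    # Interleave small packets from both conversations onto one channel.
    channel = []
    for a, b in zip_longest(alice, bob):
        if a: channel.append(("alice", a))
        if b: channel.append(("bob", b))

    # Each side demultiplexes by sender tag, unaware the channel was shared.
    print([p for who, p in channel if who == "bob"])   # ['B1', 'B2', 'B3', 'B4']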

But in this new world, more bandwidth is not necessarily good, or even what we want.
And, when we do want it, it is not necessarily in order to sit in front of a device and
consume a few billion bits an hour. It is more likely that we will want a million bits in a
fraction of a second followed by a pause. Our future consumption of bits will be very
conversational and bursty.

Moreover, the dominant user of the Net in the future will not be people at all. It will be
machines talking to one another in ways we cannot imagine. For them, trickle charging
information or blasting at a billion bits a second are options not directly meaningful to
people. Increasingly, these bits will arrive wirelessly.


Plugs are the past. The need to be tethered is disappearing for two reasons: better battery
technologies (and less power-hungry devices) and improved use of radio frequencies, so-
called RFs. Eventually, everything electric will talk with everything else electric, using
very fine-grained, wireless communications. Ultimately, all long-distance traffic will be
fiber and all short-distance traffic will be RF.

Today you may have one or two dozen wireless devices (radio, cell phone, TV, pager, car
key, a gaggle of remote-control units). Tomorrow, you will probably have thousands of
them.

One place you'll find these micro wireless devices will be on packaging, when RF
identification tags replace the universal product code--those little vertical bars read by
supermarket checkout scanners. With emerging print technologies, it will be possible to
print active tags directly onto containers--tiny computers that broadcast their ID, price and
other characteristics (such as the expiration date). A refrigerator or a medicine cabinet can
thus know what is inside it. A container could be aware of the absence or presence of each
pill. In the future, all these inanimate objects will be able to talk to one another and pass
messages among themselves.
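
Nobody yet knows exactly what such a tag will broadcast, but a plausible guess might
look like this sketch; the field names are illustrative, not any real standard:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TagBroadcast:
        product_id: str      # what the container says it is
        price_cents: int     # what it cost
        expires: date        # when it stops being food

    # The refrigerator "knows what is inside it" simply by listening.
    contents = [TagBroadcast("milk-1L", 199, date(2000, 7, 1)),
                TagBroadcast("eggs-12", 249, date(2000, 7, 9))]
    today = date(2000, 7, 3)
    print([t.product_id for t in contents if t.expires < today])   # ['milk-1L']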


Computers are coming out of the box. Our grandchildren will look back at the personal
computer as a quaint artifact, as common tomorrow as an ice chest is today.


It has taken technology forecasters far too long to realize that the Internet will have a billion
users and carry $1 trillion worth of e-commerce within a year. Now they seem to get it,
but they are making a new mistake. They have got the pie right, but the distribution wrong.

Typically, the future of the Internet is described as being 50% U.S., 40% Europe, 5% Japan
and Korea, and 5% the rest of the world. Boy, is that ever wrong! Be prepared to be very
surprised by the rest of the world. Within three years, the developing world will represent
more than 50% of the Web. Three years after that, the most widely used language on the
Internet will be Chinese.

One reason we miscalculate the developing world's ability to leverage the power of the
Internet is that we underestimate the power of imperatives. A single connection can be
shared by many people and provide access to a whole world of libraries to a school that
previously did not even have books. The one-on-one use of computers we enjoy in the U.S.
is not the only way to be connected. For some reason we understand that a small and poor
company can suddenly compete with big and rich ones, but we do not realize that a small
and poor country can compete on the world market with big and rich ones too. And they
will, to everybody's surprise. You watch.

Will Tiny Robots Build Diamonds One Atom At A Time?
That's just for starters. Nanobots will also make ships, shoes, steaks--and more nanobots.
The trick is getting them to stop
BY MICHAEL D. LEMONICK

On its face, the notion seems utterly preposterous: a single technology so incredibly
versatile that it can fight disease, stave off aging, clean up toxic waste, boost the world's
food supply and build roads, automobiles and skyscrapers--and that's only to start with. Yet
that's just what the proponents of nanotechnology claim is going to be possible, maybe even
before the century is half over.

Crazy though it sounds, the idea of nanotechnology is very much in the scientific
mainstream, with research labs all over the world trying to make it work. Last January
President Clinton even declared a National Nanotechnology Initiative, promising $500
million for the effort.

In fact, nanotechnology has an impeccable and longstanding scientific pedigree. It was back
in 1959 that Richard Feynman, arguably the most brilliant theoretical physicist since
Einstein, gave a talk titled "There's Plenty of Room at the Bottom," in which he suggested
that it would one day be possible to build machines so tiny they would consist of just a few
thousand atoms. (The term nanotechnology comes from nanometer, or a billionth of a
meter; a typical virus is about 100 nanometers across.)

What would such a machine be good for? Construction projects, on the tiniest scale, using
molecules and even individual atoms as building blocks. And that in turn means you can
make literally anything at all, from scratch--for the altering and rearrangement of molecules
is ultimately what chemistry and biology come down to, and manufacturing is simply the
process of taking huge collections of molecules and forming them into useful objects.

Indeed, every cell is a living example of nanotechnology: not only does it convert fuel into
energy, but it also fabricates and pumps out proteins and enzymes according to the software
encoded in its DNA. By recombining DNA from different species, genetic engineers have
already learned to build new nanodevices--bacterial cells, for example, that pump out
medically useful human hormones.

But biotechnology is limited by the tasks cells already know how to carry out. Nanotech
visionaries have much more ambitious notions. Imagine a nanomachine that could take raw
carbon and arrange it, atom by atom, into a perfect diamond. Imagine a machine that
dismembers dioxin molecules, one by one, into their component parts. Or a device that
cruises the human bloodstream, seeks out cholesterol deposits on vessel walls and
disassembles them. Or one that takes grass clippings and remanufactures them into bread.
Literally every physical object in the world, from computers to cheese, is made of
molecules, and in principle a nanomachine could construct all of them.

Going from the principle to the practical will be a tall order, of course, but nanomechanics
have already shown that it's possible, using tools like the scanning tunneling electron
microscope, to move individual atoms into arrangements they'd never assume in nature: the
IBM logo, for example, or a map of the world at one ten-billionth scale, or even a
functioning submicroscopic guitar whose strings are a mere 50 nanometers across. They've
also designed, though not yet built, minuscule gears and motors made of a few score
molecules. (These should not be confused with the "tiny" gears and motors, built with
millions of molecules, that have already been constructed with conventional chip-etching
technique. Those devices are gargantuan compared with what will be built in the future.)

Within 25 years, nanotechnologists expect to move beyond these scientific parlor tricks and
create real, working nanomachines, complete with tiny "fingers" that can manipulate
molecules and with minuscule electronic brains that tell them how to do it, as well as how
to search out the necessary raw materials. The fingers may well be made from carbon
nanotubes--hairlike carbon molecules, discovered in 1991, that are 100 times as strong as
steel and 50,000 times as thin as a human hair.

Their electronic brains could themselves be made from nanotubes, which can serve both as
transistors and as the wires that connect them. Or they may be made out of DNA, which
can be altered to carry instructions that nature never intended. Armed with the proper
software and sufficient dexterity, a nanorobot, or nanobot, could construct anything at all.

Including copies of itself. To accomplish any sort of useful work, you'd have to unleash
huge numbers of nanomachines to do every task--billions in every bloodstream, trillions at
every toxic-waste site, quadrillions to put a car together. No assembly line could crank out
nanobots in such numbers.

But nanomachines could do it. Nanotechnologists want to design nanobots that can do two
things: carry out their primary tasks, and build perfect replicas of themselves. If the first
nanobot makes two copies of itself, and those two make two copies each, you've got a
trillion nanobots in no time, each one operating independently to carry out a trillionth of the
job.
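
"In no time" can be made precise. Taking the text literally--each bot survives and adds
two copies per generation--the arithmetic runs as follows:

    # Generations of "two copies each" needed to reach a trillion nanobots.
    count, generations = 1, 0
    while count < 1_000_000_000_000:
        count *= 3           # each bot persists and contributes two copies
        generations += 1
    print(generations)       # 26 rounds of replication suffice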

But as any child who's seen Mickey Mouse wrestle with those multiplying broomsticks in
The Sorcerer's Apprentice can tell you, there's a dystopian shadow that hangs over this rosy
picture: What if the nanobots forget to stop replicating? Without some sort of built-in stop
signal, the potential for disaster would be incalculable. A fast-replicating nanobot
circulating inside the human body could spread faster than a cancer, crowding out normal
tissues; an out-of-control paper-recycling nanobot could convert the world's libraries to
corrugated cardboard; a rogue food-fabricating nanobot could turn the planet's entire
biosphere into one huge slab of Gorgonzola cheese.

Nanotechnologists don't dismiss the danger, but they believe they can handle it. One idea is
to program a nanobot's software to self-destruct after a set number of generations. Another
is to design nanobots that can operate only under certain conditions--in the presence of a
high concentration of toxic chemicals, for example, or within a very narrow range of
temperature and humidity. You might even program nanobots to stop reproducing when too
many of their fellows are nearby. It's a strategy nature uses to keep bacteria in check.
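
Those stop signals are simple to state in code. The sketch below combines a generation
limit with a crowding check; the thresholds are invented for illustration:

    MAX_GENERATIONS = 10   # hypothetical lineage limit
    MAX_NEIGHBORS = 5      # hypothetical crowding threshold

    def may_replicate(generation, neighbors):
        # Refuse to copy once the lineage is old or the neighborhood is full.
        return generation < MAX_GENERATIONS and neighbors < MAX_NEIGHBORS

    print(may_replicate(3, 2))    # True: young lineage, sparse neighborhood
    print(may_replicate(11, 0))   # False: lineage has expired
    print(may_replicate(4, 9))    # False: quorum reached, stop copying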

None of that will help if someone decides to unleash a nanotech weapon of some sort--a
prospect that would make computer viruses seem utterly benign by comparison. Indeed,
some critics contend that the potential dangers of nanotechnology outweigh any potential
benefits. Yet those benefits are so potentially enormous that nanotech, even more than
computers or genetic medicine, could be the defining technology of the coming century. It
may be that the world will end up needing a nanotech immune system, with police
nanobots constantly at microscopic war with destructive bots.

One way or another, nanotechnology is coming.

What Will Replace Silicon
Eventually the doubling and redoubling of computer power that has driven the information
age will cease. Then what?
BY MICHIO KAKU

The economic destiny and prosperity of entire nations may rest on one question: Can
silicon-based computer technology sustain Moore's law beyond 2020? Moore's law (see
sidebar) is the engine pulling a trillion-dollar industry. It's the reason kids assume that it's
their birthright to get a video-game system each Christmas that's almost twice as powerful
as the one they got last Christmas. It's the reason you can receive (and later throw away) a
musical birthday card that contains more processing power than the combined computers of
the Allied Forces in World War II.

The secret behind Moore's law is that chipmakers double every 18 months or so the number
of transistors that can be crammed onto a silicon wafer the size of a fingernail. They do this
by etching microscopic grooves onto crystalline silicon with beams of ultraviolet radiation.
A typical wire in a Pentium chip is now 1/500 the width of a human hair; the insulating
layer is only 25 atoms thick.
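
Moore's law is simple enough to run forward. Compounding a doubling every 18 months
over the 20-year horizon this essay considers:

    # Transistor-count growth under Moore's law.
    def growth(years, months_per_doubling=18):
        return 2 ** (years * 12 / months_per_doubling)

    print(f"{growth(20):,.0f}x")   # ~10,321 times as many transistors by 2020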

But the laws of physics suggest that this doubling cannot be sustained forever. Eventually
transistors will become so tiny that their silicon components will approach the size of
molecules. At these incredibly tiny distances, the bizarre rules of quantum mechanics take
over, permitting electrons to jump from one place to another without passing through the
space between. Like water from a leaky fire hose, electrons will spurt across atom-size
wires and insulators, causing fatal short circuits.

Of course, cyber Cassandras have been tolling the bell for Moore's law for decades. As
physicist Carver Mead puts it, "The Chicken Little sky-is-falling articles are a recurring
theme." But even Mead admits that by 2014 the laws of physics may have their final
revenge. Transistor components are fast approaching the dreaded point-one limit--when the
width of transistor components reaches .1 microns and their insulating layers are only a few
atoms thick. Last year Intel engineer Paul Packan publicly sounded the alarm in Science
magazine, warning that Moore's law could collapse. He wrote, "There are currently no
known solutions to these problems."

The key word is known. The search for a successor to silicon has become a kind of crusade;
it is the Holy Grail of computation. Among physicists, the race to create the Silicon Valley
for the next century has already begun. Some of the theoretical options being explored:

What Will Replace The Internet?
First it will become wireless and ubiquitous, crawling into the woodwork and perhaps even
under our skin. Eventually, it will disappear
BY VINTON CERF

The Internet seems to have just arrived, so how can we possibly imagine what will replace
it? In truth, early versions of the Net have been around since the 1960s and '70s, but only
after the mid-1990s did it begin to have a serious public impact. Since 1994, the population
of users has grown from about 13 million to more than 300 million around the world.
About half are in North America, and most--despite significant progress in rolling out high-
speed access--still reach the Internet by way of the public telephone network.

What will the Internet be like 20 years from now?

Like the rest of infrastructure, the Internet will eventually seem to disappear by becoming
ubiquitous. Most access will probably be via high-speed, low-power radio links. Most
handheld, fixed and mobile appliances will be Internet enabled. This trend is already
discernible in the form of Internet-enabled cell phones and personal digital assistants. Like
the servants of centuries past, our household helpers will chatter with one another and with
the outside help.

At some point, the armada of devices we strap to our bodies like tools on Batman's belt will
coalesce into a smaller number of multifunction devices. Equipped with radio links, a PDA
can serve as an appliance-control remote, a digital wallet, a cell phone, an identity badge,
an e-mail station, a digital book, a pager and perhaps even a digital camera. There is sure to
be a catchy name for this all-purpose Internet-enabled thingy, perhaps Wireless Internet
Digital Gadget for Electronic Transactions, or WIDGET.

So many appliances, vehicles and buildings will be online by 2020 that it seems likely there
will be more things on the Internet than people. Internet-enabled cars and airplanes are
coming online, and smart houses are being built every day. Eventually, programmable
devices will become so cheap that we will embed them in the cardboard boxes into which
we put other things for storage or shipping. These passive "computers" will be activated as
they pass sensors and will be able to both emit and absorb information. Such innovations
will facilitate increasingly automatic manufacturing, inventory control, shipping and
distribution. Checkout at the grocery store will be fully automatic, as will payment via your
digital wallet.

The advent of programmable, nanoscale machines (see "Will Tiny Robots Build Diamonds
One Atom at a Time?" in this issue) will extend the Internet to things the size of molecules
that can be injected under the skin, leading to Internet-enabled people. Such devices,
together with Internet-enabled sensors embedded in clothing, will avoid a hospital stay for
medical patients who would otherwise be there only for observation. The speech processor
used today in cochlear implants for the hearing impaired could easily be connected to the
Internet; listening to Internet radio could soon be a direct computer-to-brain experience!

The Internet will undergo substantial alteration as optical technologies allow the
transmission of many trillions of bits per second on each strand of the Internet's fiber-optic
backbone network. The core of the network will remain optical, and the edges will use a
mix of access technologies, ranging from radio and infrared to optical fiber and the old
twisted-pair copper telephone lines. By then, the Internet will have been extended, by
means of an interplanetary Internet backbone, to operate in outer space.

How will this pervasive Internet access affect our daily lives? More and more of the world's
information will be accessible instantly and from virtually anywhere. In an emergency, our
health records will be available for remote medical consultation with specialists and
perhaps even remote surgery. More and more devices will have access to the global
positioning system, increasing the value of geographically indexed databases. Using GPS
with speech-understanding software that is emerging today, we will be able to get
directions from our WIDGETS as easily as we once got them at a filling station. One can
imagine driving in the car, asking our WIDGET for the name of the nearest Thai restaurant,
getting an answer, asking for reservations and then for directions. Indeed, the car may be
smart enough to handle the entire transaction and drive us there itself.
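
A toy version of that geographically indexed query takes only a few lines of Python. The restaurant names and coordinates below are invented, and the great-circle formula stands in for whatever spatial index a real service would use.

    # Nearest-restaurant lookup against a tiny geographically indexed list.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two GPS fixes, in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    RESTAURANTS = [  # (name, cuisine, latitude, longitude) -- sample data
        ("Thai Palace", "thai", 37.7793, -122.4193),
        ("Bangkok Garden", "thai", 37.7680, -122.4300),
        ("Roma Trattoria", "italian", 37.7850, -122.4060),
    ]

    def nearest(cuisine, here_lat, here_lon):
        candidates = [r for r in RESTAURANTS if r[1] == cuisine]
        return min(candidates,
                   key=lambda r: haversine_km(here_lat, here_lon, r[2], r[3]))

    print(nearest("thai", 37.7749, -122.4194))  # -> ("Thai Palace", ...)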

Is there any downside to a society suffused with information and the tools to process it?

Privacy will come at a premium. Enormous quantities of data about our daily affairs will
flow across the Internet, working to make our lives easier. Despite our penchant for giving
up privacy in exchange for convenience, our experiences online may make us yearn for the
anonymity of the past. Who should have access to our medical records and our financial
information, and how will that access be controlled? Will we be able to search and use the
vast information stored online without leaving trails of personal cookie crumbs scattered
across the Net? How will business transactions be taxed, and in what jurisdictions will
disputed electronic transactions be resolved? How will intellectual property be protected?
How will we prove that contracts were signed on a certain date, or that their terms and
conditions have not been electronically altered? There are technical answers for many of
these questions, but some will require international agreements before they can be resolved.
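
Some of those technical answers already exist. A cryptographic hash, for instance, detects any electronic alteration of a document; the short Python sketch below uses the standard hashlib module, while timestamping and digital signatures (which bind the hash to a date and an identity) are left out for brevity.

    # Any edit to the contract text changes its hash, exposing tampering.
    import hashlib

    contract = b"Party A agrees to deliver 100 widgets to Party B by June 1."
    digest = hashlib.sha256(contract).hexdigest()

    tampered = contract.replace(b"100", b"10")
    assert hashlib.sha256(tampered).hexdigest() != digest
    print("original digest:", digest[:16], "...")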


Perhaps even more daunting, in the face of Internet-wide virus attacks, is the realization
that we will depend in larger and larger measure on the network's functioning reliably.
Making this system of millions of networks sufficiently robust and resilient is a challenge
for the present generation of Internet engineers. Failure could portend an increasingly
fragile future. But I am an optimist. I believe we are going to live in a world abundant with
information and with the tools needed to use it wisely.

Will AOL Own Everything?
America Online could do in the early 21st century what Microsoft did at the end of the
20th: control the flow of key technologies
BY LAWRENCE LESSIG

America Online is America's largest Internet service provider. Twenty-two million
members get to the Internet through AOL. If it were a state, AOL would rank second in the
nation in population, behind California. The company has a market capitalization of $125
billion--a bit less than the GDP of Denmark. And with its proposed purchase of one of the
largest and most powerful media giants, Time Warner, many are beginning to ask, Should
we worry about AOL the way the government worries about Microsoft?

Maybe. But to see why, we've got to look at something politicians don't talk about much--
architecture.

At the core of the Internet is a principle of design described by network architects Jerome
Saltzer, David P. Reed and David Clark as "end-to-end." The principle of e2e says, Keep
the network simple, and build intelligence in the applications ("ends"). Simple networks,
smart applications--this was the design choice of the Internet's founders.

The reason was innovation. Simple networks can't discriminate; they are inherently neutral
among network uses. Innovators thus don't need to negotiate with every network owner
before a new technology is available to all. The burden on innovation is kept small;
innovation is, in turn, large.
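
The principle can be made concrete with a deliberately simplified Python sketch: the "network" below forwards opaque bytes without parsing them, so a brand-new application protocol needs no permission from the network's owner. The class names are illustrative, not a real networking API.

    # End-to-end in miniature: a dumb pipe, smart endpoints.
    class DumbNetwork:
        """Forwards payloads unchanged; it cannot discriminate by content."""
        def __init__(self):
            self.endpoints = {}

        def attach(self, address, endpoint):
            self.endpoints[address] = endpoint

        def send(self, dest, payload):
            # The network never inspects the payload; all intelligence
            # lives in the applications at the "ends."
            self.endpoints[dest].receive(payload)

    class Endpoint:
        """All application logic lives here, at the edge."""
        def receive(self, payload):
            print("got:", payload.decode())

    net = DumbNetwork()
    net.attach("B", Endpoint())
    # A protocol invented tomorrow runs with no change to the network:
    net.send("B", b"HELLO/1.0 some-brand-new-protocol")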

AOL has benefited from this neutrality. Because regulators breaking up AT&T forced the
telephone company to respect e2e neutrality, consumers of telephone service have always
had the right to choose the Internet service provider they want, not the ISP the telephone
company is pushing. This built an architecture of extraordinary competition among ISPs.
AOL, by delivering what consumers want, has prevailed in this competition.

All this may change, however, as Internet access moves from narrowband (telephones) to
broadband (predominantly cable). Cable companies are not required to respect e2e; they are
allowed to discriminate. Unlike telephone companies, they get to choose which "new ideas"
will run on cable's network. They get to block services they don't like. Already many limit
the streaming of video to computers (while charging a premium for streaming video to
televisions). And this is only the beginning. The list of blocked uses is large and growing.

This trend worries many. AOL fought restrictions when AT&T (after buying a gaggle of
cable monopolies) proposed them. But now AOL, by buying Time Warner, is buying its
own cable monopolies. And many are worried that AOL will forget its roots. Will the
temptation to build its broadband network to protect itself against unallied content and new
innovation be too great? Will AOL, like every other large-scale network that has controlled
content and conduit, pick a closed rather than an open architecture? Will AOL become
what it eats?

Compromising on the principle of e2e would weaken the Internet. It would increase the
costs of innovation. If, to deploy a new technology or the next killer application--as the
World Wide Web was in the early 1990s, or as gadgets linking the home to the Net may
someday be--you first have to negotiate with every cable interest or with every AOL,
then fewer innovations will be made. The Internet will calcify to support present-day uses--
which is great for the monopolies of today but terrible for the future that the Internet could
be.

An analogous issue is at stake in the government's case against Microsoft. Microsoft argues
that it has furthered innovation by providing a platform upon which many application
developers have been able to write code. No doubt it has--generally. But the government
attacked cases where Microsoft used its power over the platform to stifle technologies that
threatened Microsoft's monopoly. The charge was that Microsoft's strategic behavior
undermined innovation that was inconsistent with Microsoft's business.


The Microsoft case was about the platform of the 1990s--Windows. The risk that AOL
presents is to the platform of the 21st century--the Internet. In both cases, the question is
whether a strategic actor can chill innovation. With the Internet, that answer depends upon
the principles built into the Net.

AOL promises it will behave. It has been a strong defender of "open access" in the past.
But its promises are not binding, its slowness in allowing other instant-messaging services
onto its platform is troubling, and last month's squabble over access to ABC on Time
Warner's network is positively chilling. These are not signs that the principle that built the
Internet thrives.

The test will be whether AOL sticks to the principle of e2e, and if it doesn't, whether the
government will understand enough to defend the principle in response. If AOL respects
e2e in broadband, if it keeps the platform of the network neutral among new uses, if it
builds a guarantee into its architecture that innovation will be allowed and encouraged, then
we should not worry so much about what AOL owns. Only when it tries to own (through
architecture) the right to innovate should we worry.

Sustaining a neutral platform for innovation will be the challenge of the next quarter-
century. The danger is the view--common among politicians--that this neutrality takes care
of itself. But we have never seen the owners of a large-scale network voluntarily choose to
keep it open and free; we should not expect such altruism now. The Internet has taught us
the value of such a network. But the government should not be shy to make sure we don't
forget it.

Lessig, who served as an adviser to Judge Jackson in the Microsoft case, is a Harvard law
professor, a fellow at Berlin's Wissenschaftskolleg and author of Code and Other Laws of
Cyberspace

Is Technology Moving Too Fast?
Self-accelerating technologies--computers that make faster computers, for example--may
have a destabilizing effect on society
BY STEWART BRAND

The newest technologies--computers, genetic
engineering and the emerging field of nanotech--differ from the technologies that preceded
them in a fundamental way. The telephone, the automobile, television and jet air travel
accelerated for a while, transforming society along the way, but then settled into a
manageable rate of change. Each was eventually rewarded more for staying the same than
for radically transforming itself--a stable, predictable, reliable condition known as "lock-
in."

Computers, biotechnology and nanotech don't work that way. They are self-accelerating;
that is, the products of their own processes enable them to develop ever more rapidly. New
computer chips are immediately put to use developing the next generation of more
powerful ones; this is the inexorable acceleration expressed as Moore's law. The same
dynamic drives biotech and nanotech--even more so because all these technologies tend to
accelerate one another. Computers are rapidly mapping the DNA in the human genome,
and now DNA is being explored as a medium for computation. When nanobots are finally
perfected, you can be sure that one of the first things they will do is make new and better
nanobots.
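
The arithmetic behind that acceleration is simple compounding. A short Python sketch, using the commonly quoted 18-month doubling period (the exact figure varies by formulation), shows why the curve outruns intuition.

    # Capability that doubles every fixed interval: growth = 2^(t / period).
    def moores_law_factor(years, doubling_months=18.0):
        return 2 ** (years * 12 / doubling_months)

    for years in (5, 10, 20):
        print(f"{years:>2} years -> ~{moores_law_factor(years):,.0f}x")
    # 5 years -> ~10x, 10 years -> ~102x, 20 years -> ~10,321x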

Technologies with this property of perpetual self-accelerated development--sometimes
termed "autocatalysis"--create conditions that are unstable, unpredictable and unreliable.
And since these particular autocatalytic technologies drive whole sectors of society, there is
a risk that civilization itself may become unstable, unpredictable and unreliable.

Perhaps what civilization needs is a NOT-SO-FAST button. Proponents of technological
determinism make a strong case for letting self-accelerating technologies follow their own
life cycle. Rapid development in computer technology, they point out, has spun off robotics
and the Internet--to the great benefit of industry and human communications. Besides, it
isn't so easy for a free society to put the brakes on technology. Even if one country decided
to forgo the next technological revolution, another country would gladly take it up.

There are scenarios, however, in which technology may brake itself. In the aging
population of the developed world, many people are already tired of trying to keep up with
the latest cool new tech. Youth-driven tech acceleration could be interpreted as simple
youthful folly--shortsighted, disruptive, faddish. The market for change could dry up, and
lock-in might again become the norm. Stress and fatigue make powerful decelerators.

So do religious and cultural factors. Radical new technologies are often seen as moral
threats by conservative religious groups or as economic and cultural threats by political
groups. Powerful single-issue voting blocs like the antiabortionists could arise. Or terrorists
like Theodore Kaczynski.

Change that is too rapid can be deeply divisive; if only an elite can keep up, the rest of us
will grow increasingly mystified about how the world works. We can understand natural
biology, subtle as it is, because it holds still. But how will we ever be able to understand
quantum computing or nanotechnology if their subtlety keeps accelerating away from us?

Constant technological revolution makes planning difficult, and a society that stops
planning for the future is likely to become a brittle society. It could experience violent
economic swings. It could trip into wars fought with vicious new weapons. Its pervasive
new technologies could fail in massive or horrible ways. Or persistent, nagging small
failures could sap the whole enterprise.

With so many powerful forces in play, technology could hyperaccelerate to the stars with
stunning rapidity, or it could stall completely. My expectation is that it will do both, with
various technologies proceeding at various rates. The new technologies may be self-
accelerating, but they are not self-determining. They are the result of ever renegotiated
agreement with society. Because they are so potent, their paths may undergo wild
oscillations, but I think the trend will be toward the dynamic middle: much slower than the
optimists expect, much faster than the pessimists think humanity can bear.

Stewart Brand, creator of the Whole Earth Catalog, is co-founder of Global Business
Network and author of The Clock of the Long Now

Will Low Tech Replace High Tech?
BY NANA NAISBITT

We are intoxicated by technology. We are seduced by its power, its speed, its gadgetry and
its promise to solve the problems of human suffering. As those problems get bigger and as
technology offers new solutions, low tech is unlikely to make a comeback. Technology is a
carrot we have trotted after for a long time, and as it speeds up, we gallop after it.

But high tech does not stay high tech forever. Nor does it march in a straight line. The
unanticipated and unintended consequences of new technology can be as significant as its
promise, especially if we proceed without comprehending the scope of technology's impact
on humanity and the planet.

High tech implies progress, while low tech feels outdated. A stone wheel, an arrowhead, a
shuttle loom were once high tech; today they are museum pieces. Phonographs, at one time
considered high tech, are now collectibles, as are 45s and LPs. (See, for example, the
offerings on eBay.) High tech becomes low tech with longevity and familiarity and as old
technologies are replaced.

Even the most celebrated technologies of the past are now regarded as low tech. Take the
Panama Canal, an unparalleled feat of human vision, perseverance and engineering 85
years ago. Standing at the mouth of the canal, in the northern port city of Colon, peering out
at the cargo ships, you get an overwhelming sense that you're witnessing an archaic
process. Heavy ships traversing the surface of the globe, loaded down with computer parts,
petroleum products and Pokémon cards, pause in mid-voyage to pass slowly through the
strategically placed Isthmus of Panama before continuing their journey to another part of
the world. Someday nanotechnology may make manufacturing products from raw materials
in one part of the world and shipping them to another a thing of the past.

In the coming years, high tech will increasingly look low tech as we solve problems by
turning to biology and microscopic particles where we once turned to engineering and
information technologies. Today a rig churning and cranking to sop up an oil spill mars an
ocean landscape; tomorrow genetically engineered micro-organisms will be sent into the
ocean to clean up an oil spill invisibly.

In our daily lives, high-tech experiences are increasingly replacing low-tech ones, and if we
manage to design every square inch of the planet, then every experience will be a simulated
one. Nature museums are cropping up in urban centers, a clear signal that the environment
is in as much need of preservation as are arrowheads, shields and shuttle looms. Simulation
has become the most popular experience in modern American culture. A North American
child may play his snowboarding video game four times a week, but he whizzes down a mountain
only four days a year.

Simulation need not be noisy. On the grounds of the J. Paul Getty Center in Los Angeles, a
beautiful stream stair-steps down the garden. Man-made, it is a symbol of how willing we
are to substitute a world of our own making, in our own image, for the natural
environment--and not without huge consequences.

Human population and our technological advances are increasing exponentially and, if the
predictions of many scientists are borne out, our problems will grow too--widespread famine,
massive shortages of clean water, unstoppable viruses, flooding, global warming (or
cooling), a vanishing natural environment and mass extinction. Meanwhile, technology
promises to solve these problems--feed the world, eliminate industrial waste, clean up the
environment, predict climates and earthquakes, reduce human suffering and extend human
life. In the short run, low tech will not replace high, if only because we need increasingly
sophisticated technologies to solve the very problems technology created.

In the long run, however, low tech may endure. Terry Erwin, a research entomologist and
curator at the Smithsonian Institution, thinks in terms of millions of years. Humans may be
the most successful species on the planet after the insects, Erwin says, but we have
triggered a global species extinction that rivals those caused by ice ages, floods and killer
asteroids--with a difference. "Those extinctions were more or less local, zonal or regional,"
says Erwin. "The present trend, the so-called sixth massive extinction process, is worldwide
and atmospheric, totally pervasive."

Nobody wants to leave the planet a red, barren dust bowl like Mars, incapable of sustaining
any life except the hardiest surviving lineages. But unless we anticipate the impact of
technologies--before they are practical--it will be impossible for us to apply them
thoughtfully, fruitfully and respectfully, or to ensure the survival of our species within the
gift of life on earth.

Nana Naisbitt co-wrote High Tech High Touch with her father John Naisbitt, a social
forecaster and the author of Megatrends
