Engineering

Beyond Terminator: Squishy "Octobot" Heralds New Era of Soft Robotics
Ditching conventional electronics and power sources, the pliable robot operates without rigid parts

 By Helen Shen, Nature magazine on August 25, 2016


Share on Facebook
Share on Twitter
Share on Reddit
Email
Print
Share via
 Google+
 Stumble Upon

The octobot is powered by a chemical reaction and controlled with a soft logic board. A reaction inside the bot
transforms a small amount of liquid fuel (hydrogen peroxide) into a large amount of gas, which flows into the
octobot's arms and inflates them like a balloon. A microfluidic logic circuit, a soft analog of a simple electronic
oscillator, controls when hydrogen peroxide decomposes to gas in the octobot. Credit: LORI SANDERS

A squishy octopus-shaped machine less than 2 centimetres tall is making waves in the field of soft robotics. The
‘octobot’ described today in Nature is the first self-contained robot made exclusively of soft, flexible parts.

Interest in soft robots has taken off in recent years, as engineers look beyond rigid Terminator-type machines to
designs that can squeeze into tight spaces, mould to their surroundings, or handle delicate objects safely. But
engineering soft versions of key parts has challenged researchers. “The brains, the electronics, the batteries—those
components were all hard,” says roboticist Daniela Rus at the Massachusetts Institute of Technology in Cambridge.
“This work is new and really exciting.”

The octobot is made of silicone rubber. Its ‘brain’ is a flexible microfluidic circuit that directs the flow of liquid fuel
through channels using pressure-activated valves and switches. “It’s an analogy of what would be an electrical circuit
normally,” says engineer Robert Wood at Harvard University in Cambridge, Massachusetts, one of the study’s leaders.
“Instead of passing electrons around, we're passing liquids and gases.”

Valves and switches in the robot’s brain are positioned to extend the arms in two alternating groups. The process
starts when researchers inject fuel into two reservoirs, each dedicated to one group of four arms. These reservoirs
expand like balloons and push fuel through the microfluidic circuit. As fuel travels through the circuit, changes in
pressure close off some control points and open others, restricting flow to only one half of the system at a time. As
that side consumes fuel, its internal pressure decreases, allowing fuel to enter the other side—which then pinches off
the first side, and so on.
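
A crude way to picture that alternation is as a two-state relaxation oscillator. The short Python sketch below illustrates only the switching logic just described; the rates, thresholds and the explicit if-statement that swaps sides are invented for this example and are not the published microfluidic design, which achieves the same rhythm with pressure-actuated valves and exhaust vents rather than software.

    # Toy relaxation oscillator loosely mimicking the octobot's two alternating arm groups.
    # All numbers are hypothetical illustrations.
    def simulate(steps=40, refill=6.0, decay=0.2, low=1.0):
        pressure = {"A": refill, "B": 0.0}  # two halves, four arms each; side "A" starts inflated
        active = "A"
        trace = []
        for t in range(steps):
            idle = "B" if active == "A" else "A"
            pressure[active] *= 1.0 - decay  # the active side consumes fuel and vents gas, so its pressure falls
            pressure[idle] *= 1.0 - decay    # the idle side keeps venting what little it holds
            if pressure[active] < low:       # once pressure drops, fuel can reach the other half...
                active = idle
                pressure[active] = refill    # ...which inflates its arms and pinches off the first side
            trace.append((t, active, round(pressure["A"], 2), round(pressure["B"], 2)))
        return trace

    for step in simulate():
        print(step)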

Building a better bot


The robot's brain talks to its limbs through 3D-printed channels embedded in the body. To create the body,
researchers poured silicone polymers into an octopus-shaped mould. Then, using a 3D printer, they injected special
inks that maintained their form and position in the surrounding polymer. The scientists heated the octobot to cure its
structure, which also caused the ink to evaporate—leaving behind a hollow network that infiltrates the octobot's limbs
and links to its brain.

Many soft robots are tethered to compressed air tanks that provide power, but this can restrict their range of motion.
Wood and his colleagues take a different approach, using a chemical reaction to power the octobot.

Their fuel is a 50% hydrogen peroxide solution. When this is exposed to platinum infused into two segments of the
robot's internal network, it rapidly decomposes into a greater volume of water and oxygen. The resulting burst of
pressurized gas in each segment inflates and extends one set of arms, eventually exiting through exhaust vents.
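
The chemistry here is the standard platinum-catalysed decomposition of hydrogen peroxide, written out below for clarity (the article gives it only in words):

    $$2\,\mathrm{H_2O_2\,(aq)} \;\xrightarrow{\ \mathrm{Pt}\ }\; 2\,\mathrm{H_2O\,(l)} + \mathrm{O_2\,(g)}$$

Because one of the products is a gas, a small volume of liquid fuel yields a far larger volume of oxygen at the same pressure, which is what inflates the arms.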

“The combination of the microfluidics with the chemical reaction is really interesting,” says Cecilia Laschi, a roboticist
at the Sant'Anna School of Advanced Studies in Pisa, Italy. “It's a completely new and different way to see soft robots.”

The octobot currently runs for up to 8 minutes on 1 millilitre of fuel. It is not designed to perform any particular task,
and doesn’t mimic the motions of a real octopus. Instead, it demonstrates the technology, says Wood. In the future,
more sophisticated microfluidic circuits might improve endurance, and allow more complex movements when paired
with the appropriate limb designs, the authors suggest.

“Now what needs to be worked out is how to reprogram the robots to perform different actions, to respond to the
environment, and not just perform a pre-programmed sequence,” says materials engineer Robert Shepherd at Cornell
University in Ithaca, New York. Shepherd is especially keen to see whether souped-up microfluidic circuits can be
combined with flexible sensors to make smarter soft robots that are better able to adapt to changing conditions.

This article is reproduced with permission and was first published on August 24, 2016.

Space
China Launches Second Space Lab
Tiangong 2 will develop expertise for a future space station and conduct science experiments

 By Davide Castelvecchi, Nature magazine on September 16, 2016


The Tiangong-2 space laboratory blasts off from Jiuquan Satellite Launch Center on September 15, 2016 in Jiuquan,
Gansu Province of China. Credit: VCG Getty Images

China has launched Tiangong 2, its second orbiting space lab—marking another stepping stone towards the country’s
goal of building a space station by the early 2020s. The module, which launched aboard a Long March rocket from the
Jiuquan Satellite Launch Center in the Gobi desert at 22:04 local time on 15 September, will initially fly uncrewed in
low-Earth orbit, but a planned second launch will carry two astronauts to it in November.

Tiangong 2 (meaning ‘heavenly palace’) carries a number of scientific experiments, including an astrophysics detector
that is the first space-science experiment built jointly by China with European countries.

“By itself, Tiangong 2 is not a monumental achievement, but it is an important step in a larger effort to eventually
build a Chinese space station in the early 2020s,” says Brian Weeden, a space-policy expert at the Secure World
Foundation in Washington DC.

The 8-tonne module replaces the now-defunct Tiangong 1, a mission that marked several milestones in China’s
manned space programme, including the country’s first in-orbit rendezvous with another spacecraft. Mission control
lost contact with that station earlier this year, and its orbit is slowly decaying. An uncontrolled re-entry is expected
some time in 2017.

In November, a Shenzhou spacecraft will carry two astronauts to Tiangong-2 for a 30-day stay. Then in April 2017, a
cargo craft will dock to refuel and bring more supplies. The module also carries a robotic arm, a prototype for a
similar tool that would fly on a space station.

Science projects
Tiangong 2 reportedly carries 14 experiments. These include POLAR, an international mission dedicated to
establishing whether the photons from γ-ray bursts (GRBs)—thought to be a particularly energetic type of stellar
explosion—are polarized. Answering this long-debated issue could shed light on how GRBs produce such high-energy
photons in the first place.

“We aim to measure ten γ-ray bursts per year,” says POLAR project manager Nicolas Produit, an astrophysicist at the
University of Geneva in Switzerland, who spoke to Nature from a hotel near the Jiuquan launch centre.

The €3-million (US$3.4 million) detector was built largely with Swiss funding, and with the collaboration of Swiss,
Chinese and Polish scientists, and support from the European Space Agency (ESA). POLAR is the first space
experiment developed as a full international collaboration between China and other countries, Produit says.

US law bars NASA from doing joint projects with China’s space agencies, but the Chinese Academy of Sciences is
discussing a number of other space collaborations with ESA. The country has also been aggressively ramping up its
space science: just in the last year, it put into orbit DAMPE, its first space probe dedicated to the search for dark
matter, as well as QUESS, the world’s first quantum-communications satellite.

This is making the country an exciting place for international researchers to test ideas for space science, compared to
projects run by ESA and NASA, which Produit says are slower-moving. “In China, things go fast. They have the
money; they have the will,” Produit says. “China is where things happen now.”

Still, the main goal for Tiangong 2 and a future space station is not science, Weeden points out. “China wants to build
and operate a space station for the same reasons the United States and Soviet Union did in decades past: prestige.”

This article is reproduced with permission and was first published on September 15, 2016.

Space

Fear of Spreading Earth Germs Could Divert Mars Rover
International rules on microbe contamination may stymie Curiosity's bid to study suspected water in hillside streaks

 By Alexandra Witze, Nature magazine on September 7, 2016



These dark, narrow, 100 meter-long streaks called recurring slope lineae flowing downhill on Mars are inferred to
have been formed by contemporary flowing water. Credit: NASA, JPL, University of Arizona

Four years into its travels across Mars, NASA’s Curiosity rover faces an unexpected challenge: wending its way safely
among dozens of dark streaks that could indicate water seeping from the red planet’s hillsides.

Although scientists might love to investigate the streaks at close range, strict international rules prohibit Curiosity
from touching any part of Mars that could host liquid water, to prevent contamination. But as the rover begins
climbing the mountain Aeolis Mons next month, it will probably pass within a few kilometres of a dark streak that
grew and shifted between February and July 2012 in ways suggestive of flowing water.
NASA officials are trying to determine whether Earth microbes aboard Curiosity could contaminate the Martian seeps
from a distance. If the risk is too high, NASA could shift the rover’s course—but that would present a daunting
geographical challenge. There is only one obvious path to the ancient geological formations that Curiosity scientists
have been yearning to sample for years.

“We’re very excited to get up to these layers and find the 3-billion-year-old water,” says Ashwin Vasavada, Curiosity’s
project scientist at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. “Not the ten-day-old water.”

The streaks—dubbed recurring slope lineae (RSLs) because they appear, fade away and reappear seasonally on steep
slopes—were first reported on Mars five years ago in a handful of places. The total count is now up to 452 possible
RSLs. More than half of those are in the enormous equatorial canyon of Valles Marineris, but they also appear at
other latitudes and longitudes. “We’re just finding them all over the place,” says David Stillman, a planetary scientist
at the Southwest Research Institute in Boulder, Colorado, who leads the cataloguing.

Dark marks
RSLs typically measure a few metres across and hundreds of metres long. One leading idea is that they form when the
chilly Martian surface warms just enough to thaw an ice dam in the soil, allowing water to begin seeping downhill.
When temperatures drop, the water freezes and the hillside lightens again until next season. But the picture is
complicated by factors such as potential salt in the water; brines may seep at lower temperatures than fresher water.

Other possible explanations for the streaks include water condensing from the atmosphere, or the flow of bone-dry
debris. “They have a lot of behaviours that resemble liquid water,” says Colin Dundas, a planetary geologist at the US
Geological Survey in Flagstaff, Arizona. “But Mars is a strange place, and it’s worth considering the possibility there
are dry processes that could surprise us.”

A study published last month used orbital infrared data to suggest that typical RSLs contain no more than 3% water.
And other streaky-slope Martian features, known as gullies, were initially thought to be caused by liquid water but are
now thought to be formed mostly by carbon dioxide frost.

Dundas and his colleagues have counted 58 possible RSLs near Curiosity’s landing site in Gale Crater. Many of them
appeared after a planet-wide dust storm in 2007—possibly because the dust acted as a greenhouse and temporarily
warmed the surface, Stillman says.

Since January, mission scientists have used the ChemCam instrument aboard the rover—which includes a small
telescope—to photograph nearby streaks whenever possible.

So far, the rover has taken pictures of 8 of the 58 locations and seen no changes. The features are lines on slopes, but
they have not yet recurred. “We’ve got two of the three letters in the acronym,” says Ryan Anderson, a geologist at the
US Geological Survey who leads the imaging campaign.

Curiosity is currently about 5 kilometres away from the potential RSLs; on its current projected path, it would never
get any closer than about 2 kilometres, Vasavada says. The rover could not physically drive up and touch the streaks if
it wanted to, because it cannot navigate the slopes of 25 degrees or greater on which they appear.

But the rover’s sheer unexpected proximity to RSLs has NASA re-evaluating its planetary-protection protocols.
Curiosity was only partly sterilized before going to Mars, and experts at JPL and NASA headquarters in Washington
DC are calculating how long the remaining microbes could survive in Mars’s harsh atmosphere—as well as what
weather conditions could transport them several kilometres away and possibly contaminate a water seep. “That hasn’t
been well quantified for any mission,” says Vasavada.

The work is an early test for the NASA Mars rover slated to launch in 2020, which will look for life and collect and
stash samples for possible return to Earth. RSLs exist at several of the rover’s eight possible landing sites.

For now, Curiosity is finishing exploring the Murray Buttes. These spectacular rock towers formed from sediment at
the bottom of ancient lakes—the sort of potentially life-supporting environment the rover was sent to find. Curiosity’s
second extended mission begins on 1 October.
Barring disaster, the rover’s lifespan will be set by its nuclear-power source, which will continue to dwindle in coming
years through radioactive decay. Curiosity still has kilometres to scale on Aeolis Mons as it moves towards its final
destination, a sulfate-rich group of rocks.

This article is reproduced with permission and was first published on September 7, 2016.

Energy

Flagship U.S. Fusion Reactor Breaks Down
A design flaw introduced during a recent upgrade could keep the reactor offline for at least a year

 By Jeff Tollefson, Nature magazine on September 30, 2016

The core of the Sun is a natural thermonuclear reactor. Physicists hope to eventually replicate similar conditions in
experimental reactors on Earth to produce abundant and clean energy. Credit: NASA/SDO

A tough year just got tougher for US fusion researchers. The country’s flagship experimental fusion reactor has
broken down, less than a year after completing a four-year, US$94-million upgrade. Now officials at the Princeton
Plasma Physics Laboratory (PPPL) in New Jersey are investigating whether problems encountered during fabrication
of a key component caused the reactor to fail.

Lab officials say that the machine could be offline for up to a year. Making matters worse, one of the other two fusion
reactors funded by the US Department of Energy (DOE) is scheduled to shut down on 30 September. That leaves US
scientists with just one major facility to conduct fusion experiments, at the defence contractor General Atomics in San
Diego, California.

“It’s definitely a challenge for everybody,” says Earl Marmar, who oversees the Alcator C-Mod reactor at the
Massachusetts Institute of Technology in Cambridge that is shutting down after more than two decades. “We won’t be
completely without access to experimental facilities, but it’s definitely not as good as it could have been for the coming
year.”

The upgraded Princeton reactor, called the National Spherical Torus Experiment Upgrade (NSTX-U), is twice as
powerful as its predecessor. Like other 'tokamak' reactors, including the international ITER project under
construction in France, the spherical machine uses magnetic fields to confine a hydrogen plasma. That plasma is then
heated until the atoms fuse and release energy. In theory, fusion could power the world indefinitely—and cleanly.

The Princeton machine’s breakdown came to light on 27 September, after PPPL director Stewart Prager resigned.
Laboratory officials say that the upgraded reactor started operating at low power in December 2015 and produced 10
weeks of high-quality data. Scientists shut it down in July after discovering that one of the coils that creates the
electromagnetic trap was malfunctioning.
Prager says he was thinking about stepping down as director before the reactor coil broke. He elected to depart now,
after eight years, so that new leadership can carry the investigation forward and repair the machine. “It’s sort of a
normal passing of the baton,” he says.

Hunting for clues


PPPL officials initially declined to speculate about the cause of the coil malfunction, saying that an investigation is
under way. But the lab later confirmed to Nature that questions about the strength of the copper in the faulty coil
arose, and were investigated, when the part was being fabricated.

The fact that these concerns arose during the tokamak upgrade suggests that a more careful analysis could have
prevented the reactor failure, says Stephen Dean, president of Fusion Power Associates, an advocacy group in
Gaithersburg, Maryland. “Mistakes like this do sometimes get made, but with all of the experience the fusion
programme has, it should not have happened this way.”

NSTX-U programme director Jonathan Menard says that the finished coil met the laboratory’s specifications. He adds
that it is not clear whether the part’s design or the manufacturing process caused problems. Another coil in the
reactor, of a similar design and fabricated from the same grade of copper, has functioned well. The laboratory is
planning to replace it nonetheless.

A former researcher at the Princeton laboratory, who declined to be named because he is not authorized to speak
about the issue, says that the copper in the faulty coil might have been stronger than it needed to be. That made it
harder to bend the metal into the desired shape. Even tiny faults in fabrication can cause problems when energy is
coursing through the reactor, heating up the coils.

Menard says that after the coil malfunctioned, X-ray analyses found structural anomalies that may have resulted from
internal melting when the reactor was operating. PPPL scientists plan to cut the coil open for further investigations.
“We are going to have to wait for those results to make a more definitive statement,” he says.

Uncertain future
Officials aren’t sure how much it will cost to repair the reactor, but say that it could take up to a year to bring it back
online. Because the fusion reactor was already scheduled to halt operations in late 2016 for six months of
maintenance, the net loss of research time may wind up being about six months.

The breakdown’s impacts could extend well beyond the Princeton lab. Marmar had planned to shift people to the
Princeton facility once MIT’s Alcator reactor shut down. Now, MIT researchers will help Princeton restart its reactor—
and try to conduct their previously planned research by collaborating with teams at General Atomics' reactor and
facilities in other countries.

The DOE decided several years ago to shutter the MIT reactor, but to maintain facilities in Princeton and San Diego.
The US Congress reversed that decision once, in 2014, but the US government's 2016 budget assumes that the MIT
reactor will shut down.

The DOE says that the US fusion-research programme remains on a solid footing, with extensive international
partnerships, and will be back at full strength once the Princeton machine returns to service. Others are concerned
about how researchers will cope with only one major US reactor in operation.

Dean thinks the agency ought to keep Alcator C-Mod running another year, until the Princeton reactor is fixed. “It’s
not a good situation for our scientists to only have one machine running,” he says.

Marmar is ready to restart the MIT reactor if the DOE changes its mind. “The C-Mod facility is planned to be put into
a safe shutdown state,” he says, “but if desired, could be brought back into service on short notice to support the US
and international fusion community.”

This article is reproduced with permission and was first published on September 30, 2016.

Natural Disasters

Giant, Deadly Ice Slide Baffles Researchers
Climate change could be to blame for Tibetan tragedy

 By Jane Qiu, Nature magazine on August 23, 2016


A glacier in Tibet. Credit: GALEN ROWELL Getty Images

One of the world's largest documented ice avalanches is flummoxing researchers. But they suspect that glacier
fluctuations caused by a changing climate may be to blame.

About 100 million cubic metres of ice and rocks gushed down a narrow valley in Rutog county in the west of the Tibet
Autonomous Region on July 17, killing nine herders and hundreds of sheep and yaks.

The debris covered nearly 10 square kilometres at a thickness of up to 30 metres, says Zong Jibiao, a glaciologist at
the Chinese Academy of Sciences’ Institute of Tibetan Plateau Research (ITPR) in Beijing, who completed a field
investigation of the site last week.

The only other known incident comparable in scale is the 2002 ice avalanche from the Kolka Glacier in the Caucasus
Mountains in Russia, says Andreas Kääb, a glaciologist at the University of Oslo in Norway. That catastrophic event
killed 140 people.

Preliminary analyses show that the Rutog avalanche was unusual because it started from a flat point at 5,200–6,200
metres above sea level rather than in steep terrain. The ice crashed down nearly one kilometre along the narrow gully
and ran into the Aru Co lake, 6 kilometres away.

“The site of collapse is baffling … the Rutog avalanche initiated at quite a flat spot. It doesn’t make sense,” says Tian
Lide, a glaciologist also at the ITPR, who runs a research station in Rutog.

Zong adds: “It went with such a force that the gully was widened out by the process."

Glacier surge
This force is likely to have been caused by lubrication of the ice from rain or glacial melt, and researchers think that
increasing precipitation in recent years may be partly to blame.

Temperatures in Tibet have soared by 0.4 °C per decade since 1960—twice the global average. Warming can generate
meltwater that carves out a glacier from within, making it vulnerable to collapse, says Tian.
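
As a rough back-of-the-envelope reading of that figure (the cumulative total is inferred here and is not quoted by the researchers): $0.4\ ^{\circ}\mathrm{C\ per\ decade} \times 5.6\ \mathrm{decades} \approx 2.2\ ^{\circ}\mathrm{C}$ of warming on the plateau between 1960 and 2016, while "twice the global average" implies a global rate of roughly 0.2 °C per decade.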

Kääb thinks that both the Kolka and Rutog avalanches could have been triggered by a rare glacier surge, in which a
glacier periodically advances 10–100 times faster than its normal speed. The phenomenon affects about 1% of glaciers
globally.

Western Tibet has many surge-type glaciers, and some researchers suspect that climate change at high elevations
could affect the frequency of surges.

Regardless of what triggered the Rutog avalanche, “climate change is causing more glacial hazards through
mechanisms we don’t fully understand”, says Tian. “There is an urgent need for more monitoring and research efforts,
especially in populated areas in high mountains.”

This article is reproduced with permission and was first published on August 23, 2016.

Biology

How Cats Conquered the World (and a Few Viking Ships)
A large-scale study of ancient feline DNA charts the domestication and global spread of house cats

 By Ewen Callaway, Nature magazine on September 20, 2016


A mummified cat. Credit: JUSTIN ENNIS Flickr

Thousands of years before cats came to dominate Internet culture, they swept through ancient Eurasia and Africa
carried by early farmers, ancient mariners and even Vikings, finds the first large-scale look at ancient-cat DNA.

The study, presented at a conference on September 15, sequenced DNA from more than 200 cats that lived between
about 15,000 years ago and the eighteenth century A.D.

Researchers know little about cat domestication, and there is active debate over whether the house cat (Felis
silvestris) is truly a domestic animal—that is, whether its behaviour and anatomy are clearly distinct from those of wild
relatives. “We don’t know the history of ancient cats. We do not know their origin, we don't know how their dispersal
occurred,” says Eva-Maria Geigl, an evolutionary geneticist at the Institut Jacques Monod in Paris. She presented the
study at the 7th International Symposium on Biomolecular Archaeology in Oxford, UK, along with colleagues Claudio
Ottoni and Thierry Grange.

A 9,500-year-old human burial from Cyprus also contained the remains of a cat. This suggests that the affiliation
between people and felines dates at least as far back as the dawn of agriculture, which occurred in the nearby Fertile
Crescent beginning around 12,000 years ago. Ancient Egyptians may have tamed wild cats some 6,000 years ago, and
under later Egyptian dynasties, cats were mummified by the million. One of the few previous studies of ancient-cat
genetics involved mitochondrial DNA (which, contrary to most nuclear DNA, is inherited through the maternal line
only) for just three mummified Egyptian cats.

Feline travels
Geigl’s team built on those insights, but expanded the approach to a much larger scale. The researchers analysed
mitochondrial DNA from the remains of 209 cats from more than 30 archaeological sites across Europe, the Middle
East and Africa. The samples dated from the Mesolithic—the period just before the advent of agriculture, when
humans lived as hunter–gatherers—up to the eighteenth century.
Cat populations seem to have grown in two waves, the authors found. Middle Eastern wild cats with a particular
mitochondrial lineage expanded with early farming communities to the eastern Mediterranean. Geigl suggests that
grain stockpiles associated with these early farming communities attracted rodents, which in turn drew wild cats.
After seeing the benefit of having cats around, humans might have begun to tame these cats.

Thousands of years later, cats descended from those in Egypt spread rapidly around Eurasia and Africa. A
mitochondrial lineage common in Egyptian cat mummies from the end of the fourth century B.C. to the fourth
century A.D. was also carried by cats in Bulgaria, Turkey and sub-Saharan Africa from around the same time. Seafaring
people probably kept cats to keep rodents in check, says Geigl, whose team also found cat remains with this
maternal DNA lineage at a Viking site dating to between the eighth and eleventh century A.D. in northern Germany.

“There are so many interesting observations” in the study, says Pontus Skoglund, a population geneticist at Harvard
Medical School in Boston, Massachusetts. “I didn’t even know there were Viking cats.” He was also impressed by the
fact that Geigl’s team was able to discern real population shifts from mitochondrial DNA, which traces only a single
maternal lineage. Nonetheless, Skoglund thinks that nuclear DNA—which provides information about more of an
individual's ancestors—could address lingering questions about cat domestication and spread, such as their
relationship to wild cats, with which they still interbreed.

Geigl’s team also analysed nuclear DNA sequences known to give tabby cats blotched coats, and found that the
mutation responsible did not appear until the Medieval period. She hopes to sequence more nuclear DNA from
ancient cats. But funding for modern cat genomics is scarce, which is one reason why it lags far behind such research
on dogs. By contrast, a team charting dog domestication announced at the Oxford meeting that it is preparing to
sequence nuclear DNA from more than 1,000 ancient dogs and wolves.

Geigl disputed this reporter’s insinuation that dogs seem to be more popular among researchers than cats. “We can do
it, too,” she says. “We just need money.”

This article is reproduced with permission and was first published on September 20, 2016.

Biology

Human Skeleton Found on Famed Antikythera Shipwreck
Two-thousand-year-old bones could yield first DNA from an ancient shipwreck victim

 By Jo Marchant, Nature magazine on September 19, 2016



Divers examine human bones excavated from the Antikythera shipwreck. Credit: Brett Seymour, EUA/WHOI/ARGO

Hannes Schroeder snaps on two pairs of blue latex gloves, then wipes his hands with a solution of bleach. In front of
him is a large Tupperware box full of plastic bags that each contain sea water and a piece of red-stained bone. He lifts
one out and inspects its contents as several archaeologists hover behind, waiting for his verdict. They’re hoping he can
pull off a feat never attempted before—DNA analysis on someone who has been under the sea for 2,000 years.

Through the window, sunlight sparkles on cobalt water. The researchers are on the tiny Greek island of Antikythera, a
10-minute boat ride from the wreckage of a 2,000-year-old merchant ship. Discovered by sponge divers in 1900, the
wreck was the first ever investigated by archaeologists. Its most famous bounty to date has been a surprisingly
sophisticated clockwork device that modelled the motions of the Sun, Moon and planets in the sky—
dubbed the ‘Antikythera mechanism’.
But on August 31 this year, investigators made another groundbreaking discovery: a human skeleton, buried under
around half a metre of pottery sherds and sand. “We’re thrilled,” says Brendan Foley, an underwater archaeologist at
Woods Hole Oceanographic Institution in Massachusetts, and co-director of the excavations team. “We don’t know of
anything else like it.”
Diving Operations Manager Phillip Short inspects amphoras. Credit: Brett Seymour, EUA/WHOI/ARGO

Within days of the find, Foley invited Schroeder, an expert in ancient-DNA analysis from the Natural History
Museum of Denmark in Copenhagen, to assess whether genetic material might be extracted from the bones. On his
way to Antikythera, Schroeder was doubtful. But as he removes the bones from their bags he is pleasantly surprised.
The material is a little chalky, but overall looks well preserved. “It doesn’t look like bone that’s 2,000 years old,” he
says. Then, sifting through several large pieces of skull, he finds both petrous bones—dense nuggets behind the ear
that preserve DNA better than other parts of the skeleton or the teeth. “It’s amazing you guys found that,” Schroeder
says. “If there’s any DNA, then from what we know, it’ll be there.”

Schroeder agrees to go ahead with DNA extraction when permission is granted by the Greek authorities. It would take
about a week to find out whether the sample contains any DNA, he says: then perhaps a couple of months to sequence
it and analyse the results.

For Schroeder, the discovery gives him the chance to push the boundaries of ancient-DNA studies. So far, most have
been conducted on samples from cold climates such as northern Europe. “I’ve been trying to push the application of
ancient DNA into environments where people don’t usually look for DNA,” he says. (He was part of a team that last
year published the first Mediterranean ancient genome, of a Neolithic individual from Spain.)

Foley and the archaeologists, meanwhile, are elated by the chance to learn more about the people on board the first-
century B.C. ship, which carried luxury items from the eastern Mediterranean, probably intended for wealthy buyers in
Rome.

Rare discovery
The skeleton discovery is a rare find, agrees Mark Dunkley, an underwater archaeologist from the London-based
heritage organization Historic England. Unless covered by sediment or otherwise protected, the bodies of shipwreck
victims are usually swept away and decay, or are eaten by fish. Complete skeletons have been recovered from younger
ships, such as the sixteenth-century English warship the Mary Rose and the seventeenth-century Vasa in Sweden.
Both sank in mud, close to port. But “the farther you go back, the rarer it is”, says Dunkley.

Only a handful of examples of human remains have been found on ancient wrecks, says archaeologist Dimitris
Kourkoumelis of the Greek Ephorate of Underwater Antiquities, who collaborates with Foley. They include a skull
found inside a Roman soldier’s helmet near Sardinia, and a skeleton reportedly discovered inside a sunken
sarcophagus near the Greek island of Syrna (although the bones disappeared before the find could be confirmed).

In fact, the best-documented example is the Antikythera wreck itself: scattered bones were found by the French
marine explorer Jacques Cousteau, who excavated here in 1976. Argyro Nafplioti, an osteoarchaeologist at the
University of Cambridge, UK, concluded that the remains came from at least four individuals, including a young man,
a woman and a teenager of unknown sex.

At the wreck site, only broken pots now remain on the sea floor—the sponge divers recovered all artefacts visible on
the seabed in 1900–01. But Foley thinks that much of the ship’s cargo may be buried under the sediment. His team,
including expert technical divers and members of the Greek archaeological service, relocated and mapped the 50-
metre-deep site before beginning their own excavations in 2014. They have found items such as wine jars, glassware,
two bronze spears from statues, gold jewellery and table jugs used by the crew. The divers have also recovered ship
components including enormous anchors and a teardrop-shaped lead weight, found in June, that may be the first
known example of what ancient texts describe as a ‘war dolphin’—a defensive weapon carried by merchant vessels to
smash hostile ships.

The skeleton uncovered in August consists of a partial skull with three teeth, two arm bones, several rib pieces and
two femurs, all apparently from the same person. Foley’s team plans further excavations to see whether more bones
are still under the sand.

That so many individuals have been found at Antikythera—when most wrecks yield none—may be partly because few
other wrecks have been as exhaustively investigated. But the researchers think it also reveals something about how
the ship sank. This was a huge vessel for its time, perhaps more than 40 metres long, says Foley, with multiple decks
and many people on board. The wreck is close to shore, at the foot of the island’s steep cliffs. He concludes that a
storm smashed the ship against the rocks so that it broke up and sank before people had a chance to react. “We think
it was such a violent wrecking event, people got trapped below decks.”

Mediterranean mystery
The individuals found at Antikythera could be from the crew, which would probably have consisted of 15–20 people
on a ship this size. Greek and Roman merchant ships also commonly carried passengers, and sometimes slaves. One
reason people get trapped inside shipwrecks is if they are chained, points out Dunkley. “The crew would be able to get
off relatively fast. Those shackled would have no opportunity to escape.” Intriguingly, the recently discovered bones
were surrounded by corroded iron objects, so far unidentified; the iron oxide has stained the bones amber red.
Excavations in 2016 at the Antikythera Shipwreck produced a nearly intact skull, including the cranial parietal bones.
Credit: Brett Seymour, EUA/WHOI/ARGO

Schroeder says that because ancient underwater remains are so rare, DNA analysis on such samples using state-of-
the-art techniques has barely been tried. (Analyses were conducted on skeletons from the Mary Rose and the Vasa,
but specialists no longer see those methods—based on amplifying DNA using a method called PCR—as reliable,
because it is too difficult to distinguish ancient DNA from modern contamination.) Exceptions include analyses on
8,000-year-old wheat from a submerged site off the English coast (although these results have been questioned
because the DNA did not show the expected age-related damage), and mitochondrial DNA from a 12,000-year-old
skeleton found in a freshwater sinkhole in Mexico.

Finding undisturbed remains such as those at Antikythera is crucial because it offers the opportunity to extract any
DNA in the best possible condition. Previously salvaged bones are not ideal for analysis because they have often been
washed, treated with conservation materials or kept in warm conditions (all of which can destroy fragile DNA), or
handled in a way that contaminates them.

Schroeder guesses from the skeleton’s fairly robust femur and unworn teeth that the individual was a young man. As
well as confirming the person’s gender, DNA from the Antikythera bones could provide information about
characteristics from hair and eye colour to ancestry and geographic origin. In the past few years, modern genome
sequences have revealed that genetic variation in populations mirrors geography, says Schroeder. He and others are
now starting to look at how ancient individuals fit on that map, to reconstruct past population movements. Would the
shipwreck victim look more Greek-Italian or Near Eastern, he wonders?

Over dinner, the researchers decide to nickname the bones’ owner Pamphilos, after a name found neatly scratched on
a wine cup from the wreck. “Your mind starts spinning,” says Schroeder. “Who were those people who crossed the
Mediterranean 2,000 years ago? Maybe one of them was the astronomer who owned the mechanism.”

This article is reproduced with permission and was first published on September 19, 2016.

Space

Icy Heart Could Be Key to Pluto’s Strange Geology
NASA’s New Horizons mission plumbs complex interplay between the dwarf planet's surface and its sky

 By Alexandra Witze, Nature magazine on October 24, 2016



Pluto’s atmosphere contains carbon and nitrogen compounds. Credit: NASA, JHUAPL, SwRI

Pluto’s icy heart beats with a planetary rhythm.

When NASA’s New Horizons spacecraft whizzed by the dwarf planet in July 2015, it famously spotted a heart-shaped
feature just north of the equator. Now, researchers are recognizing how that enormous ice cap drives much of Pluto’s
activity, from its frosty surface to its hazy atmosphere.

Planetary scientists revealed their latest insights this week at a joint meeting of the American Astronomical Society’s
Division for Planetary Sciences and the European Planetary Science Congress in Pasadena, California. Many of those
discoveries revolve around Sputnik Planitia, the icy expanse that makes up the left lobe of Pluto’s ‘heart’. “All roads
lead to Sputnik,” says William McKinnon, a planetary scientist at Washington University in St. Louis, Missouri.
Researchers already knew that Sputnik Planitia (formerly dubbed Sputnik Planum) is made mostly of nitrogen
ice, churning and flowing in massive glaciers. But its sheer size—1,000 kilometres across and at least several
kilometres deep—means that it exerts extraordinary influence over the dwarf planet’s behaviour.

The heart may have even knocked Pluto on its side. At the meeting, James Tuttle Keane of the University of Arizona in
Tucson showed how the feature’s formation could have altered Pluto’s tilt. Sputnik Planitia may be a crater punched
by a giant meteorite impact, which later filled with ice. The sheer mass of all that ice caused the dwarf planet to rotate
relative to its spin axis, Keane says, so that Sputnik Planitia ended up permanently facing away from Pluto’s biggest
moon, Charon. “Pluto followed its heart,” he says. (Other scientists, such as Douglas Hamilton of the University of
Maryland in College Park, have suggested that Sputnik Planitia might have accumulated ice without an impact, and
that the hole instead comes from the sheer weight of the ice depressing the ground beneath it.)

The enormous reservoir of Sputnik Planitia also feeds Pluto’s complicated atmosphere. Volatile chemicals such as
nitrogen, methane and carbon monoxide start out as ices on the surface, often within Sputnik, then sublimate into the
air when temperatures warm. As the atmosphere cools, the volatile gases condense and fall back to the surface,
coating it with a fresh layer of frost. Pluto is currently moving away from the Sun and so temperatures are growing
colder.

A clearer view
New Horizons, which analysed light passing through Pluto’s thin atmosphere, showed just how complicated the
interplay between the surface and the atmosphere is, says Leslie Young, a planetary scientist at the Southwest
Research Institute in Boulder, Colorado. As dawn breaks over Sputnik Planitia, sunlight warms the icy plain and
allows a pulse of nitrogen to waft upwards. “I think of this piston of cold air being pushed into the bottom of the
atmosphere every day and then dropping back down,” says Young.

New data also reveal how the seasonal frosts behave on the surface. Silvia Protopapa, a planetary scientist at the
University of Maryland, showed maps of how methane and nitrogen are distributed across Pluto’s surface, as seen by
an infrared-sensing instrument on New Horizons. The ices typically co-exist in a mix in which one substance or the
other dominates.

In Sputnik Planitia, temperatures and sunlight combine to create an environment where nitrogen rules. Farther
north, above about 55 degrees latitude, constant summer sunlight seems to have stripped most of the nitrogen away,
leaving behind plains of methane ice at Pluto's north pole. “We’ve had continuous illumination northward for the past
20 years,” says Protopapa. The work is to appear in the journal Icarus.

Sputnik Planitia’s influence reaches the highest levels of Pluto’s atmosphere. Its volatile gases drift upwards, and
photochemical reactions create new carbon and nitrogen compounds. These form layered hazes that extend more
than 200 kilometres above the surface. That’s higher than researchers would have predicted, because temperatures at
this level are too warm for particles to condense out directly. Instead, dust raining in from interplanetary space might
serve as nuclei around which haze particles can form, says Andrew Cheng, a planetary scientist at the Johns Hopkins
University Applied Physics Laboratory in Laurel, Maryland.

Haze particles then begin to clump together, growing bigger and more rounded the lower they drift in the
atmosphere, Cheng says. Eventually they settle out on to Pluto’s surface, and coat it afresh until warming begins and
they are once again lofted.

New Horizons has slowly been trickling data back to Earth since its record-setting fly-by, and final observations from
its encounter are due to be returned on the night of 22–23 October. They will show pictures of the dark vastness of
space surrounding Pluto—photographed just in case a tiny unknown moon or other cosmic discovery still happens to
be lurking in the New Horizons data.

This article is reproduced with permission and was first published on October 21, 2016.

Space
Incoming! Space Rocks Strike the
Moon More Than Expected
Craters from hundreds of recent meteorite impacts are bad omens for future lunar bases

 By Alexandra Witze, Nature magazine on October 13, 2016


A new lunar crater, formed about three years ago. Credit: NASA, GSFC, Arizona State University

Meteorites have punched at least 222 impact craters into the Moon's surface in the past 7 years. That’s 33% more than
researchers expected, and suggests that future lunar astronauts may need to hunker down against incoming space
rocks.
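
A quick back-of-the-envelope check on those numbers (the expected count is inferred here from the 33% figure and is not quoted directly in the article): $222 / 1.33 \approx 167$ craters would have been predicted over the same seven years, roughly 24 per year, against the $222 / 7 \approx 32$ per year actually observed.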

“It's just something that's happening all the time,” says Emerson Speyerer, an engineer at Arizona State University in
Tempe and author of an October 12 paper in Nature.

Planetary geologists will also need to rethink their understanding of the age of the lunar surface, which depends on
counting craters and estimating how long the terrain has been pummelled by impacts.

Although most of the craters dotting the Moon's surface formed millions of years ago, space rocks and debris continue
to create fresh pockmarks. In 2011, a team led by Ingrid Daubar of NASA’s Jet Propulsion Laboratory in Pasadena,
California, compared some of the first pictures taken by NASA’s Lunar Reconnaissance Orbiter (LRO), which
launched in 2009, with decades-old images taken by the Apollo astronauts. The scientists spotted five fresh impact
craters in the LRO images. Then, on two separate occasions in 2013, other astronomers using telescopes on Earth
spotted bright flashes on the Moon; LRO later flew over those locations and photographed the freshly formed craters.

Forever young
LRO has taken about a million high-resolution images of the lunar surface, but only a fraction cover the same portion
of terrain under the same lighting conditions at two different times. Speyerer’s team used a computer program to
automatically analyse 14,092 of these paired images, looking for changes between the two. The 222 newfound craters
are distributed randomly across the lunar surface, and range between 2 and 43 metres in diameter.
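
To make "looking for changes between the two" concrete, here is a minimal, generic sketch of temporal-pair change detection. It is not the team's actual pipeline, which the article does not describe; the function name, threshold and synthetic data are made up, and real lunar images would need careful co-registration and photometric calibration first.

    # Generic illustration of flagging pixels that changed between two co-registered images.
    import numpy as np

    def changed_pixels(before, after, threshold=0.1):
        """Boolean mask of pixels whose brightness changed by more than `threshold`
        (as a fraction of the 'before' value)."""
        before = before.astype(float)
        after = after.astype(float)
        ratio = after / np.clip(before, 1e-6, None)  # avoid division by zero
        return np.abs(ratio - 1.0) > threshold

    # Synthetic example: a flat surface where a fresh impact darkens a small patch.
    rng = np.random.default_rng(0)
    before = rng.normal(1.0, 0.01, size=(100, 100))
    after = before.copy()
    after[40:45, 60:65] *= 0.7
    print("changed pixels:", int(changed_pixels(before, after).sum()))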

There are more fresh craters measuring at least 10 metres across than standard cratering calculations would suggest.
This could mean that some young lunar surfaces may be even younger than thought, says Daubar. She calls the work
“a significant advance in the field of crater chronology”, noting that it can even be used to compare cratering rates on
the Moon and Mars.

Meteorites can churn up the lunar surface in several ways. Along with the fresh craters, Speyerer's team found more
than 47,000 ‘splotches’, formed when material gets kicked up by the main impact and rains down—sometimes tens of
kilometres away.

And that means a bigger risk for any future lunar habitats, says Stephanie Werner, a planetary geologist at the
University of Oslo. The chances of a lunar base being nailed by a direct meteorite hit are relatively small, but the
splattered material could pose a hazard. Werner is part of a team that has proposed a combined orbiter–lander
mission to the European Space Agency, which would study impact flashes at the Moon and quantify the risk.

This article is reproduced with permission and was first published on October 12, 2016.

Space

Jupiter Mission’s Computer Glitch Delays Data-Gathering
Juno probe goes into safe mode hours before second flyby of the giant planet
 By Alexandra Witze, Nature magazine on October 20, 2016


A composite image taken with JunoCam shows Jupiter on the spacecraft's August 27 fly-by, or perijove. Credit:
NASA/JPL-Caltech/SwRI/MSSS

NASA’s Juno spacecraft put itself into a temporary shutdown at 10:47 p.m. US Pacific Daylight time on October 18 as
it approached a fly-by of Jupiter. It was the mission's second glitch in a week, following a problem with its propellant
system.
Juno remains safe and is looping around Jupiter on a 53.5-day elliptical orbit. But the spacecraft did not gather
scientific data as it whizzed 5,000 kilometres above the giant planet’s cloudtops on this, its second close pass since
arriving at Jupiter on July 4.

“We’ll just hang out for a couple of days while we figure out what went wrong,” says Scott Bolton, a planetary scientist
at the Southwest Research Institute in San Antonio, Texas, and the mission’s principal investigator.

Delayed burn
Juno slipped into “safe mode”, possibly in response to an onboard computer reboot, a little over 13 hours from its
closest approach to Jupiter. The mission has been in safe mode several times since its 2011 launch; operations are
typically restored within hours to days. Engineers are working through a series of steps to restore communications. If
and when they start talking to Juno again, they will turn towards resolving a separate, apparently unrelated
propellant issue.

On October 14, NASA announced that Juno would delay burning its engines as it had planned to during the October
19 close fly-by, or perijove. The engine burn would have nudged the craft from its 53.5-day orbit to a 14-day orbit. But
two helium valves needed for the procedure did not respond as expected while being pressurized in the lead-up to the
burn. Mission managers decided to put it off, hastily scheduled a series of science observations for the upcoming
perijove—and then, four days later, saw their spacecraft enter safe mode.

Juno can stay in its 53.5-day orbit indefinitely and still get nearly all of the science it had been planning to gather at
Jupiter, Bolton says, including unraveling the mysteries of the planet's origin and whether or not it has a core. The
science discoveries come mostly at each close fly-by, so stretching out the time between each perijove means that
researchers gather data more slowly.

An early look
Despite Juno's current issues, there was a spot of good news. Bolton presented early results from Juno’s first flyby of
Jupiter—on August 27—at a joint meeting of the American Astronomical Society’s Division for Planetary Sciences and
the European Planetary Science Congress. The data included one of the best looks yet deep into Jupiter’s swirling
clouds.

A microwave instrument on Juno has found that Jupiter’s wide atmospheric bands extend as much as 400 kilometres
deep into the gas giant—though the bands display new twists and turns the deeper they go. “Deep down Jupiter is
similar but also very different from what we see on the surface,” Bolton says.

Juno’s camera has also captured new visual details on the storms, like the famous Great Red Spot, that rage across
Jupiter. Unlike other spacecraft that have visited the giant planet, Juno is whizzing up and over the planet’s poles,
giving researchers the first-ever view of the northern and southern extremes. The spacecraft's first fly-by found
that Jupiter’s north pole lacks the mysterious hexagon of swirling clouds that dominates Saturn's north pole.

Another new image shows a towering cyclone, its clouds illuminated from the side as the sun rises on Jupiter. At
7,000 kilometres across and 100 kilometres tall, “it is a truly towering beast of a storm,” Bolton says.

Other data, not yet made public, includes information on Jupiter’s powerful magnetic and gravity fields, as well as its
shimmering auroras. “Every dataset has a discovery aspect in it that we’re in the middle of trying to understand,” says
Bolton.

The next fly-by is scheduled for December 11.

This article is reproduced with permission and was first published on October 20, 2016.

Space

Kepler Finds Scores of Planets around Cool Dwarf Stars
NASA’s rebooted mission, K2, seeks out new worlds closely orbiting stars smaller than the sun

 By Ramin Skibba, Nature magazine on October 24, 2016

Credit: T PYLE, NASA Ames, JPL-Caltech

NASA’s Kepler observatory has spotted 20 planets that orbit cool, small stars—the largest such haul so far. These
long-lived stars, known as K and M dwarfs, are ubiquitous in the Milky Way and could turn out to host numerous
habitable planets.

After the Kepler spacecraft experienced a mechanical failure in 2013 that made it impossible for it to keep observing
its original targets, astronomers gave it a new mission, called K2. It now uses pressure from sunlight to help stabilize
the craft. The latest observations with K2 revealed 87 planet candidates, on top of 667 previously announced
candidates, almost all with sizes between those of Mars and Neptune.

Although the original Kepler mission examined many Sun-like stars, the majority of stars in our Galaxy are smaller,
fainter, cooler stars, known as red dwarfs. Such stars make up nearly half the targets of the K2 mission. “There are
more than 250 of them within 30 light-years—all over the place—which is why some other astronomers here might
call them the vermin of the sky,” says Courtney Dressing, an astrophysicist at the California Institute of Technology in
Pasadena who presented the research at a joint meeting of the American Astronomical Society's Division for Planetary
Sciences and the European Planetary Science Congress in Pasadena on October 19.

“Since these stars are the most common ones in the Galaxy, they help us learn how common life might be,” says
Victoria Meadows, an astronomer at the University of Washington in Seattle.

Of the planet candidates, 63 are smaller than Neptune, and a few could be even smaller than Earth. But these small
candidates remain to be confirmed; Dressing believes that they are probably “false positives” caused by other
phenomena such as cosmic rays or an instrumental glitch.

Five of the confirmed planet candidates are in or near their star’s ‘habitable zone’, the region that’s neither too close to
the star, nor too far from it, for life to arise. In our Solar System, the zone is roughly between the orbits of Venus and
Mars.
Red dwarf stars give off less energy than larger, hotter stars, so their planets’ habitable zones are closer in, often closer
to their star than Mercury is to the Sun. Such planets transit frequently, some orbiting their star within just a few
weeks, making it easier to use Kepler’s instruments to detect the tell-tale dimming of stellar light.
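
A small sketch with assumed, textbook-style values for a typical red dwarf (about 2 percent of the Sun's luminosity and 30 percent of its mass and radius) illustrates both points: scaling the habitable-zone distance with the square root of luminosity and applying Kepler's third law puts the zone inside Mercury's orbit with periods of a few weeks, and an Earth-sized planet blocks a much larger fraction of the small star's disk:

import math

# Illustrative sketch with assumed, typical values (not figures from the study):
# why red dwarfs are convenient transit targets. Luminosity, mass and radius are
# in solar units; the habitable-zone distance scales as sqrt(L), the orbital
# period follows Kepler's third law, and an Earth-sized planet blocks a
# fraction (R_earth / R_star)^2 of the star's light.

R_EARTH_IN_R_SUN = 0.00917   # Earth's radius in solar radii
MERCURY_AU = 0.39            # Mercury's distance from the Sun

def hz_distance_au(luminosity):
    """Distance receiving Earth-like sunlight, scaling as sqrt(L)."""
    return math.sqrt(luminosity)

def orbital_period_days(a_au, stellar_mass):
    """Kepler's third law: P[years]^2 = a[AU]^3 / M[solar masses]."""
    return 365.25 * math.sqrt(a_au**3 / stellar_mass)

def transit_depth(stellar_radius):
    """Fractional dimming caused by an Earth-sized planet."""
    return (R_EARTH_IN_R_SUN / stellar_radius) ** 2

for name, lum, mass, radius in [("Sun-like star", 1.0, 1.0, 1.0),
                                ("typical red dwarf", 0.02, 0.3, 0.3)]:
    a = hz_distance_au(lum)
    print(f"{name}: habitable zone at ~{a:.2f} AU "
          f"({'inside' if a < MERCURY_AU else 'outside'} Mercury's orbit), "
          f"orbital period ~{orbital_period_days(a, mass):.0f} days, "
          f"transit depth ~{transit_depth(radius) * 100:.3f}%")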

The focus on red dwarfs stems partly from the K2 mission’s constraints, which allow the astronomers less than three
months to observe stars in its field of view before having to rotate the craft. Moving from field to field poses a
challenge, but it also gives the team an opportunity to investigate more objects. “It’s fun to study a new set of stars
every 80 days,” Dressing says.

Dressing’s research also paves the way for more sensitive future missions designed to look for Earth-sized planets,
says Christa van Laerhoven, a planetary scientist at the Canadian Institute for Theoretical Astrophysics in Toronto.
Such missions include NASA’s Transiting Exoplanet Survey Satellite, scheduled to launch in December next year.

This article is reproduced with permission and was first published on October 21, 2016.

Space

NASA Considers a New Approach to Mars Exploration
With fewer future missions planned, the agency is rethinking how scientists will compete for access

 By Alexandra Witze, Nature magazine on October 7, 2016


As international interest in Mars grows, NASA's current missions to the Red Planet are winding down. Credit: NASA,
JPL

NASA is looking at a new way of studying Mars.

Starting in the 2020s, scientists who participate in the agency's Mars missions might no longer design and build their
own highly specialized payloads to explore the red planet. Instead, planetary scientists could find themselves
operating much as astronomers who use large telescopes do now: applying for time to use a spacecraft built with a
generic suite of scientific instruments.

The proposed change is spurred by NASA's waning influence at Mars. The agency's long-running string of spacecraft
is winding to a close, and international and commercial interests are on the rise. By the middle of the next
decade, European, Chinese, Emirati and SpaceX missions are as likely to be at Mars as NASA is.

Jim Watzin, head of NASA’s Mars exploration programme in Washington DC, suggested the new approach to the red
planet on 6 October at a virtual meeting of a Mars advisory group. “The era that we all know and love and embrace is
really coming to an end,” he said. “It’s important to recognize that the future is not going to be the same as the past.”

Throughout the 2000s NASA sent a sustained barrage of spacecraft to Mars, unique in the sheer number of robots
directed at one planetary target. But many have expired, and the ones still operating are growing old. NASA's three
functional orbiters—Mars Odyssey, Mars Reconnaissance Orbiter, and MAVEN—launched in 2001, 2005, and 2013
respectively. The Opportunity rover is in its thirteenth year, and the Curiosity rover is in its fifth.

Perhaps more significantly, NASA has only one more spacecraft scheduled in its Mars programme, a rover to launch
in 2020 that is tasked with gathering samples for an as-yet-unscheduled return to Earth. (The InSight geophysics
probe, slated for a 2018 launch, was not developed under the auspices of NASA's Mars programme.)

All eyes on Mars


NASA wants to start planning for an orbiting mission sometime after 2020. In June, the agency asked five companies
for information about what sorts of Mars orbiters they might be able to build, and how quickly and cheaply that could
be done. Five international partners have also said they would like to be involved, Watzin said.

Many non-NASA missions to Mars are already on the books. In 2020, the European Space Agency and China each
plan to launch Mars rovers, while the United Arab Emirates will send an orbiter. SpaceX of Hawthorne, California,
announced last month that it hoped to send its first Red Dragon landers to Mars starting in 2018.

This broadening context prompted Watzin to propose the new way of operating Mars missions. “I’m not trying to fix
something that’s broken,” he said. “I’m trying to open the door to a larger level of collaboration and participation than
we have today, looking to the fact that we’re going to have a larger pool of stakeholders involved in our missions.”

Under the new, facility-based approach, scientists would propose investigations using one or more instruments on a
future spacecraft. NASA would award observing time to specific proposals, much as telescope allocation committees
parcel out time on their mountaintops. This would differ from the current approach, in which instruments are
proposed, built and operated by individual teams of scientists.

Alfred McEwen, a planetary scientist at the University of Arizona in Tucson, noted one possible model. The Mars
Reconnaissance Orbiter's HiRISE camera, for which he is principal investigator, has taken thousands of images of
Mars based on public requests.

"We've managed to do all the things he described already without a new paradigm," McEwen says. "We have
distributed operations, we have multiple customers, we have a foreign contributed instrument. So my immediate
reaction to this idea was not very positive."

This article is reproduced with permission and was first published on October 6, 2016.

Space

New Billion-Star Map Reveals Secrets of the Milky Way
The first results from the Gaia mission are poised to rewrite astronomy textbooks, starting with an upgrade to the size
of our galaxy

 By Davide Castelvecchi, Nature magazine on September 14, 2016



The Milky Way and its neighbouring galaxies are shown in this map based on Gaia satellite data: brighter regions
indicate denser concentrations of stars. Credit: ESA, Gaia, DPAC

The European Space Agency (ESA) has released the largest, most detailed map of the Milky Way, pinpointing the 3D
positions of 1.1 billion stars, 400 million of which were previously unknown to science.

ESA’s Gaia space observatory mapped out the catalogue. It is expected to transform what astronomers know about
the Galaxy—allowing researchers to discover new extrasolar planets, examine the distribution of dark matter, and
fine-tune models of how stars evolve.

Hundreds of astronomers began to access the database as soon as it was made publicly available on September 14,
says Gaia project scientist Timo Prusti, who works at ESA's European Space Research and Technology Centre in
Noordwijk, the Netherlands. “My advice to the astronomical community is: please enjoy with us,” he said at a press
conference in Madrid.

Gaia has already found more stars than researchers expected, which suggests that the Milky Way may be slightly
bigger than previously estimated, says Gisella Clementini, a Gaia researcher at the Bologna Astronomical Observatory
in Italy.

But few new results were announced at the catalogue’s unveiling, as Gaia’s team were only allowed to do limited
analyses before the data release—contrary to the norm for space observatories, where mission scientists often have up
to a year’s exclusive use of their data before sharing them with the world.

One notable result, however, is a new measurement of the distance of the Pleiades, a cluster of stars in the
constellation Taurus that has been the subject of a long-running controversy. Where numerous measurements put the
Pleiades cluster at a distance of about 135 parsecs (440 light years) from the Sun, Gaia’s predecessor, the Hipparcos
mission, found it to be about 15 parsecs closer.

Gaia measured 134 parsecs, give or take 6 parsecs—suggesting that the Hipparcos findings were inaccurate. Anthony
Brown, an astronomer at the Leiden Observatory in the Netherlands who chairs Gaia’s data-processing collaboration,
stresses that the results are still preliminary and that they could change once Gaia collects more data. (Ultimately,
Gaia should be able to measure the distances of individual stars in the cluster for the first time, rather than an
average.)

But there’s scant possibility that Gaia’s results will be corrected so much that they agree with the Hipparcos results,
thinks David Soderblom, an astronomer at the Space Telescope Science Institute in Baltimore, Maryland. “It’s not
impossible but it sure isn’t very likely at this point,” he says. “That to me is basically the answer.” Soderblom expects
that the trouble with the Hipparcos measurement may have been in corrections made to account for the unusual
brightness of stars in the cluster.

Gaia launched in late 2013 and started its scientific mission in July 2014. The preliminary catalogue released today is
based on its first 14 months of data-taking. Gaia does not take still exposures in the way that ordinary telescope
cameras do. Instead, it spins continuously on its axis, completing a rotation every six hours and watching stars leave
streaks along its 1-gigapixel detector.

By comparing scans of the sky taken six months apart, researchers are able to triangulate and measure stars'
distances, using a method known as parallax that dates back to ancient Greece. For more than two million stars, the
catalogue also includes accurate measurement of the stars’ distances from the Sun and their motion, obtained by
comparing Gaia data with Hipparcos’s. In future releases, the catalogue will grow to include the distances and
velocities of more than a billion stars.
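
The geometry boils down to a simple reciprocal relation: a star's distance in parsecs is one divided by its parallax angle in arcseconds. A minimal sketch, restating figures already quoted in this article, shows just how small the angles involved are:

# Minimal sketch of the parallax relation: distance [parsecs] = 1 / parallax [arcseconds].

def parallax_mas(distance_pc):
    """Parallax angle, in milliarcseconds, of a star at the given distance."""
    return 1000.0 / distance_pc

# Gaia's preliminary Pleiades distance of ~134 parsecs corresponds to a tiny angle...
print(f"134 pc -> parallax of {parallax_mas(134):.2f} milliarcseconds")

# ...and measuring such a distance to 1% means measuring that angle to about 1% too.
print(f"1% of that angle: {0.01 * parallax_mas(134):.3f} milliarcseconds")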

With more years of observation, Gaia’s measurements will become so accurate that the distances of many of the
galaxy’s stars will be pinpointed to within 1%.

“What Gaia is going to do is going to be phenomenal,” says Wendy Freedman, an astronomer at the University of
Chicago in Illinois. “It will be the fundamental go-to place for astronomers for decades to come.”

This article is reproduced with permission and was first published on September 14, 2016.

Conservation

No Safe Haven for Polar Bears in Warming Arctic
Sea-ice retreat is affecting every one of the endangered species’ refuges

 By Quirin Schiermeier, Nature magazine on September 15, 2016

Credit: FLICKR

Not a single polar-bear haven in the rapidly warming Arctic is safe from the effects of climate change, researchers
have found.

Polar bears (Ursus maritimus) rely on sea ice for roaming, breeding, and as a platform from which to hunt seals.
When the ice melts in the summer, the bears spend several months on land, largely fasting, until the freeze-up allows
them to resume hunting. So if they are to survive, they need pockets of ice to persist almost year-round.

Some climate models suggest that most of the Arctic may be ice-free in summer by mid-century. But icy refuges near
the North Pole currently support 19 populations of polar bears, totalling some 25,000 individuals. Scientists weren’t
sure about the exact rate of ice retreat in these habitats, or whether some refuges might not yet be dwindling.

All of the Arctic refuges are in fact on the decline, a detailed examination of satellite data now suggests.
Mathematician Harry Stern and biologist Kristin Laidre at the University of Washington in Seattle used a 35-year
satellite record to examine each of the 19 population areas, which range from 53,000 to 281,000 square kilometres in
size. For each, they calculated the dates on which sea ice retreated in the Arctic spring and advanced in the autumn, as
well as the average summer sea-ice concentration and number of ice-covered days.
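
The per-region bookkeeping can be sketched in a few lines, assuming an illustrative 50 percent concentration threshold (the study's own definitions of the retreat and advance dates may differ):

import numpy as np

def ice_season_metrics(daily_concentration, threshold=0.5):
    """daily_concentration: 365 regional-mean sea-ice concentrations (0-1),
    indexed by day of year. Returns spring retreat day, autumn advance day,
    mean summer (June-August) concentration and number of ice-covered days."""
    conc = np.asarray(daily_concentration, dtype=float)
    below = np.where(conc < threshold)[0]
    retreat_day = int(below[0]) if below.size else None       # first day below the threshold
    advance_day = int(below[-1]) + 1 if below.size else None  # first day back above it
    summer_mean = float(conc[151:243].mean())                 # days 152-243, roughly June-August
    ice_covered_days = int((conc >= threshold).sum())
    return retreat_day, advance_day, summer_mean, ice_covered_days

# Synthetic example: concentration peaks in March and bottoms out in September.
days = np.arange(365)
demo = 0.5 + 0.5 * np.cos(2 * np.pi * (days - 75) / 365)
print(ice_season_metrics(demo))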

In all the refuges, the researchers found a trend towards sea-ice retreating earlier in spring and advancing later in
autumn. The time span between the sea-ice maximum in March and the sea-ice minimum in September has
lengthened by up to nine weeks since 1979 when satellite observations began, they report in The Cryosphere.

Under strain
The measurements show that polar bear habitats are all being put under strain, the researchers say. “The spring ice
break-up and fall ice advance roughly bound the duration of time polar bears have to feed, find mates and breed,”
Laidre says.
Dwindling ice conditions have been previously shown to affect polar bears’ abundance and health: for example, polar
bear metabolism doesn’t seem to slow much when sea ice melts and food becomes scarce, suggesting that the bears
don’t have a way to conserve energy to survive summer fasts.

Five Arctic range nations—the United States, Canada, Greenland, Norway and Russia—in 2015 adopted a ten-year
circumpolar action plan on polar-bear conservation. Using common measurements of habitat change for all polar-
bear refuges will guide that plan’s implementation and help to coordinate national conservation efforts, says Dag
Vongraven of the Norwegian Polar Research Institute in Tromsø, who co-chairs the polar-bear specialist group of the
International Union for Conservation of Nature.

This article is reproduced with permission and was first published on September 14, 2016.

Evolution

Tiny Pterosaur Claims New Perch on Reptile Family Tree
Fossil find suggests that cat-size reptile lived alongside birds and larger pterosaurs

 By Ramin Skibba, Nature magazine on September 1, 2016


Pterosaur bones found in western Canada came from a specimen with a wingspan of just 1.5 meters. Credit: Mark
Witton

When dinosaurs roamed Earth, pterosaurs ruled the skies. The largest of these ancient reptiles had wingspans of 10
metres or more. But fossil fragments unearthed in western Canada suggest that these giant flying reptiles co-existed
with a more diminutive form — closer to the size of an albatross.

The finding is preliminary, but if it holds, it could upend scientists’ view of pterosaurs’ evolution, and their eventual
extinction 66 million years ago.

The fossils — an upper arm bone and vertebrae discovered on Hornby Island in British Columbia — came from a
nearly full-grown pterosaur that had a wingspan of just 1.5 metres and was about as tall as a housecat, scientists
report on 31 August in Royal Society Open Science. They suggest the existence of a species about 77 million years
ago, during the late Cretaceous period, that was much smaller than the giant pterosaurs thought to dominate then.

“It’s quite different from other animals we’ve studied. There hasn’t really been evidence before of small pterosaurs at
this time period,” says Elizabeth Martin-Silverstone, the study’s lead author and a palaeobiologist at the University of
Southampton, UK.

At arm’s length
She and her colleagues examined a thin slice of the arm bone — a humerus — and analysed it on a microscopic level,
looking at how the bone had maintained and reworked itself to get an idea of the animal’s growth stage. They also
found that the vertebrae were beginning to fuse together. Together, these demonstrated that it was nearly a full-grown
adult when it died.
The pterosaur was about the size of a housecat. Credit: Mark Witton

Pterosaurs’ bones were hollow, with thin walls, so relatively few have survived as fossils. And small pterosaurs are
particularly tough to identify, which means that the fossils that have been found give a limited picture of the original
diversity of the animals. But in recent years, scientists have discovered specimens that suggest pterosaurs grew larger
as they evolved. The biggest yet known was the size of a small plane — and lived during the late Cretaceous period; the
smallest thought to be living then had wingspans of roughly 2.5 metres.

The latest study relies on only a few bones, so it does not provide definitive proof that small pterosaur species existed
alongside the larger ones, says Alexander Kellner, a palaeontologist at the National Museum of Brazil in Rio de
Janeiro.

“I praise the authors for their efforts, but the specimen is not very complete,” he says. “If they had a skull, jaw or neck
bones, that would help. The classification? I don’t know. It could be anything.”

Identity crisis
Study co-author Mark Witton, a palaeontologist at the University of Portsmouth, UK, acknowledges the work’s
limitations. “We’ve only got one data point, so don’t rewrite the textbooks yet,” he says.

But he and his colleagues say that they carefully ruled out alternative explanations for the small size of the fossilized
bones. The fused backbone means that the bones did not come from a bird. And it could not be a nyctosaur, a
previously known small marine pterosaur, because the arm bone lacked that creature’s distinctive hatchet-shaped
crest, where the flight muscles attach, says Martin-Silverstone.

The Cretaceous ended 66 million years ago with a mass extinction that saw pterosaurs vanish alongside the dinosaurs.

In general, the extinction wiped out bigger species, while smaller animals like many birds managed to muddle
through and survive. If the latest finding is confirmed, it will turn out that birds were not the only small-winged
vertebrates living then, although the tiny pterosaur’s unfortunate fate would imply that being little was no guarantee
of survival.

“They have plenty of new material to determine that this is a new species of pterosaur,” says Michael Habib, a
palaeontologist at the University of Southern California in Los Angeles. “If there’s one, there were probably others.
Then we’d need to rethink what we previously thought about survivability of these little ones.”

This article is reproduced with permission and was first published on August 31, 2016.

Mental Health

To Diagnose Mental Illness, Read the Brain
Rather than relying on symptoms, scientists are developing a “brain circuits first” approach to mental health.
 By Sara Chodosh on June 25, 2016

Credit: Getty Images/iStockphoto Thinkstock Images (MARS)



Although scientists have learned a lot about the brain in the last few decades, approaches to treating mental illnesses
have not kept up. As neuroscientists learn more about brain circuits, Stanford psychiatrist Amit Etkin foresees a time
when diagnoses will be based on brain scans rather than symptoms. Etkin, who will be speaking at the World
Economic Forum’s Annual Meeting of the New Champions in Tianjin, China, from June 26 to 28, spoke with
Scientific American about his research on the neurological basis of emotional disorders and the future of mental
health treatment.

[An edited transcript of the interview follows.]

The high cost of treating mental illness doesn’t get talked about very much. Why is that?
It’s a really interesting issue. The costs associated with mental illness are not just the care of people who have an
illness, which often starts early in life and continues as a lifelong process, but also the cost to employers in decreased
productivity and the cost to society in general. A report that came out recently in Health Affairs showed that spending
within our health system in the U.S. is greater for mental illness than for any other area of medicine, and yet our
understanding of these illnesses is incredibly backwards. Treatments are no different than they were 40 years ago, so
that feels like a problem that is only getting bigger without an obvious solution.

Why hasn’t there been much progress?


It was really not until about 10 years ago that [mental health professionals] started realizing how little difference we
have made. There are a few fundamental issues and mistakes we’ve made. One is that in the absence of knowing what
the causes of the illnesses that we treat are, we focus on the symptoms, and that has already led us down the wrong
path. If you go to another country and you ask somebody to tell you their symptoms, as a clinician you might have the
sense that they have anxiety or depression. In Asian countries they express that in a somatic way: “I can’t sleep” or “I
feel weak.” The biology cannot be that different, but the symptoms are different because they’re culturally bound. If
you look at different parts of the U.S. you’ll see people expressing symptoms in different ways depending on their
local culture. If that’s the case, then a symptom-based definition is problematic. The long and short of it is that people
have named syndromes or disorders that they don’t actually know represent a valid entity that is distinct from
another entity.

What do you see as the path forward? How do we rectify those mistakes?
Realizing these errors has coincided with the era of imaging, and even more recently with the really exciting focus on
individual subject analyses: is there something about this particular person’s brain that allows me to predict
something? I call this the circuits first approach. We understand behavior is essentially underpinned by brain circuits.
That is, there are circuits in the brain that determine certain types of behaviors and certain types of thoughts and
feelings. That’s probably the most useful way of organizing brain function. If you can start characterizing circuit
disruptions for compensatory symptoms at an individual subject level and then link that to how you can provide
interventions, then you can get away completely from diagnoses and can intervene with brain function in a directed
way.

How close do you think we are to making diagnoses based on something more tangible than
symptoms?
In the selected pockets where we have the best data, within the next five to 10 years. And that’s really more a factor of
how much we need to show to get FDA approval and get into a commercialized product that people can use. Getting it
out of the lab, in other words.

In the lab you’re using transcranial magnetic stimulation (TMS), but is the consumer product
ultimately a pill?
That’s a really interesting question. There’s been this assumption that medications are either preferred or maybe the
best way to go about treating things, and that comes a bit from the history of psychiatry but also from the rest of
medicine. I’m not sure that a pill is necessarily going to be the best approach for psychiatry. Washing the brain in a
drug that affects many parts of the brain, and also affects many parts of the body, is a pretty crude and nonspecific
way to affect a very discrete part of the brain. In contrast, as our neurostimulation approaches have proved, we can
have a lot more specificity for our target. And if we set as our own bar that the treatment has to work within days or a
week, then we can actually achieve something powerful and quick to the point where I’m not sure there would be any
reason to take medication in many contexts if you can achieve it with stimulation. We’re not there yet for sure, but I
can definitely see it evolving that way and that’s what we in my lab are very excited about.

Have you gotten any feedback from people about potentially doing treatments where you’re directly
stimulating parts of the brain? Are people comfortable with it?
We had an individual with chronic PTSD who came to our study where our goal was to understand how
psychotherapy works for people with PTSD. With PTSD, really the only effective treatment is psychotherapy. There
are some medications but they don’t work all that well. Part of the study was simply mapping the brain with TMS
while measuring brain activity with fMRI. Nothing therapeutic at all. Psychotherapy is a very effective but very
challenging therapy. It involves talking about your trauma and regaining control over your experience. He said that if
there were any way he could have TMS instead he would strongly prefer that. Having to go through a difficult therapy
where you’re trying to gain control over a traumatic experience is not appealing to people even if you tell them that it’s
a very effective intervention. So having something that is less emotionally laden and more rational in its approach I
think is very appealing to people.

There’s been a backlash against taking medication and the scattershot way that we approach things, and that’s leading
people to look at other interventions that don’t have much evidence of efficacy behind them. That’s mostly out of
frustration and desperation. Patients want to get better faster so they can go back to their lives and their families, and
if you offer them an effective alternative I don’t see why that wouldn’t be very popular. In psychiatry in particular it
can be a real comfort because everything else can seem so subjective and having something that has a number, that
has an objective test, helps to validate the experience of the person. The person suffering from a mental illness knows
they’re suffering, but they can’t say, “my level is 17 of this thing in my blood,” and that hugely affects how society
views mental illness and even how patients themselves view their illnesses.

This interview was produced in conjunction with the World Economic Forum.

Mental Health

U.S. Mental Health Chief: Psychiatry Must Get Serious about Mathematics
New NIMH chief Joshua Gordon says he will focus on quick wins, brain circuits and mathematical rigor

 By Alison Abbott, Nature magazine on October 28, 2016

Credit: CATHERINE MACBRIDE Getty Images

The US National Institute of Mental Health (NIMH) has a new director. On September 12, psychiatrist Joshua
Gordon took the reins at the institute, which has a budget of US$1.5 billion. He previously researched how genes
predispose people to psychiatric illnesses by acting on neural circuits, at Columbia University in New York. His
predecessor, Thomas Insel, left the NIMH to join Verily Life Sciences, a start-up owned by Google’s parent company
Alphabet, in 2015. Gordon says that his priorities at the NIMH will include “low-hanging clinical fruit, neural circuits
and mathematics—lots of mathematics", and explains to Nature exactly what that means.

What do you plan to achieve in your first year in office?


I won’t be doing anything radical. I am just going to listen to and learn from all the stakeholders—the scientific
community, the public, consumer advocacy groups and other government offices.

But I can say two general things. In the past twenty years, my two predecessors, Steve Hyman [now director of the
Stanley Center for Psychiatric Research at the Broad Institute in Cambridge, Massachusetts] and Tom Insel,
embedded into the NIMH the idea that psychiatric disorders are disorders of the brain, and to make progress in
treating them we really have to understand the brain. I will absolutely continue this legacy. This does not mean we are
ignoring the important roles of the environment and social interactions in mental health—we know they have a
fundamental impact. But that impact is on the brain. Second, I will be thinking about how NIMH research can be
structured to give pay-outs in the short-, medium- and long-terms.

How has neuroscience changed since you completed your residency in 2001?
The advent of incredibly powerful tools to observe and alter activity in a subset of neurons, such as optogenetics, has
been transformational. It is allowing us to get at questions of how neural circuits produce behaviour—a research
approach that may soon generate new treatments for psychiatric disorders.

Which of the recent NIMH programmes do you find particularly exciting?


One is the Human Connectome Project. The project has scanned the brains of more than a thousand healthy people to
generate individual maps of their neural circuitry, the ‘wiring’ in their brains that accounts for their particular
personalities. At the NIMH, we have created standardized databases, designed by the scientific community, to store
this information. The Human Connectome Project is going to be a tremendous resource for the field—maybe not quite
as impactful as the Human Genome Project, but on that scale, I think.

A clinical programme that deserves as much attention, but perhaps doesn’t get it, is the Coordinated Specialty Care
project for individuals facing their first psychotic episode. Some small studies have shown that coordinating different
clinical and social-support programmes helps individuals to cope better.

Is this an example of what you call low-hanging fruit?


Yes. We are now looking for similarly significant clinical problems where good, evidence-based interventions exist but
are not widely adopted. For example, we have a range of screening tools that we think can help reduce the suicide
rate, which has been rising in the United States for unclear reasons. It could be advantageous to incorporate universal
suicidality screening as a matter of routine into all emergency rooms. People often present in emergency rooms with
injuries that result from suicide attempts but don’t admit to it—unless they are explicitly asked.

What about medium-term pay-outs?


Neural circuits could be delivering treatments in 10 or 15 years. We don’t yet know exactly which circuits we would
want to modify to treat psychiatric disorders in humans. But now is the time to start thinking about which tools we
are going to need to make this translational step possible, and invest in them.

Most work on neural circuits has been done in genetically modified mice, where it is relatively easy to control the
activity of a few very specific cells in a particular brain area using tools such as optogenetics. We’ll need safe methods
for humans. Should we be thinking in terms of viruses that can be directed to, and change activity in, specific
neurons? Or should we be thinking of ways to stimulate or inhibit these cells indirectly, using transcranial magnetic
stimulation or deep-brain stimulation, for example?

And the long term?


The really transformative treatments that are going to change mental-health care in the long term will depend on us
learning how the brain works as a whole. We are all tempted to reduce the huge complexity of the brain into
understandable chunks. But to appreciate and exploit that complexity, we will need to be able to integrate everything
we know, from molecular biology to behaviour, into our models of how the brain works. That requires serious math.
How does the structure of a neuron affect its integration into a circuit? How does that circuit affect the neural system
that it fits into? How does the dynamic activity in these neural systems drive behaviour? Fully characterizing each of
these levels and then integrating them across scales requires a level of mathematical rigour that most of us, including
myself, have not really brought to bear on the problem.

Isn’t the mathematics going to get very difficult for neuroscientists?


It’s not so difficult—I’m not saying that we are going to need string theorists! It’s just a question of appropriate
training in math for students. In the future, I hope that every experimentalist will also be a theoretician. But at this
stage we need to encourage experimental neurobiologists to form long-term interdisciplinary collaborations with
theoreticians, mathematicians or physicists.

We need to inject more math into every level of the NIMH portfolio. Math can also have a short-term impact in
psychiatry for things such as predicting individual responses to drugs and improving precision medicine more
generally.

In its move to a circuits-based approach, the NIMH introduced the Research Domain Criteria
(RDoC), which encourages clinical researchers to investigate specific behaviours rather than broad
diagnoses. It is widely disliked: will you be maintaining it?
Clinical neuroscience has typically tried to identify the neurobiology that underlies diagnoses [such as depression].
That hasn’t got us very far. Maybe if we instead try to understand the neurobiology underlying the various domains of
behaviour [such as apathy], we’ll get better insight. I see RDoC as something potentially very valuable, something I
am likely to keep—although it may need a few tweaks to extract the most value out of it.

Are non-human primates still necessary in neuroscience research?


Most of our knowledge about the brain has been gained in mice. It is hard for me to believe that we’ll really be able to
translate the knowledge that we have won in mice into the design of new treatments for humans without going
through an intermediate species with an elaborated prefrontal cortex and a large brain. So unfortunately, yes, I think
we do still need to use non-human primates. We need to do so judiciously though—the welfare of animals is
fundamental, and we need to minimize the numbers of all of the animals that we use.
This article is reproduced with permission and was first published on October 26, 2016.

Space

U.S. Sharpens Surveillance of Crippling Solar Storms
Next-generation space weather model will map the danger facing power grids

 By Alexandra Witze, Nature magazine on September 20, 2016

Bursts of solar activity can send a stream of charged particles towards Earth. Credit: NASA, Solar Dynamics
Observatory

In the fight to protect Earth from solar storms, the battle lines are drawn in space at a point 1.6 million kilometres
away. There, a US National Oceanic and Atmospheric Administration (NOAA) satellite waits for electrons and
protons to wash over it, a sign that the Sun has burped a flood of charged particles in our direction.

As early as the end of this month, NOAA should have a much better idea of just how dangerous those electromagnetic
storms are. The agency will begin releasing forecasts that use a more sophisticated model to predict how incoming
solar storms could fry electrical power grids. It will be the clearest guide yet as to which utility operators, in what
parts of the world, need to worry.

“This is the first time we will get short-term forecasts of what the changes at the surface of the Earth will be,” says Bob
Rutledge, lead forecaster at NOAA’s Space Weather Prediction Center in Boulder, Colorado. “We can tell a power-grid
customer not only that it will be a bad day, but give them some heads-up on what exactly they will be facing.”

Powerful solar storms can knock out radio communications and satellite operations, but some of their most
devastating effects are on electrical power grids. In 1989, a solar storm wiped out Canada’s entire Hydro-Québec grid
for hours, leaving several million people in the dark. In 2003, storm-induced surges fried transformers in South
Africa and overheated others at a nuclear power plant in Sweden. But if a power company knows that a solar storm is
coming, officials can shunt power from threatened areas of the network to safer ones or take other precautions.

Until now, NOAA had warned of solar activity using the planetary K-index, a scale that ranks the current geomagnetic
threat to the entire Earth. The new ‘geospace’ forecast, which draws on more than two decades of research, comes in
the form of a map showing which areas are likely to be hit hardest (G. Tóth et al. J. Geophys. Res. Space
Phys. 110, A12226; 2005).
Knowing that Canada, for instance, will be hit harder than northern Europe helps grid operators, says Tamas
Gombosi, a space physicist at the University of Michigan in Ann Arbor who helped to develop the model. He
compares it to having a hurricane forecast that says a storm will hit Florida, rather than just somewhere on the planet.
Credit: Nature, September 20, 2016, doi:10.1038/537458a; Source: NOAA

Magnetosphere model
Space-weather forecasting is as rudimentary as conventional weather forecasting was three or four decades ago, says
Catherine Burnett, space-weather programme manager at the UK Met Office in Exeter. Researchers have developed
different models to describe various portions of the Sun–Earth system, but linking them into a coherent framework
has been difficult. The Michigan approach combines 15 models that collectively describe the solar atmosphere
through interplanetary space and into Earth’s magnetic realm. The NOAA forecast incorporates three of those: one
model describing Earth’s entire magnetosphere, another focusing on the inner magnetosphere and one for electrical
activity in the upper atmosphere.

The inner magnetosphere chunk is crucial to the model’s overall success, says developer Gábor Tóth at the University
of Michigan. It describes how energetic particles flow and interact as they approach Earth’s poles, and how the
particles affect magnetism at the planet’s surface. Alerts can provide roughly 20 minutes to one hour of warning.
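
That lead time follows almost directly from geometry: the upstream monitor sits roughly 1.6 million kilometres sunward of Earth, so the warning is simply the travel time of the disturbance from there to the planet. A quick sketch with assumed but typical solar-wind speeds:

# Rough illustration (assumed wind speeds, not NOAA's forecast model): the
# warning time is the travel time from the upstream monitor, about 1.6 million
# kilometres sunward of Earth, to the planet itself.
MONITOR_DISTANCE_KM = 1.6e6

for label, speed_km_s in [("slow solar wind", 400), ("fast storm front", 1200)]:
    minutes = MONITOR_DISTANCE_KM / speed_km_s / 60
    print(f"{label} ({speed_km_s} km/s): about {minutes:.0f} minutes of warning")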

NOAA’s improved forecasts are part of a push by US agencies to implement a national space-weather strategy issued
last year by the White House. Regulators will also soon require power-grid operators to produce hazard assessments
that include the threat of solar storms. “Without those two pieces, we wouldn’t have remotely the interest we have
now,” says Antti Pulkkinen, a space-weather researcher at NASA’s Goddard Space Flight Center in Greenbelt,
Maryland. “It really has changed the game.”

NOAA plans to continue refining its forecasts as new research rolls in. The possible improvements include
incorporating how the geology beneath power grids affects the intensity of a solar storm. Fluctuating magnetic fields
can induce electrical currents to flow in the ground, which sets up further problems for transmission lines. “All of this
is terrifically complicated,” says Jeffrey Love, a geomagnetics researcher at the US Geological Survey in Golden,
Colorado.
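
The role of the ground can be sketched with a textbook simplification (assumed here; the operational models are far more elaborate): over uniformly conducting ground, a sinusoidal magnetic disturbance of amplitude B and angular frequency ω induces a surface electric field of roughly B·sqrt(ω/(μ0·σ)), so the more resistive the rock, the stronger the field pushing currents into transmission lines:

import math

# Textbook-style sketch (uniform half-space, single sinusoidal disturbance;
# not the NOAA/Michigan model): the induced surface electric field scales as
# B * sqrt(w / (mu0 * sigma)), so resistive ground (small sigma) means a
# stronger geoelectric field for the same magnetic storm.

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, henry per metre

def geoelectric_field_v_per_km(b_amplitude_nT, period_s, conductivity_S_per_m):
    b = b_amplitude_nT * 1e-9            # disturbance amplitude in tesla
    w = 2 * math.pi / period_s           # angular frequency in rad/s
    e_v_per_m = b * math.sqrt(w / (MU0 * conductivity_S_per_m))
    return e_v_per_m * 1000.0            # volts per kilometre

# The same storm (a 100-nanotesla swing every two minutes) over two kinds of ground:
for label, sigma in [("conductive sediments (0.1 S/m)", 0.1),
                     ("resistive shield rock (0.001 S/m)", 0.001)]:
    print(f"{label}: ~{geoelectric_field_v_per_km(100, 120, sigma):.2f} V/km")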

In their latest paper, Love, Pulkkinen and their colleagues describe the most detailed map of these ‘geoelectric
hazards’ across part of the United States (J. J. Love et al. Geophys. Res. Lett. http://doi.org/bqpm; 2016). Of the
areas surveyed so far, those at the highest risk are the upper Midwestern states of Minnesota and Wisconsin, where
complex geology induces strong electrical currents.

Adding in 3D models of these ground currents will improve the next generation of NOAA forecasts, Rutledge says.
“This is by no means the end.”

This article is reproduced with permission and was first published on September 20, 2016.

Space

Universe Has 10 Times More Galaxies Than Researchers Thought
The new estimate could help astronomers better understand how galaxies form and grow

 By Davide Castelvecchi, Nature magazine on October 14, 2016



Credit: NASA, ESA, and The Hubble Heritage Team (STScI/AURA)



The observable Universe contains about two trillion galaxies—more than ten times as many as previously estimated,
according to the first significant revision of the count in two decades.

Since the mid-1990s, the working estimate for the number of galaxies in the Universe has been around 120 billion.
That number was based largely on a 1996 study called Hubble Deep Field. Researchers pointed the Hubble Space
Telescope at a small region of space for a total of ten days so that the long exposures would reveal extremely faint
objects.

This view encompassed galaxies up to 12 billion light years away, which we see as they existed less than two billion
years after the Big Bang. Astrophysicists then counted the galaxies within that narrow field of view and extrapolated
the number to the full sky—under the assumption that it would look similar in all directions—to get to the 120 billion
figure.
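
The arithmetic of that extrapolation is just a ratio of solid angles. A back-of-the-envelope sketch with illustrative numbers (a Deep-Field-sized patch of about 5 square arcminutes containing a few thousand detected galaxies; the real counts differ somewhat) lands near the same working estimate:

import math

# Back-of-the-envelope sketch with illustrative (assumed) numbers: count the
# galaxies in a small patch of sky, then scale by the full sky's solid angle.
FULL_SKY_SQ_ARCMIN = (4 * 180**2 / math.pi) * 3600   # ~1.5e8 square arcminutes

field_sq_arcmin = 5.0        # assumed Deep-Field-sized patch
galaxies_in_field = 4000     # assumed number of galaxies detected in the patch

total = galaxies_in_field * FULL_SKY_SQ_ARCMIN / field_sq_arcmin
print(f"Extrapolated total: about {total:.1e} galaxies")   # ~1.2e11, i.e. ~120 billion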

However, there weren't enough galaxies in the Hubble Deep Field image to account for the density of matter
distributed throughout the Universe. The missing matter had to be in the form of galaxies too faint to see, as well as gas and
dark matter. “We always knew there were going to be more galaxies than that,” says astrophysicist Christopher
Conselice of the University of Nottingham, UK. “But we didn't know how many existed because we couldn't image
them.”

More recent deep-field studies conducted using Hubble—after NASA astronauts upgraded the observatory in 2009—
and other telescopes enabled Conselice and collaborators to count visible galaxies out to distances of 13 billion light
years. They were able to plot the number of galaxies of a given mass that corresponded to various distances away from
Earth. The researchers then extrapolated their estimates to encompass galaxies too small and faint for telescopes to
pick up. Based on this, they calculated that the observable Universe should contain 2 trillion galaxies. The paper will
be published in the Astrophysical Journal.

Moments in time
The team’s count was not too surprising, says astronomer Steven Finkelstein at the University of Texas at Austin, but
it’s still helpful to put a number on it. “I don't know of anyone who has done this before,” he says. Conselice says that
theorists had expected the number to be even higher; he and his collaborators now plan to look into this discrepancy.

At present, researchers can only directly observe about 10% of the 2 trillion galaxies. But that will change in two years
once Hubble's successor, the James Webb Space Telescope, is deployed, Conselice says. That telescope should also be
able to peer much further back in time, to see how galaxies started to form, he adds.

The study might lead to an improved understanding of galaxies by refining galaxy-formation simulations and enabling
more detailed assessments of how they grow.

But for now, his results are consistent with the current general theory of how galaxies form, in which most start very
small, and then undergo a furious period of mergers and acquisitions, says Debra Elmegreen, an astronomer at Vassar
College in Poughkeepsie, New York.

Because the Universe as seen today is a snapshot in time, many of the galaxies included in the new estimate no longer
exist. They have merged into larger galaxies in the billions of years it took their light to reach Earth. The current
number of galaxies is therefore expected to be much lower than 2 trillion.

This article is reproduced with permission and was first published on October 14, 2016.

Public Health
Vipers, Mambas and Taipans: The Escalating Health Crisis over Snakebites
Snakes kill tens of thousands of people each year, but experts can’t agree on how to deal with an antivenom shortage

 By Carrie Arnold, Nature magazine on August 30, 2016

Bites from venomous snakes such as the Jameson's mamba (Dendroaspis jamesoni) are a public-health crisis. Credit:
WIKIMEDIA COMMONS

Abdulsalam Nasidi's phone rang shortly after midnight: Nigeria's health minister was on the line. Nasidi, who worked
at the country's Federal Ministry of Health, learnt that he was needed urgently in the Benue valley to investigate a
cluster of dying patients. People were bleeding out of their noses, their mouths, their eyes. Names of spine-chilling
viruses such as Ebola, Lassa and Marburg raced through Nasidi's mind.

When he arrived in Benue, he found people splayed on the ground and tents serving as makeshift hospital wards and
morgues. But Nasidi quickly realized that the cause of the mystery illness was millions of times larger than any virus.
The onset of the rainy season had brought the start of spring planting for farmers in the valley, and flooding had
disturbed the resident carpet vipers (Echis ocellatus). Many farmers were simply too poor to buy boots—and their
exposed feet became targets for the highly venomous snakes.

Nasidi wanted to help, but he found himself with limited tools. He had only a small amount of antivenom with which
to neutralize the toxin—and it quickly ran out. Once the hospital exhausted its supply, people stopped coming. No one
knows how many people were killed. In an average year, hundreds of Nigerians die from snakebite, and that rainy
season, which started in 2012, was far from average.

Snakebites are a growing public-health crisis. According to the World Health Organization, around 5 million people
worldwide are bitten by snakes each year; more than 100,000 of them die and as many as 400,000 endure
amputations and permanent disfigurement. Some estimates point to a higher toll: one systematic survey concluded
that in India alone, more than 45,000 people died in 2005 from snakebite—around one-quarter the number that died
from HIV/AIDS (see 'The toll of snakebite'). “It's the most neglected of the world's neglected tropical diseases,” says
David Williams, a toxinologist and herpetologist at the University of Melbourne, Australia, and chief executive of the
non-profit organization Global Snakebite Initiative in Herston.

Many of those bites are treatable with existing antivenoms, but there are not enough to go around. This long-standing
problem became international news in September 2015, when Médecins Sans Frontières (MSF, also known as Doctors
Without Borders) announced that the last remaining vials of the antivenom Fav-Afrique, used to treat bites from
several of Africa's deadliest snakes, were about to expire. The French pharma giant Sanofi Pasteur in Lyons had
decided to cease production in 2014. MSF estimates that this will cause an extra 10,000 deaths in Africa each year—
an “Ebola-scale disaster”, according to Julien Potet, a policy adviser for MSF in Paris. Yet, because most of those
affected by snakebites are in the poorest regions of the world, the issue has been largely ignored.

Spotlight on snakes
In May, however, the crisis was discussed for the first time at the annual World Health Assembly meeting in Geneva,
Switzerland. The world's handful of snakebite specialists gathered in a small conference room in the Palais des
Nations—although they shared concern over the problem, they were split about how to solve it. Many want to use
synthetic biology and other high-tech tools to develop a new generation of broad-spectrum antivenoms. Others argue
that existing antivenoms are safe, effective and low cost, and that the focus should be on improving their production,
price and use. “From the physician perspective, patient care and public health comes before anything new,” says
Leslie Boyer, who directs an institute dedicated to antivenom study at the University of Arizona, Tucson.

The debate mirrors those around many other developing-world challenges, from improving agriculture to providing
clean drinking water. Do people need high-tech solutions, or can cheaper, lower-tech remedies do the job? The
answer is simple to Jean-Philippe Chippaux, a physician working on snakebite for the French Institute of Research for
Development in Cotonou, Benin. “We have the ability to fix this problem now. We just lack the will to do it,” he says.

Every December, Williams sees snakebite victims flood into the Port Moresby General Hospital in Papua New Guinea.
Nearly all of them were bitten by the taipan (Oxyuranus scutellatus), one of the world's deadliest snakes, which
emerges at the start of the rainy season. The venom stops a victim's blood from clotting, paralyses muscles and leads
to a slow, agonizing death. It seems a far cry from Australia, where Williams is based. “There's this incredible
suffering just 90 minutes away from the modern world,” he says.

Yet Williams knows that these people are the lucky ones. The hospital ward, which might be treating as many as eight
taipan victims at any time, is often the only place in the country with antivenom drugs. Without them, some 10–15%
of all snakebite victims die; with them, just 0.5% do. The situation is reflected around the world. “Many countries
don't want to admit that they have such a primeval-sounding problem,” Chippaux says.

The method used to make antivenom has changed little since French physician Albert Calmette developed it in the
1890s. Researchers inject minuscule amounts of venom, milked from snakes, into animals such as horses or sheep to
stimulate the production of antibodies that bind to the toxins and neutralize them. They gradually increase doses of
venom until the animal is pumping out huge amounts of neutralizing antibodies, which are purified from the blood
and administered to snakebite victims.

Across much of Latin America, government-funded labs typically produce antivenoms and distribute them free of
charge. But in other areas, especially sub-Saharan Africa, these life-saving medications are too often out of reach.
Many governments lack the infrastructure or political will to purchase and distribute antivenom. Bribery and
corruption often jack up the price of an otherwise inexpensive drug from a typical wholesale cost of US$18 to $200
per vial to a retail cost of between $40 and $24,000 for a complete treatment, according to a 2012 analysis. Not all
hospitals and clinics can afford the antivenom, and some won't risk buying it because their patients either can't pay
for it or won't, doubting that it really works.

With no reliable market for the medicines, some pharmaceutical companies have halted production. Sanofi Pasteur
stopped making Fav-Afrique because, at an average retail price of around $120 per vial, it just couldn't sell enough to
make production worthwhile. A total of 35 government or commercial manufacturers produce antivenom for
distribution around the world, but only 5 now make the drugs for sub-Saharan Africa. In the absence of medicines,
snakebite victims have been known to drink petrol, electrocute themselves or apply a poultice of cow dung and water
to the bite, says Tim Reed, executive director of Health Action International in Amsterdam.

But there are also problems with the drugs themselves, says Robert Harrison, head of the Alistair Reid Venom
Research Unit at the Liverpool School of Tropical Medicine, UK. They often have a limited shelf life and require
continuous refrigeration, which is a problem in remote areas without electricity. And many are effective against just
one species of snake, so clinics need an array of medicines constantly on hand. (A few, such as Fav-Afrique, combine
antibodies to create a broad-spectrum product.)

Venoms from spiders and scorpions typically have only one or two toxic proteins; snake venoms can have more than
ten times that number. They are a “pandemonium of molecules”, says Alejandro Alagón, a toxinologist at the National
Autonomous University of Mexico in Mexico City. Researchers do not always know which proteins in this toxic soup
are the damaging ones—which is why some think that smarter biology could help.

Old problem, new solution


Ten years ago, teams led by Harrison and José María Gutiérrez, a toxinologist at the University of Costa Rica in San
José, began parallel efforts to create a universal antivenom for sub-Saharan Africa using 'venomics' and
'antivenomics'. The aim is to identify destructive proteins in venoms using an array of techniques, ranging from
genome sequencing to mass spectrometry, and then find the specific parts, known as epitopes, that provoke an
immunological response and are neutralized by the antibodies in antivenom drugs. The ultimate goal is to use the
epitopes to produce antibodies synthetically, using cells rather than animals, and develop antivenoms that are
effective against a wide range of snake species in one part of the world.

The scientists have made slow but steady progress. Last year, Gutiérrez and his colleagues separated and identified
the most toxic proteins from a family of venomous snakes known as elapids (Elapidae). By combining information
about the abundance of each protein and how lethal it is to mice, the team created a toxicity score to indicate how
important it was to neutralize a protein with antivenom, a first step towards making the treatment.
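
The paper's exact arithmetic is not spelled out here, but the idea behind such a score can be illustrated with a short sketch. The Python snippet below assumes the score is a protein's relative abundance divided by its median lethal dose (LD50), so that abundant, highly lethal proteins rank highest; the formula, the protein names and every number are illustrative assumptions, not figures from the study.

```python
# Illustrative sketch of a venom "toxicity score": rank venom proteins by how
# urgent they are to neutralize. Assumption: score = relative abundance (%) / LD50
# (mg/kg, where lower LD50 means more lethal). All values below are hypothetical.

from dataclasses import dataclass


@dataclass
class VenomProtein:
    name: str
    abundance_pct: float    # share of the whole venom proteome, in percent
    ld50_mg_per_kg: float   # median lethal dose in mice; smaller = more lethal

    @property
    def toxicity_score(self) -> float:
        # Abundant proteins with a low LD50 dominate the ranking.
        return self.abundance_pct / self.ld50_mg_per_kg


# Hypothetical elapid venom composition, for illustration only.
venom = [
    VenomProtein("three-finger toxin A", abundance_pct=31.0, ld50_mg_per_kg=0.05),
    VenomProtein("phospholipase A2",     abundance_pct=22.0, ld50_mg_per_kg=0.30),
    VenomProtein("metalloproteinase",    abundance_pct=9.0,  ld50_mg_per_kg=2.50),
]

# Proteins at the top of this list would be the first targets for an antivenom.
for p in sorted(venom, key=lambda p: p.toxicity_score, reverse=True):
    print(f"{p.name:25s} score = {p.toxicity_score:7.1f}")
```

Ranking components in this way would let antivenom developers concentrate immunization and antibody selection on the handful of proteins that drive most of a venom's lethality.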

In March this year, a Brazilian team reported that they had gone further, designing short pieces of DNA that encode
key toxic epitopes in the venom of the coral snake (Micrurus corallinus), a member of the elapid family. Some mice
injected with the DNA generated antibodies against coral-snake venom, and the group then boosted the animals'
immune responses with synthetic venom proteins manufactured in bacterial cells. These and other advances led
Harrison to estimate that the first trials of new antivenoms in humans could be
just three or four years away. But with so few researchers working on the problem, a paucity of funding and the
biological complexity of snake venoms, he and others admit that this is an optimistic prediction.

Despite the growing literature on antivenomics, Alagón and Chippaux aren't convinced that the approach will help.
Alagón estimates that newly developed antivenoms would need to be priced at tens of thousands of dollars per dose to
be financially viable to produce, and that no biotech or pharma company would manufacture one without substantial
government subsidies. Compare that, he says, to the rock-bottom price of many existing antivenoms. “You can't get
cheaper than that,” he says. “We can make an entire lot of antivenoms in one day using technology that's been
available for 80 years.”

Finding someone to produce new medications might be a greater challenge than actually developing them, Williams
acknowledges: governments or non-governmental organizations (NGOs) will almost certainly have to step in to help
to defray the development costs. But he argues that now is the time to research alternative approaches. These could
“revolutionize the treatment of snakebite envenoming in the next 10–15 years”, Williams says.

The room where it happened


All these tensions, brewing for nearly a decade, came to a head at the Geneva meeting in May. Around 75 scientists,
public-health experts and health-assembly delegates crowded around three long tables in a third-floor conference
room of the Palais des Nations. Spring rain pelted the tall windows.

Lights were dimmed, and then the screams of a toddler filled the room. A short documentary co-produced by the
Global Snakebite Initiative told the story of a girl in Africa who was bitten by a cobra; her parents carried her for
days over rocky roads to find antivenom. They arrived in time, and the girl survived, but she lost the use of her arm. Her sister
had already died after a bite from the same snake.

Convincing attendees of the scale of the problem was the meeting's primary goal; how to solve it came next. For 90
minutes, scientists and NGOs made short, impassioned speeches laying out the scope of the issue and the variety of
problems that they faced. At the centre of each presentation was the same message: we need more antivenom.

But the meeting was strained. Chippaux and representatives of the African Society of Venomology were disappointed
and angry that so few Africans had been invited to speak, even though the continent is where antivenom shortages are
most acute. “Our voice, our issues, were completely overlooked,” Chippaux says. Seated at the front of the room,
group members whispered and gestured frantically to each other, and Chippaux barely managed to keep them from
storming out.

They argue that the current antivenom shortage stems from Africa's reliance on foreign companies and governments
for its drugs, and that the only solution lies in building up infrastructure in Africa to produce its own high-quality
antivenom. Alagón views antivenomics as a dangerous diversion. “It's distracting many brilliant minds and resources
from improving antivenoms using existing technology,” he says. “Perhaps by 2050 this will be the standard technique,
but the problem is now.”

Williams and Gutiérrez take a middle ground. They feel that the problem requires attacks on all fronts. As well as
innovation, Gutiérrez calls for existing manufacturers to step up the production of current drugs.

There are signs of this happening already. Latin America has a long history of producing antivenoms both for its own
needs and for those of countries around the world, and even before Sanofi Pasteur announced that it would cease
production of Fav-Afrique, Costa Rica, Brazil and Mexico were testing antivenoms for different parts of Africa. One
product, EchiTAb-Plus-ICP, is produced in Costa Rica and is effective against a range of African viper species; it
completed clinical trials in 2014 and is now available for use. Several other antivenoms are expected to be ready in the
next two years. The drugs should be affordable: government labs in Costa Rica have already indicated that they will
not seek to make money from the antivenoms, just recoup their expenditures.

But beyond that, the way forward remains murky. Williams knows that the World Health Assembly meeting was just a
start. Inevitably, more meetings will be needed to produce a concrete action plan. But the discussion still gave him
and some others a renewed sense of hope that the international community is beginning to take snakebite seriously—
momentum they hope to build on by banging away at the topic at conferences and in the media.

Boyer says that whatever solution the snakebite field decides on, the most important thing is to “break the cycle of
antivenom failure in Africa”. Doing that requires building trust from governments, health-care workers and the public
that the drugs are safe and effective, that clinics will have antivenom on hand, and that people will be able to afford
treatment. “Without that, you've got nothing,” Boyer says. Educating local clinics on how to care for snakebite victims
and administer treatments in a timely manner would also go a long way towards preventing deaths.

Speaking of the devastation he saw in Benue, Nasidi says that something as simple as providing boots for poor
farmers would have helped to prevent much of the suffering and death that he witnessed. It is perhaps the ultimate
low-tech method of snakebite protection: shielding vulnerable human skin.

This article is reproduced with permission and was first published on August 30, 2016.


ABOUT THE AUTHOR(S)


Carrie Arnold

Carrie Arnold is a freelance science writer living in Virginia. She is a frequent contributor to Scientific American
Mind.

