
Universal Energy

Blueprints to Solve Resource Scarcity and Build a Better World

Cameron MacPherson
Copyright 2017 Cameron MacPherson

eBook/Kindle edition 1.0

Important Citation Information

Universal Energy is a work published in several media: paperback, electronic
document and a virtual online book. This version is an electronic document and has
all citations rendered as hyperlinks that take you directly to the source in question
in your web browser. A free companion PDF with citations may always be downloaded
online.

For more information about the author, visit the author's website.

Universal Energy is a solution to resource scarcity and climate change. It
is a framework of the best energy technologies we have available,
designed to work together from the ground up. By design, this can
dramatically increase the potential energy at our command while
lowering the costs and environmental impact of energy generation.

Making energy inexpensive and abundant allows us to synthetically

produce unlimited critical resources, solving our most pressing problems
while empowering us to build our civilization to greater heights. This
writing details blueprints for how Universal Energy can be achieved and
how we can use it to further revolutionize our way of life.


To you and the potential you hold to make the world a better place.

Table of Contents

Foreword: Why We Fight

Preface: Our Future, Bright and Hopeful

Chapter One: Mindset

Chapter Two: Thorium Power

Chapter Three: Solar Infrastructure

Chapter Four: Water and Hydrogen

Chapter Five: The National Aqueduct

Chapter Six: The World's Largest Battery

Chapter Seven: Everybody Eats

Chapter Eight: Materials and Recycling

Chapter Nine: The End of Resource Scarcity

Chapter Ten: Advanced Infrastructure

Chapter Eleven: A Future Worth Having

Appendix:

Universal Energy Cost Estimate

Solar Road Panel Pricing

Rails-to-Trails Conservancy Road Cost Estimate

The Coming Resource Crisis

Source and Citation Policy

Every gun that is made, every warship launched, every rocket fired signifies, in the final sense,
a theft from those who hunger and are not fed, those who are cold and are not clothed. This
world in arms is not spending money alone. It is spending the sweat of its laborers, the genius
of its scientists, the hopes of its children.

The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is
two electric power plants, each serving a town of 60,000 population. It is two fine, fully
equipped hospitals. It is some fifty miles of concrete pavement. We pay for a single fighter with
a half-million bushels of wheat. We pay for a single destroyer with new homes that could have
housed more than 8,000 people.

This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is
humanity hanging from a cross of iron.

President Dwight D. Eisenhower. April 16, 1953.

Foreword: Why We Fight

War is an eternal fixture of the human condition. It has dominated our history since
civilization became civilization; countless times we have met on battlefields and
slaughtered millions of people from practically every walk of life. As our methods
evolved from swords and arrows to rifles and aircraft, our efficiency in waging war has
only been honed further. From Hannibal to Hiroshima, the ever-increasing trail of
destruction left in our wake is testament to this fact.

Through these actions we have deemed this destruction justified, telling ourselves that
it was for the greater good, for our national security, even for human advancement.
And sometimes we've been right. War has achieved vital goals: winning independence,
ending American slavery and defeating truly wicked foes that sought to cast darkness
over our civilization. But while these ends may have been necessary, the means we took
to achieve them do not share the laudable morality they're often afforded. In that
mindset we have been deceived, for while war may be an effective tool for a necessary
end, its true nature is not a force for good. It is dark, primitive and evil.

Stripping away rhetoric and propaganda, war is simply the skillful execution of
organized violence as commanded by a governing authority. Of the millions slain by
this violence, none is a statistic on a chart, nor a faceless uniform or civilian.
They are people just like you and me. They are someone's child, someone's sibling,
someone's spouse or someone's friend. Of any war that's ever been fought, how
many of the combatants were legitimately bad people with a true desire to harm
others by their nature? Undoubtedly some were, but the reality is that most of
them were similar to us and were simply called upon to fight for their country, just as
we would if we were in their circumstances. The same is true of civilians caught in the
crossfire.

The nature of war blinds us to this realization because it forces us to view our
adversaries as sub-human, evil and deserving of death. That's why war propaganda
exists: society needs to be sold that message to support a war in the first place.
Accordingly, we remain desensitized to the suffering war brings because most conflicts
today aren't fought in our backyards and their impact on us is minimal, leaving the
ugliness of war out of sight and out of mind.

But whether or not we want to face it, the suffering war causes would be no less terrible
if it arrived at our doorsteps. Imagine someone you love cut down by a bullet or
dismembered by a bomb, however many years of love, experience and potential that
made them who they were gone in an instant. Imagine it happening beside you, your
last memory of them dying in your arms. Imagine this, because that's been the cold and
cruel reality faced by millions of people just like us, and their pain was no less real than
ours would be if we were in their position.

That's the reality war brings. That's the reality war ignores.

In the First World War, approximately 1.5 million French, British and German troops
were killed or wounded at the Battle of the Somme. That's more than every combat death
the United States has suffered in every war we've ever fought combined, and the Battle
of the Somme was fought over a time span of four months. Within other WWI battles such
as the Battle of Verdun, there were cases where more than 20,000 troops were killed in
a single day and hundreds of thousands were killed in a week.

The deaths of WWII are even more staggering, totaling some 70 million lives, which this
excellent, sobering video explains more eloquently than I ever could.

Loss of life at this scale is nothing short of horrific, and each death carries a horrible
story, because people killed in war are just that: people with hopes and dreams,
people you would likely enjoy talking to if you met them in person. One million
dead? Those are the children of two million parents. The friends of tens of millions
more. Each one suffering a deep sense of loss, forever knowing that their loved one met
a violent end that was most likely fearful, painful and alone. Even on an individual
basis the impact of that anguish is devastating, and not just to a family or a social
circle, but to entire communities.

Now imagine that happening a million times in a row.

Now imagine it happening 10 million times.
Now imagine it happening 70 million times.

As a construct, war is an instrument of extreme horror and death on an unrivaled scale.

By most objective accounts, it is the single worst thing in the world. Yet war's
presence throughout history is ubiquitous. How is this possible? War is this incredibly
terrible thing, yet we're so addicted to it. Even more than that, we expect it,
celebrate it, even embrace it. But that's crazy when you think about it, isn't it?
If we imagined being addicted to something awful like staging mass suicides or
habitually drinking toxic chemicals, we'd find it revolting.

Why on Earth would we engage in such a thing? Even more so, why would we do it to
the point of addiction? We'd consider it pathological, a function of madness. Yet here
we are beating the war drum once again, as humanity has done for millennia.

That's what makes war so maddening as a concept. It's clearly an abomination, yet at
the same time it's also an addiction of the human condition. This raises a critically
important question, one that we too rarely ask ourselves as a people: why do we do this
to each other? Why do we justify and embrace war and unleash its horror again and
again?
It's a question that prompts a hundred answers. Many people say religion. Many others
say poverty, greed, even language or cultural differences. Others still say it's just how
human nature works. To some degree, all of these answers are rooted in truth and can
be validated by history. Yet while they scratch the surface of truth, they are all
secondary to a deeper cause.

To explain what I mean by that, I'll share a quote from a brilliant Prussian general,
Carl von Clausewitz:

"War is not merely a political act but a real political instrument, a continuation of political
intercourse, a carrying out of the same by other means."

Simply stated, war is the embodiment of state policy through alternate means, in this
case hard power. But why does state policy seek to wage war to begin with? We point
to the previously mentioned subjects of religion, nationalism and ideology, but in the
realpolitik sense national leaders don't spend enormous fortunes and millions of lives on
wars fought over ideas. Leaders are usually shrewd, as they must be to obtain and thus
retain power, and no nation lasts long if its leaders seek returns on investments as poor
as ideological or theological conflict. Religion, ideology and nationalism help fan the
flames and earn public support once a call to arms is sounded, but they are not the
spark that ignites them.

The spark for state policy to wage war must instead be an investment in a
worthwhile goal, and with few exceptions, that goal is nigh always this:

Resources.
It's not an absolute, as nothing truly is (WWI, for example, wasn't really a resource
conflict, and plenty of Roman wars were fought for the glory of Rome), but in most
cases war is fought to acquire resources, retain resources or secure logistical advantages
in furtherance thereof, for the economic benefit of a society. Above all else, that is why
we fight.

This is a reality echoed throughout most of history:

We all know the Revolutionary War was fought to secure independence from British
colonialism. But why would Great Britain care about an American colony 3,000 miles
across the Atlantic? The honor of the empire? Not really; it was our taxes, products
and resources that Great Britain coveted so greatly. That's the entire reason for
imperialism: one nation colonizes another to take its resources for economic
advancement and/or to control the strategic value of an area to allow for greater
resource acquisition. Conversely, while our hatred of Great Britain's oppression was
indeed a motivating factor in our decision to revolt, it was our resources being
taken from us without fair representation in government that ultimately sparked
and sustained revolution.

It's common knowledge that the Civil War was fought over slavery. But did
Confederate states really secede because they liked owning people? It's not that cut and
dried. The southern economy was powered largely by cheap slave labor for farming and
agriculture, so when the legal status of slavery was threatened, southern states seceded
knowing their economy would crumble without it. On the other hand, did the
Union really care so much about slavery that it was willing to split the country
apart through a bloody, bitter conflict? The chilling immorality of slavery was certainly
a factor, but the deeper reality is that the powers that be within the Union were far more
concerned with how much agricultural and economic importance southern states had
to the nation than they ever were with the freedom of slaves.

In the 1930s, might we suppose Adolf Hitler rose to power with mass populist
support because he sought to advance racial purity? To Hitler perhaps, but it was
actually the economic depression caused by the Treaty of Versailles that prompted the
German people to rally around the man who promised to make Germany great again
(an actual quote), in turn sparking the worst conflict and act of genocide in history. Had
Germany been a thriving economy in the 1930s, we most likely would never have
known his name.

The Cold War was a lengthy chess match between two superpowers with enough
nuclear weapons to end life on the planet, but did we fear each other so much because
of governing ideology? Not likely. Considering the alliances the United States has
maintained with autocratic governments: Pinochet's Chile, military juntas in Brazil, the
Contras in Nicaragua, the mujahedeen in Afghanistan and a litany of Middle Eastern
despots throughout the 20th century, the United States has shown few reservations
about sidelining principle for profit. Why was the Soviet Union any different? Because
the aforementioned dictatorships served our economic interests and the Soviets did not.

The Cold War locked us and the Soviets in a zero-sum game, where every inch of
ground one gained was one the other lost, and vice versa. We postured over access to
oil and gas reserves, rare earth metals, shipping lanes and the location of military bases
to defend our means of resource acquisition. The Soviet Union wasn't an adversary
because we feared communism as a concept; it was an adversary because communism
presented an alternative to American influence, one that hamstrung our ability to
acquire resources for the economic benefit of our nation and our allies.

We are now fighting a "War on Terrorism" throughout the Middle East, but the
primary reason we care about the region at all is that it has large oil reserves and
strategic naval value. The first Iraq war started when Iraq invaded Kuwait, a country
with nearly 10% of the world's oil. We didn't respond in force because we cared about
Kuwaiti freedom; we and our Saudi allies didn't want that oil to fall into the hands of
Iraqi strongman Saddam Hussein. And while weapons of mass destruction were the
official justification for invading Iraq a second time, there remains strong suspicion
that its oil supplies and potential for post-war reconstruction projects were the main
target of the Bush administration, which had several high-level officials deeply tied
to the oil and gas industry.

These officials pushed for war by selling bogus intelligence to the United Nations about
weapons that didn't exist, while simultaneously promising to secretly award lucrative
contracts to business interests with close connections to the federal government after
the invasion. Concerning Iran, when cooler heads prevailed and walked back the
possibility of war over its nuclear program, half of Congress was furious, which just
so happened to be the half deepest in the oil industry's pockets. I think we can rule out
that being a coincidence.

These examples are selected from hundreds throughout history. From the Nazis, the
British and the French, to the Persians, the Mongols and the Romans, whether in this
decade, this century or this millennium, the motivations for war have largely remained
the same: resources, economy, logistics, power. Fallout is right: at the heart of it, war
never changes.

This situation exists because resources power the societies we have evolved to depend
on to survive. For example: where do you get food? Do you grow crops? Raise pigs,
cows or chickens? Fish from the oceans? Most likely, you do not. Unless you are in the
small minority, you get your food from the supermarket. You source water from a
faucet. You buy your clothes from a department store. If your safety is threatened, you
call a police officer, a government employee and a stranger, to come protect you.

Society removes the need to know how to survive on our own; all we need to know is
how to do our jobs. Our jobs help maintain our economy, which is why society
pays us a certain level of resources (money) based on the perceived value of
the job we perform. What we do vocationally has little to do with our survival; it just
helps make possible the society that supports our survival. If society were to disappear
tomorrow, most of us wouldn't know how to perform the functions that keep us alive.
For the most part, we'd be helpless.

Beneath our social system, the buildings we work in, the roads we drive on and the
supermarkets we buy our food from, there is a resource-intensive system that makes
our lives possible, and it is a system we largely ignore. Ever wonder what
happens whenever you flush the toilet? There are 7.4 billion people on the planet, and
together we produce at least 7.4 billion units of bodily waste every day. That rises to
some 2.7 trillion per year and 27 trillion per decade, and that's just the waste produced
by our bodies. Now think about every piece of trash that's thrown away. Every gallon
of gas delivered to fuel stations. Every product manufactured and stocked on a shelf.
The energy that heats our homes. The water poured from our faucets.
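As a back-of-envelope sketch of the scale involved (treating the figures in this paragraph as rough approximations, with at least one waste "unit" per person per day):

```python
# Back-of-envelope check: worldwide bodily-waste "units" over time,
# assuming at least one unit per person per day (figures from the text).
population = 7.4e9        # people on the planet

per_day = population      # at least 7.4 billion units daily
per_year = per_day * 365  # roughly 2.7 trillion per year
per_decade = per_year * 10  # roughly 27 trillion per decade

print(f"per year:   {per_year:.2e}")
print(f"per decade: {per_decade:.2e}")
```

And that is before counting trash, fuel, manufactured goods, heating and water.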

All of those functions are made possible through systems powered by resources. Our
civilization requires immense quantities of resources to operate, resources that have to
come from somewhere. Most of us are oblivious to this need because society and its
processes are managed for us by people we largely don't interact with. But just
because it's out of sight and mind doesn't mean these systems and resource
requirements aren't present. We might ignore their true cost and take them for
granted, yet without resources and the economy they power, society can't operate.
And if it can't operate, it can't exist.

The people who manage our society understand this reality, as they must in order to
continue its existence. Thus it's only natural that our clandestine services work to
further the foreign agendas of corporations or support dictatorships that sell us cheap
resources while turning a blind eye to their human rights abuses. Many react to this
with surprise, but it should be assumed that a state is going to operate for its own
benefit above all else. And as resources power our society and corporations provide the
jobs that sustain a healthy economy, it's within our best interests to receive as much of
the global pie as possible, because if we don't, someone else will take it at our expense.
Whether we like it or not, that's simply the way the world presently works.

The moral implications of actions taken under this mindset might be significant (hence
why clandestine services are clandestine), but that's the consequence of acquiring
resources in a finite, zero-sum world, as the alternative is distinctly worse. For it is the
allocation of resources that powers a nation and its economy, and if resource supplies
fall too low, its economy stagnates into decline or fails outright, which is precisely why
states go to war when resources become scarce.

These facts being as they are, some would correctly point out that there are
sometimes legitimately bad actors who can convince others to commit violence
regardless of the economic climate. It's also true that religion, nationalism and
xenophobia motivate people to fight each other as a consequence of human
nature. But conflicts caused by these things are the exceptions, not the rule, all the
more so on larger scales.

The divisive factors that lead us to become hostile toward those different from ourselves
do not manifest in force when we are flourishing; they manifest when we are starving.
Universal prosperity always has been and always will be the natural enemy of war,
while poverty and social decline will always be its harbinger, and both prosperity and
poverty are directly related to the resources available to and deployed by a state and
its economy.

Find a person who has enough food, a good home and a means for recreation and
self-advancement, and you will see a person who most likely doesn't want to pick up a
weapon and kill a stranger in a foreign land. Yet if a person lacks those things, in
perception or reality, their motivation to engage in conflict rises, a drive that can be
focused in the form of enmity toward any resource-holding actor with the right
encouragement. This is true of most every person, and as such it is also true of the
governments of men.

Concerning a state's decision to engage in warfare, it's prohibitively difficult (especially
in western states) to fight a war without public support. And it's nearly impossible to
motivate a nation to go to war if the populace is content and prosperous, barring the
entry of a threat to rally around. Could you imagine the invasion of Iraq or Afghanistan
if the 9/11 attacks had never happened? From there, would Al-Qaeda, the Taliban or ISIS
even exist if the Middle East were economically developed? Not likely; they'd have
been quickly defeated and incarcerated, because their society would have the means and
public support to stop them, which is what happens to violent antagonists in
prosperous societies.

Nobody is going to convince a rational person to commit terrorism if they and their
communities are secure in life, but in the context of social strife, propaganda can twist
causes into appearing noble and warranting of violence. History has revealed this
through the rise and fall of most every movement, society and civilization, for it nearly
always has been, and will be, resources and economy, and the status thereof, that
ultimately draw the line between prosperity and poverty, and by virtue of that, war
and peace.

That is the true nature of war: conflict in furtherance of our endless pursuit of finite
resources and economic advancement, ensuring that our interests are met first while we
disregard the fates of our foreign brethren. It is this pursuit that determines the
application of state policy, and in the context of our present state of affairs, it is what
will determine our future as a species. For the resources that power our
ever-advancing civilization are running out on a global scale, as is the time we have left
to solve this problem before it sparks ever-larger conflicts. Solving it is exactly what
Universal Energy is designed to do.

Once we realize we deserve a bright future, letting go of our dark past is the best
choice we ever make.

- Roy T. Bennett

Preface: Our Future, Bright and Hopeful

I'd like to ask you a question: when was the last time you felt great about the world?
Not something that put you in a better mood or brightened your day, but something
that made you truly hopeful for humanity's future. Something that made you believe,
deep down, that the world we wake up in tomorrow will be better than the one we
woke up in today. When was the last time you could say that? For most of us, I bet it's
been a while.

That's because it's hard not to feel pessimistic in the face of a future that, right now,
feels pretty depressing. And while the perpetual bleakness of our 24-hour news cycle
definitely doesn't help, our instincts are nonetheless not wrong.

How so? As a species, humanity has existed for at least 200,000 years. It took us that
entire length of time to reach two billion people, which occurred around 1930. Yet we
reached seven billion in 2011, and we're on track to hit nearly ten billion by 2050. Put
another way, humans have existed for a timespan one hundred times greater than the
timespan between today and the height of the Roman empire. It took 99.9% of that time
to reach roughly a quarter of our population today. Yet in the past 0.1% of our history,
our population has nearly quadrupled, and it is on track to quintuple by 2050, seeing
our numbers grow fivefold in roughly the last 1/2000th of our history.
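The ratios in this paragraph can be checked with simple arithmetic (all inputs are the approximate figures used above):

```python
# Rough timeline arithmetic behind the population claims (approximate figures).
human_history_years = 200_000  # years humanity has existed
years_since_rome = 2_000       # height of the Roman empire, ~2,000 years ago

print(human_history_years / years_since_rome)  # 100: a hundred times longer

pop_1930 = 2e9                 # ~2 billion people around 1930
pop_2050 = 10e9                # ~10 billion projected by 2050
window = 2050 - 1930           # the 120-year growth window

print(pop_2050 / pop_1930)     # 5: a fivefold increase
print(human_history_years / window)  # ~1,667: roughly 1/2000th of our history
```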

Our rise as a civilization has been powered by natural resources, so as our population
has exploded, so has our rate of resource consumption. Today, we are consuming so
many resources on such a large scale that it's weakened our planet's ability to support
our way of life. Plant and animal species are dying off to the extent that scientists
believe we've caused Earth's sixth great extinction event, and they expect most
biodiversity will be gone within the next 200-300 years. Global fish stocks are depleted
upwards of 80%. Since the start of the Industrial Revolution, we've cut down more than
half of the world's forests. And worldwide drought is increasing to the point where
billions of people risk running out of water.

Then there's oil. Beyond powering 90% of global transportation, oil's use is
ubiquitous within industry, agriculture and manufacturing. Oil production from
conventional reserves has almost certainly peaked, forcing ever-greater reliance on
unconventional supplies like shale and tar sands. While abundant in the short term,
oil from these sources is significantly more expensive to extract, well into the trillions
of dollars were it to meet global demand exclusively. When unconventional oil becomes
our primary source of supply, these added costs stand to make oil more expensive.
Advances in electric vehicles and fuel efficiency will naturally help, but billions of
people worldwide are entering the middle class and consuming oil on a greatly
increased scale. Once dramatically accelerating demand is applied to a dwindling
global resource, its price spikes skyward.

As water and oil are vital to producing critical goods, food first and foremost, these
circumstances risk grinding the global economy, and the state of global security, to a
halt. Looking at our situation today, the United States has been at war for the past
sixteen years as conflicts continue or threaten across much of the globe. Liberal
democracies are retreating in the face of populist, nationalistic movements that view
outside nations as adversaries instead of allies. Due to causes ranging from
environment to conflict, more than 65 million people are displaced globally, creating
millions of refugees who arrive in other nations as often unwelcome and distrusted
aliens. As their numbers increase by 12 million every year, the tensions of mass
migration only grow as homegrown terrorism spreads and worldwide economic
prospects dim.

Recognizing that 85% of humanity lives in the driest half of the planet, it's clear how
adding a state of global resource scarcity, and its resulting economic fallout, to the mix
would have a deeply destructive effect on our civilization. When we recognize further
that our planet is home to an arsenal of 15,000 nuclear weapons in the hands of nine
countries, it becomes clearer still that this is a problem that cannot be ignored any
longer, for it risks spiraling out of control and reducing our world to ashes.

The combined threats presented by global resource scarcity, climate change and rapid
population growth are both serious and imminent, and those interested in learning
more about their impact are encouraged to review The Coming Resource Crisis, a paper
included within the Appendix of this writing. But Universal Energy is not about
reviewing crises, it's about solving them. And it starts with a story of technology.

Ever since we invented machines that could perform at far greater levels of speed and
precision than human hands, we've rapidly increased our capability to build
advanced systems. Indeed, a smartphone-wielding bartender in Mexico City today has
instantaneous access to a wealth of information that the world's governments,
universities and corporations combined didn't have when The Simpsons first aired.
Technology made this possible. We've now reached the point where large, highly
sophisticated systems that 30 years ago took months to build can today be
manufactured on automated assembly lines in a matter of hours. This is an ability that
can be extended to systems that generate energy, one that becomes all the more
promising as we've recently crossed barriers that allow us to generate energy at higher
scales than we ever could previously.

But what if we took things a step further? What if, instead of just mass-producing
energy technologies, we also built them to work together by design? In the era of the
smartphone, why not have a smart power plant or a smart power grid? What happens
when we take all of the technical advancements we've made recently, within and
outside of energy production, and combine them into one intelligent system? Universal
Energy allows us to find out.

Universal Energy is a framework of the best energy technologies we presently have
available, designed to work together in a way that makes them greater than the sum of
their parts in terms of output, efficiency and effectiveness. In doing so, it allows us to
lower the price of energy to the point where it becomes feasible, for the first time in
our history, to synthetically produce critical resources on an unlimited scale. By
unlimited, I mean that no matter how many resources one consumes, the framework
will always produce more, faster than the rate of consumption, a feature by design.

Universal Energy outlines how this framework can deliver that goal, and provides
blueprints that illustrate how it can be used thereafter to solve the pressing problems of
our time.

This framework is environmentally friendly.
This framework is affordable.
This framework is sustainably powered.
This framework is built with technologies that exist today.
And this framework can be deployed anywhere in the world to functionally end
resource scarcity, and end it with finality.

If we can unchain ourselves from the eternal problem of resource scarcity, we free up
the immense resources we currently devote to putting out its fires. Those become
resources we can instead invest in social advancement, powering the upward spiral.
This doesn't cost us more; it costs us less. Not just in terms of money, but also in terms
of concentration. If our society isn't constantly made to surf this tidal wave of social
maladies, we can devote greater attention to improving our lives and our civilization as
a whole.

With these technologies and the framework they power, we can change the world.
And we can build it better, stronger and brighter than before.

You never change things by fighting the existing reality. To change something, build a new
model that makes the existing model obsolete.

- R. Buckminster Fuller

Chapter One: Mindset

Since the beginning of our time, civilization and resources have been inextricably
linked, powering and making possible every part of our existence. As our existence has
evolved and expanded, so has our relationship with resources and our need for them,
which grows ever greater as we seek to continue building advanced, global economies.

Resources have been the key to nearly every advancement we have ever made, and
conversely their scarcity has been the cause of our most pressing social problems. As
a result of this relationship, the societies and civilizations of mankind have all
attempted to mitigate scarcity through varied constructs: laws and social policies;
ideologies and political movements; technological innovations; the rewriting of
borders; and, ultimately, war. Nearly always, it has come down to managing
resource scarcity. Yet these constructs have almost universally sought success by
addressing the varied symptoms of resource scarcity, rarely the core problem itself.

That is why I believe they have failed.

A true solution doesn't cure the symptoms. It cures the disease. In the case of resource
scarcity, our cure comes not through social constructs, but through technology, and
more importantly, how we can use it.

Technology provides the solution to resource scarcity because it is first and foremost a
catalyst to supply, as technology allows us to extract resources from the world around
us while also making their extraction more efficient and less expensive. Throughout
history, we have developed and depended on technology to solve resource shortages,
leading to breakthroughs that have changed the world, even if we didn't realize it at
the time.

For example:

The years following WWII gave rise to the threat of the first global resource crisis:
food scarcity. Humanity was rapidly expanding in population and feeding the
planet was becoming more difficult. This crisis was detailed in The Limits to Growth,
a 1972 report that predicted catastrophic consequences for humanity should it fail to
curb population expansion. These predictions were well reasoned, yet they never came
to fruition. Why not? Technology came to the rescue through industrialized
farming techniques, high-performance fertilizers and genetically modified crops, all
of which increased food production to the extent that Earth now supports 7.4 billion
people and counting.

In the 1800s, aluminum was extremely rare, considered among the most valuable
metals in the world. Today we throw it in waste receptacles. What made the
difference? A method called electrolysis, which allowed us to inexpensively extract
aluminum from its naturally occurring form, bauxite. This method made aluminum
extraction easy and inexpensive, dropping its cost almost to the extent of
irrelevance. Next time you throw away that soda can, though, realize that less than
150 years ago it was worth its weight in silver.

The need to obtain water by traveling to a location and carrying it back used to be a
massive time expenditure for everyone within society, a problem that still exists
within much of the developing world. Yet for the developed world, the invention of
modern plumbing brought water to us on-demand. This collectively saved people
trillions of hours in free time and removed a major impediment to economic

Sugarcane was introduced to Mediterranean regions around the 7th century and
thereafter remained a major luxury commodity. As a valuable cash crop, sugar
was heavily taxed and was a revenue source for government, making it a driver of
the slave trade. Yet when technology introduced the steam engine and methods of
vaporization in the late 1800s, the cost to refine sugar plummeted to less than 5% of
its former price. It is now ubiquitous in most foods today.

In each of these examples, a once-scarce resource was made both abundant and
inexpensive as a function of technology, for technology has the unique ability to
expand the scale of resource production while also lowering costs. But in the past,
technology only really improved our ability to extract resources that were naturally
present, which over time has proved to be unsustainable as natural resource supplies
eventually dwindle.

Yet what if we shifted gears and developed systems that could instead
synthesize resources? Today, advances in technology allow us to do just that.

Of the breakthroughs we've seen recently, many have occurred within computer
modeling and information technologies, polymer and material sciences and large-scale
manufacturing. This has presented a critical mass of technical capabilities in some
important areas: automation and precision of construction, speed and depth of
computer processing, quality assurance, strength of materials and virtual modeling that
allow us to engineer solutions to problems on much larger scales than we could before.
To put this in perspective, most nuclear power plants in the United States were built
before 1978. That means they were designed and built without the aid of a calculator. The
same is true with most power plants and larger-scale social infrastructure: the bridges,
skyscrapers and stadiums of our society. Yet today, we have the capability to design a
nuclear reactor on a computer and build it on an assembly line, like one does a toaster.

To be sure, we can build many things with these increased capabilities. But the starting
point is to build a system that can sustainably produce resources. And not just any
resources, but the five most critical:

Electricity, water, food, fuel and building materials.

Above all else, these are the most important resources for our civilization to operate.
These are the resources that are so essential to powering our advanced economies, and
these are the resources most likely to spark conflict when they become scarce.

The purpose of Universal Energy is to act as this resource-producing system, and it
works by leveraging three critical concepts: standardization, modularity and, with
these two in place, cogeneration.

To explain how, we'll take a minute to first explain what these concepts are and why
they're important. For reference, standardization is a way of building something to a
universal standard that's widely adopted society-wide (for example: all of your
electronic devices are charged by connecting a standardized type of plug into a
standardized type of wall outlet). Modularity is a way of building something that
features the ability to rapidly change configuration or scale in sophistication and size
using a standardized means (think Legos or Lincoln Logs).

Standardization and modularity allow us to identify a superior technology and deploy
it in a way that can be mass-produced, providing easy replacement of parts and driving
down costs. Accordingly, recent advances in technology enable us to apply these
concepts on larger scales, especially within energy generation.

An example: most power plants today are constructed ad-hoc, meaning they were
designed and built as unique systems. They may have a standardized pump, width of
walkway or type of wiring, but no plant is identical to another. Moreover, our energy
infrastructure and civilization as a whole is powered by a hodgepodge of sources:
petroleum (oil), coal, solar, wind, uranium, natural gas, geological heat, hydroelectric,
corn ethanol and biomass.

Our energy production systems are arbitrarily powered. They do not work with each
other; they don't even talk to one another. And they are built as unique entities with
only minimal standardization and even less modularity in design.

These factors make energy systems highly expensive to build and operate. They
further prevent us from rapidly scaling them in size, which limits the availability of
energy to society and thus raises its price. There is a better way.

By designing the technologies within Universal Energy to incorporate modularity and
standardization, we get to leverage a concept called cogeneration, which is to use the
waste energy of one technology to power something else. For example: diverting the
waste heat energy from a coastal power plant to power a nearby facility that desalinates
seawater into fresh water. Desalination is presently a costly and energy-intensive
process, yet when it's powered primarily by waste energy, energy requirements and
costs drop dramatically.

When each technology within the framework is designed to easily connect to each other
and work together from the ground up while maintaining the ability to rapidly scale in
size on-demand, it empowers us to push the bounds of cogeneration to their extremes.
And that presents circumstances where our energy infrastructure can produce both
energy and resources in the same footprint under unrivaled efficiency.

This will do to energy and resource production what technology has done to all other
consumer products: increase availability, lower prices and advance quality over time
by scaling the learning curve (the idea that improvements and cost reductions increase
with time and experience). The tricky part to making this approach successful is
identifying technologies that can generate energy in the way we need them to, which
we'll define as meeting the following criteria:

1. The technology must be able to generate immense energy at low cost. In order to
synthesize enough resources to satisfy all of our requirements, we'll need to
generate an effectively unlimited supply of energy. This means that regardless of
how much energy is consumed, it will always be regenerated at a rate faster than that
of consumption. This requirement will set an initial target of 300% of our national
annual electricity consumption (3,760 terawatt-hours as of 2015), coming to a total of
11.28 trillion kilowatt-hours generated annually.

The cost of this energy is intended to be no more than two cents per kilowatt-hour,
down from today's 10.44-cent average. Reaching this target would provide enough
energy at a low enough cost to allow large-scale synthetic production of
civilization's five most critical resources. (For those inclined, a detailed pricing
model is included in this writing's Appendix).
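As a back-of-the-envelope check on those figures (my own sketch using only the numbers quoted above, not part of the Appendix's formal pricing model), the generation target and its annual cost can be computed directly:

```python
# Back-of-the-envelope check of the Universal Energy generation target.
# Figures from the text: 3,760 TWh of US electricity consumed in 2015,
# a 300% target, and a goal price of two cents per kilowatt-hour.

US_CONSUMPTION_TWH = 3_760       # annual US electricity consumption (2015)
TARGET_MULTIPLIER = 3            # 300% of consumption
KWH_PER_TWH = 1_000_000_000      # 1 terawatt-hour = 1 billion kilowatt-hours

target_twh = US_CONSUMPTION_TWH * TARGET_MULTIPLIER
target_kwh = target_twh * KWH_PER_TWH

print(f"Target generation: {target_twh:,} TWh")     # 11,280 TWh
print(f"  = {target_kwh / 1e12:.2f} trillion kWh")  # 11.28 trillion kWh

# Annual cost of that energy at the two-cent goal vs. today's average:
for label, cents in [("at 2.00 cents/kWh", 2.00), ("at 10.44 cents/kWh", 10.44)]:
    print(f"Cost {label}: ${target_kwh * cents / 100 / 1e9:,.0f} billion/year")
```

The arithmetic confirms the text's 11.28 trillion kWh figure, and shows why the two-cent price target matters: the same energy costs several times more at today's average rate.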

2. Its energy source must be abundant and long-term sustainable. If an energy
source and its corresponding extraction methods aren't sustainably available after
widespread adoption, we'll eventually find ourselves in the same position we are in
now. For this reason, any technology we employ to provide Universal Energy will
need to be viable for the long term, quantified for our purposes to be at least 100,000
years.
3. The technology must be safe and environmentally friendly. The energy
production system, its fuel and any waste must present negligible environmental
impact and must further be carbon-neutral, meaning it does not emit carbon dioxide
and adversely impact the climate. Additionally, it must not leave toxic waste that
cannot be rendered inert in a short time period.

4. The technology must be affordable to develop and use. Whatever benefits are
brought by advances in energy technology will be irrelevant if they are not
affordable, presenting the requirement for all energy-generating systems to have a
realistic price tag.

5. The technology must be flexible in location. There are energy technologies that
can fit the previous four requirements, but many can only function in limited areas
and thus cannot be deployed anywhere with certainty. Universal deployability is
vital for inclusion into a modular and standardized energy framework, especially
since many of the consequences of resource scarcity exist in areas that are
geographically remote and/or with rugged terrain.

6. The technology must be deployable rapidly. Considering the state of the world
today, it's tough to see how we'll be in any sort of good shape in 20-30 years if
resource scarcity isn't solved in the next 5-15. The solution to this problem needs to
get here soon, or it won't matter in the end.

Universal Energy meets these requirements through a strategic deployment of four
technologies: thorium, solar and wind, all tied together through a revolutionary use of
water. Before detailing how they all work together, we'll briefly go over how each
integrates within Universal Energy.

Liquid Fluoride Thorium Reactors (LFTR): An advanced type of nuclear reactor that
avoids nearly all complications with our current approach to atomic power. Instead of
being fueled by uranium in a traditional sense, they use the element thorium to
generate energy, making LFTRs far cleaner, far safer and far more resistant to
weaponization than reactors used today. Thorium is as common as lead, making it
thousands of times more abundant than uranium and thus long-term sustainable. It has
a minimal and short-term waste footprint. And because thorium reactors can be built to
a small size and don't operate under extreme pressure, they can cost a fraction of other
approaches to atomic energy.

Solar Infrastructure: The potential presented by solar power is arguably limitless, yet
a common problem with solar today is that its use is piecemeal: it's adopted by
individual businesses, landowners or cities as they wish, but there's not really a
standardized method to deploy it nationwide on a large scale. Yet by integrating solar
cells directly within public infrastructure like roads, bridges, water pipelines and rail
lines, it gives us a unique location to install standardized forms of solar panels that
critically lack the requirement to purchase expensive additional land.

Not only does this allow us to use the high costs of constructing public infrastructure to
offset the costs of solar power (in certain cases, building a solar road in a city can cost
less than building an asphalt road), it helps build a smarter and more resilient electric
grid.

Water and Hydrogen: An unlimited supply of inexpensive energy permits us to
extract both fresh water and hydrogen fuel from seawater on a large scale. Our ability
to do this today is well-known to both science and industry; it's just too expensive with
current energy costs. Universal Energy changes that, and allows us to desalinate
billions of gallons of fresh water largely through the waste heat energy generated by
LFTRs, water that is in turn transported by a component of Universal Energy that's
referred to as the National Aqueduct.

The National Aqueduct is a proposed nationwide delivery system for desalinated fresh
water that's built alongside the pre-cleared and publicly owned land at the sides of our
national highway networks. Yet delivery is only one of its features. This system is
proposed to be built through prefabricated, modular pipelines that have solar panels on
top of them and hydroelectric turbines within them, allowing water pipelines to
passively generate immense levels of energy. Most critically, any excess energy can be
used to keep billions of gallons of water at higher temperature, presenting the ability to
use our water supply as a giant battery by way of thermoelectricity (using heat energy
to generate electricity).

Supplemental Wind: Unlike solar power, wind can work at any time of day or night.
And although more standardized than solar, wind's implementation requires as much
land if not more, and presents unique environmental complications (such as
greatly increased bird deaths). However, as wind can be deployed effectively
anywhere, Universal Energy's strategy is to utilize wind specifically in areas where it
can either provide a supplemental bonus, make up for a lack of suitability for other
technologies, or just as importantly, generate excess energy to heat water within the
National Aqueduct.


Universal Energy deploys these technologies through a unique strategy. As an
inexpensive, safe, eco-friendly and immensely powerful source of both electricity and
heat, Liquid Fluoride Thorium Reactors provide the backbone of the framework, the
potential of which is significantly extended by solar power integrated within both
municipal infrastructure and the National Aqueduct. The excess electricity and waste
heat energy generated by LFTRs are leveraged by water desalination facilities to
produce large amounts of both hydrogen fuel and fresh water with little overhead. This
fresh water is transported nationwide within the National Aqueduct to generate
energy and serve as a massive eco-friendly battery for renewable energy. Wind is
deployed lastly to supplement energy generation nationwide while also providing an
additional source of energy to make the National Aqueduct even more powerful than it
is by itself.

With these systems in place, we can dramatically increase our capability to generate
energy on a nationwide scale, energy that can be harnessed to not only power extensive
indoor farming networks, but also the synthetic production of building materials for
construction and commercial industry. Once complete, this framework gives us
nigh-unlimited supplies of energy, water, food, electricity, fuel and building materials.
Combined, this enables us to advance our capabilities within manufacturing and
infrastructure to uncharted levels, so that we can build our civilization to ever-greater
heights in a world spared of the constraints of finite energy and limited resources.

Universal Energy seeks to change the fundamental rules of our existence and
systematically dismantle the challenges that have held us back for millennia. If
accomplished, we can position ourselves for a future, one shared by our children and
generations hence, where we not only survive on this planet, but permanently thrive.
And as we walk this path of continual upward advancement, we can reach goals that
were never before achievable, and bypass the limitations of our world as we once
knew it.

From here, we'll go through the technologies within Universal Energy in order, so we
can see how they make this all possible while further revolutionizing our way of life.

In the years since man unlocked the power stored within the atom, the world has made
progress, halting, but effective, toward bringing that power under human control. The challenge
may be our salvation. As we begin to master the destructive potentialities of modern science, we
move toward a new era in which science can fulfill its creative promise and help bring into
existence the happiest society the world has ever known.

- John F. Kennedy

Chapter Two: Thorium Power

In this chapter, we'll go over the first technology that provides the electricity-generating
base of the Universal Energy framework. It's an advanced type of nuclear reactor
known as a Liquid Fluoride Thorium Reactor, and what makes it special is that it's
both ultra-clean and ultra-powerful, and it allows us to avoid nearly every complication
we face with nuclear power today. We'll call it a LFTR for short, and take a minute to
review its capabilities in terms that (don't worry!) aren't too technical. Afterward, we'll
shift gears to the other technologies that work alongside this reactor to generate enough
energy to produce critical resources at acceptable costs.


To put it mildly, nuclear power does not have a ton of public support today. And it's
understandable. Considering the meltdown and radiation leaks at Fukushima, the
unrivaled toxicity and longevity of nuclear waste, and the fact that nuclear power
plants can also be used to make nuclear bombs, much of the ire is justified. But a point
of importance is that these problems have very little to do with nuclear power as a
concept. These problems, rather, stem from how nuclear power plants are constructed
and, more importantly, how they are fueled. A LFTR is fueled by the element thorium
as opposed to uranium to provide clean and safe nuclear power that avoids most
obstacles to nuclear power today. Here's a short list of its benefits:

LFTRs are highly efficient, hundreds of times more so than Pressurized Water
Reactors (the uranium-fueled reactors that we use today).

LFTRs are extremely safe. Because their fuel and reactant is liquid and they are
not pressurized, it is physically impossible for the catastrophic results of a
traditional meltdown to occur.

LFTRs produce far less waste than Pressurized Water Reactors and can also
consume nuclear waste and weapons-grade nuclear material as fuel. Of what
small amounts of waste remain, it takes only decades for it to become
non-radioactive, as opposed to thousands of years with uranium-fueled reactors.

The primary fuel supply of LFTRs, thorium, is highly abundant, about as
common as lead, making it thousands of times more plentiful than fuel-grade
uranium (only about 0.7% of all uranium on Earth).

The thorium fuel cycle in LFTRs is prohibitively difficult to weaponize, and even
if one were able to make a nuclear weapon from the reactor, it would likely be
useless to deploy in a conflict.

As a result of their efficiency and safety, LFTRs can be much smaller than
Pressurized Water Reactors, around the size of a house as opposed to a multi-acre
compound that requires a large buffer zone in case of emergencies.

These factors make LFTRs far less expensive to build than Pressurized Water
Reactors, and due to their small size, allow them to be mass-produced on
assembly lines in a modular and standardized capacity, meaning that nuclear
reactors can become iterations of a product model as opposed to custom-built
projects.

LFTRs are superior to today's Pressurized Water Reactors in nearly every way they
could be, and their proven designs have been known to science for decades. But that
prompts a natural question: why aren't we using them today? To answer that, we'll
need to provide some background, which is easier to understand by first reviewing a
few terms surrounding atomic energy. I promise this isn't a textbook! It's just a quick
refresher if you're not familiar with nuclear power (or you're like me and slacked off
in high-school science classes).

Atom: the building block of matter and everything we see and touch. Atoms generally
have three types of particles within them. The center of the atom houses the nucleus,
which is comprised of a given number of protons and neutrons, and is orbited by
negatively charged electrons.

The different elements in the world are made up of atoms, and each element has a
specific atomic arrangement of these particles, as shown in the periodic table of
elements. Elements and the nature of their atomic composition are the basis of all
chemistry and nuclear science.

Radioactive decay: the process in which an unstable atom spontaneously emits
radiation in the form of atomic particles or energy. An element or substance that
naturally undergoes radioactive decay is considered to be radioactive.

Isotope: a variant of an element with a different number of neutrons in its nucleus,
often unstable, and sometimes created through radioactive decay and/or something
called transmutation (explained next). Isotopes have number designations reflective of
their atomic composition. For example: uranium-233 and uranium-235 are isotopes of
the element uranium.

Transmutation: the process in which one isotope of an element becomes an isotope of
another element through a nuclear process. For example: thorium-232 becomes
uranium-233 inside of a LFTR after absorbing a neutron.

Fission: the splitting of an atom's nucleus, releasing tremendous energy and fission
products (usually radiation + isotopes of other elements). For example: uranium-fueled
power plants today work by using a neutron to split the nucleus of uranium-235 into
krypton-92 and barium-141.

Fusion: the joining of atomic nuclei together to form a new element, releasing even
more energy per unit of fuel than fission. For example: fusing tritium and deuterium
(isotopes of hydrogen) into helium, which is how our sun works.

Fissile fuel: an isotope of an element that can undergo fission directly inside a reactor.
Uranium-233 and uranium-235 are fissile fuels.

Fertile fuel: an isotope of an element that can't undergo fission directly, but can if
transmuted into a fissile fuel. Thorium is a fertile fuel.

Breeding: a process in certain reactor designs that uses transmutation to transform a
fertile fuel into a fissile fuel. Any reactor that undergoes the breeding process is
considered a breeder reactor. LFTRs are breeder reactors.

Pressurized Water Reactor (PWR): 1950s-era reactor designs that use highly
pressurized water to help regulate and make possible a fission reaction inside a reactor
core. Pressurized Water Reactors use solid fuel, and are the most common nuclear
reactors operating today.

Molten Salt Reactor (MSR): advanced reactor designs that use a special type of
non-radioactive salt, which becomes liquid at high temperatures, to act as both a
moderator for the reactor and a carrier mechanism for nuclear fuel. They operate at
standard atmospheric pressure and have a liquid fuel supply. MSRs can be fueled by
most any nuclear material, much like breeder reactors. A LFTR is a highly efficient
form of a Molten Salt Reactor.

With these terms defined, we'll from here go over a bit of our history with atomic
energy: specifically, why we aren't using LFTRs to generate power today.


Most nuclear reactors, including those within the United States, are fueled by
uranium-235, an isotope representing only about 0.7% of all naturally existing uranium
on Earth. Uranium-235 is a fissile fuel, meaning that the possibility exists for its atomic
nucleus to split into isotopes of other elements if hit by a fast-moving neutron, releasing
levels of energy that are millions of times greater than any chemical fuel source.

For comparison: the energy released by burning a molecule of methane is 9.6 eV
(electron volts). The fissioning of a single uranium-235 atom releases 200 MeV (million
electron volts) of energy. That's a big difference.
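To make that difference concrete, here is a quick sketch using only the two figures quoted above (my own illustration, not from this book):

```python
# Energy released per reaction, from the comparison in the text.
METHANE_COMBUSTION_EV = 9.6     # burning one molecule of methane (eV)
U235_FISSION_EV = 200_000_000   # fissioning one uranium-235 atom (200 MeV in eV)

ratio = U235_FISSION_EV / METHANE_COMBUSTION_EV
print(f"One fission event releases about {ratio:,.0f} times the energy "
      f"of burning one methane molecule")
```

The ratio works out to roughly 20 million to one, which is the "millions of times greater" figure cited for nuclear versus chemical fuel sources.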

For nuclear fission to work for energy production, it involves a concept known as
criticality, a threshold at which there is enough fissionable material present for the
reaction to sustain itself (a critical mass). As it exists in nature, uranium is not capable
of sustained fission, yet the isotopes uranium-233 and uranium-235 are if enriched to
sufficient percentages within a fuel supply. If the reaction is sustained in a controlled
environment like a reactor, it produces energy over long periods of time. This heats
water, which creates steam that turns a turbine and generates electricity.
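The idea of criticality lends itself to a simple numerical sketch. The following toy model (my own illustration with hypothetical values, not a figure from this book) shows how a chain reaction dies out, sustains itself, or grows depending on the average number of neutrons from each fission that go on to cause another fission:

```python
# Toy model of criticality. k is the effective neutron multiplication factor:
# the average number of neutrons per fission that trigger another fission.
# k < 1: sub-critical (reaction dies out); k = 1: critical (self-sustaining);
# k > 1: super-critical (reaction grows exponentially).

def neutron_population(k: float, generations: int, start: int = 1000) -> float:
    """Neutron count after a given number of fission generations."""
    return start * (k ** generations)

for k, label in [(0.95, "sub-critical"), (1.00, "critical"), (1.05, "super-critical")]:
    n = neutron_population(k, generations=100)
    print(f"k = {k:.2f} ({label}): {n:,.0f} neutrons after 100 generations")
```

Even small departures from k = 1 compound dramatically over many generations, which is why reactors are engineered to hold the reaction at criticality, while weapons deliberately push far beyond it.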

But during World War II, U.S. government scientists discovered that certain fissile
materials had a unique property: if enriched highly enough, they could reach a
super-critical state. And if rapidly bombarded with neutrons, they could create a
nuclear detonation, the most powerful man-made force in existence.

As it was two of those detonations that ended World War II, the significance of atomic
weaponry could not be discounted, especially once the Cold War unfolded in earnest.
Thus, as civilian nuclear power developed as an energy source, so did the development
of nuclear arms and their delivery mechanisms. These two sectors would eventually
converge to ensure our continued use of uranium-235 as fuel.

For several reasons, uranium-235 is a less-than-ideal fuel for nuclear power. For one,
it's very reactive, akin to filling a car's gas tank with jet fuel. It's also primarily
employed within Light Water Reactors (and to a lesser degree, Heavy Water Reactors).
Both Light and Heavy Water Reactors are Pressurized Water Reactors, which
pressurize water to a level equivalent to a mile below the ocean's surface so that it
stays liquid at 626°F (330°C) in order to help regulate the nuclear reaction. So, why are
these systems not optimal for society's energy needs? The issues essentially boil down
to risk management and efficiency.

Pressurized Water Reactors were invented in the 1950s and their designs have remained
largely unchanged since then. As Light Water Reactors are by far the most common
variants, we'll focus primarily on them. They are fueled through rods filled with
uranium-oxide pellets (that can contain no more than 4.5% fissile material). Those fuel
rods must be replaced every 18 months along with the reactor core. This requires
shutting down the reactor; both the fuel rods and reactor core remain radioactively
contaminated. If a reaction were to run awry and the fuel rods couldn't be extracted,
they could potentially melt and pool in the water-filled reactor core, generating extreme
heat to the extent that a steam explosion could occur and spread radioactive
contaminants over an area. That event is called a meltdown and is effectively what
happened at Chernobyl. This is very bad.

These risks require Light Water Reactors to be built with extensive safety features:
containment domes of steel-reinforced concrete several feet thick, massive cooling and
pressurization apparatuses and redundant mechanisms that engage in case any of these
systems were to fail. Light Water Reactors also must be built in sparsely populated
areas with large buffer zones in case an area had to be rapidly evacuated in the event of
a meltdown. All of these factors make Light Water Reactors extremely expensive to
build and furthermore carry catastrophic consequences if anything were to go wrong. So,
why are we using uranium-235 as nuclear fuel within Light and Heavy Water Reactors,
especially since better alternatives exist?

Simply stated: because it was discovered that as part of the uranium-235 fuel cycle, it's
possible to reprocess spent fuel rods to artificially produce plutonium-239, and you
need plutonium-239 to build hydrogen bombs.

Nuclear bombs come in varied shapes and sizes. Building a basic weapon is fairly
straightforward if one has the materials. The general idea is to find a way to rapidly
combine highly enriched fissile material together into a critical mass (explosives usually
do the trick), introduce a neutron source to spark a fast fissile chain-reaction, and bam!
You've got yourself a nuclear bomb.

The following image illustrates a gun-type assembly weapon, the bomb the United
States dropped on Hiroshima in 1945. As you can see, it's fairly conceptually simple:

However, building a very powerful bomb that is still small enough to fit into a warhead
requires an implosion-type assembly design, a means of reaching a critical mass by
imploding a larger sphere into a smaller sphere by means of explosives (read: very
difficult). Implosion-type devices can not only be much smaller than gun-type devices,
they are also more efficient and thus more powerful.

But the thing with uranium-235 is that it isn't very effective in implosion-type devices
due to a high critical mass requirement, a trait not shared by plutonium-239. Yet
plutonium-239 does not exist in nature, leading our government to rely on the
reprocessing of spent uranium-235 fuel rods in Light Water Reactors to source it. Once
acquired, we then took weaponization a massive leap further.

As implosion devices can be built to small size (basketball or smaller), we discovered
that the energy released from their detonation inside a bomb's casing can be harnessed
to facilitate the fusion of isotopes of hydrogen, providing a dramatically more powerful
explosion. Thus, the hydrogen bomb was born.

The following image shows the basic stages of a thermonuclear detonation, employing
what is known as a Teller-Ulam device design:

A: Warhead in normal state. Primary (implosion-type fission bomb) on top. Secondary
(the cone-shaped object) is fusion fuel. It's comprised of something called lithium-6
deuteride that's wrapped around a sparkplug of plutonium, and both are suspended
in a plastic foam (polystyrene).

B: The bomb is triggered. The high explosive charges of the primary blow, and
compress the plutonium core into a super-critical state. A fission detonation occurs.

C: The fission detonation emits high levels of X-ray energy that reflect inside of the
bomb case. This irradiates the polystyrene foam.

D: The heat and X-ray irradiation cause the polystyrene foam to turn to superheated
plasma, which expands massively and compresses the secondary as the excess neutrons
inside of the bomb case cause the plutonium sparkplug to fission.

E: At such extreme heat and pressure, the lithium-6 deuteride separates into tritium
and deuterium, isotopes of hydrogen. Under these circumstances, they fuse together to
form helium in a reaction thousands of times more powerful than an atomic bomb:
millions of tons of TNT.

By using the heat, radiation and pressure of an implosion-type bomb, thermonuclear
weapons fuse isotopes of hydrogen together to create helium, forming a second sun
anywhere one is detonated. With potential yields in the megatons (millions of tons of
TNT), bombs can be built that make the ones dropped on Japan seem like firecrackers.

Yet as plutonium-239 is required to produce a thermonuclear detonation, without it,
none of the thousands of hydrogen bombs in the world could exist. And without using
uranium-235 as a nuclear fuel, there would not be a reprocessing cycle in which to
efficiently produce plutonium-239.

So today, we fuel our power plants with nuclear dynamite that creates waste products
that last for thousands of years and rank among the most toxic substances in existence,
for the primary purpose of building nuclear arsenals. That is the dirty secret of our
approach to nuclear energy. We have corrupted the most powerful energy source that
we have ever discovered and ignored its true potential to make a better world, all so we
could feed the war machine at the cost of a world placed in perpetually grave jeopardy.

In the face of such reckless disregard for our civilization and the lives of the people
within it, words seem insufficient to describe what thoughts come to mind, especially
since most of them are unfit for print. However, they should come with the realization
that in spite of our current approach to nuclear energy, we do not have to keep doing
this anymore. That's where thorium comes in.

Thorium-232 in LFTRs can be compared to our current use of uranium-235 in
Pressurized Water Reactors in the sense that they are both sources of nuclear energy.
But in terms of operation and effectiveness, that would be like comparing a
spinach salad to a 4,000-calorie cheeseburger because they're both food items. This is
because thorium fuel and the reactor designs associated with it systematically avoid
nearly every complication of Pressurized Water Reactors, while at the same
time providing a long list of benefits.


Although thorium takes its name from the Norse god of thunder, this silvery metal isn't
as reactive as its namesake suggests, ranking among the least reactive radioactive
elements. It is safe to handle in its raw form and by itself isn't particularly
remarkable. However, its natural lack of reactivity and radioactivity makes it ideal as
a fuel source if implemented in specific types of reactor designs. Cue LFTRs.

As a LFTR is a type of Molten Salt Reactor (MSR), it powers nuclear fission through a
liquid core that is self-regulating, as opposed to the solid fuel rods of Pressurized
Water Reactors. MSRs undergo fission at normal atmospheric pressure: no water needs to be
pressurized to keep the reaction regulated. And because MSRs use liquid fuel as opposed
to solid fuel rods, the reactor is by design always in a "meltdown" state, except in
this case that's a good thing.

Meltdowns are problems for Pressurized Water Reactors because the runaway
reaction can't be controlled, which leads to catastrophic results. But a MSR is designed
to operate in those conditions naturally. In the case of a LFTR, it's one of the few
circumstances in which thorium is sufficiently reactive, and even then, it's a slow and
steady reaction at that. If your mind goes to the tortoise versus the hare fable, you're on
the right track. But where LFTRs really shine is through efficiency, safety and cost.

In concept, the reaction works like this: thorium-232 and uranium-233 (the kind that's
difficult to use in bombs) are dissolved into molten lithium-fluoride salts and fed into
the reactor. The molten salt acts as a carrier for the thorium fuel and as a catalyst for
the reaction, keeping it at a high heat while both moderating the reactor's temperature
and refueling the reactor over time through breeding (in this case, transmuting fertile
thorium-232 into fissile uranium-233).

This self-regulating reaction is efficient and long-lasting. Through a series of heat
exchangers, it heats an inert (non-reactive) gas that is sent through turbines to
generate electricity. The result? Leftover heat that can power auxiliary functions such as
water desalination and hydrogen production on-site (which we'll discuss shortly).

The benefits of the system come from this concept of breeding. All LFTRs are breeder
reactors, which is a term used to describe a nuclear reactor that is capable of producing
its own fuel supply by transmuting fertile fuel into fissile fuel.

A fissile fuel, such as uranium-233 and uranium-235, can undergo fission inside a
reactor; however, fertile fuels such as thorium-232 can do so only once subjected to
transmutation, which breeder reactors make possible by design.

A LFTR's core fissions uranium-233, releasing heat, energy and three neutrons (one of
which is absorbed by a graphite moderator inside the reactor).

These remaining two neutrons combine with the fertile thorium-232 suspended within the
liquid molten salt to form uranium-233 through transmutation. Once transmuted, it's fed
back into the reactor core for sustained fission and thus power. If it's still a bit
confusing, the following image helps explain a little further.

(Note: all LFTR/thorium-specific figures for the rest of this chapter, unless otherwise
stated, were sourced from Thorium, Energy Cheaper than Coal, Robert Hargraves, pages

1. Thorium-232 is fed into the reactor.

2. In small doses, the fertile thorium-232 fuel supply absorbs excess neutrons from
the reactor core. Upon doing so, it's transmuted into fissile uranium-233.
3. The uranium-233 is fed into the chemical separator and extracted.
4. Once the uranium-233 is extracted, it's fed into the reactor core for sustained
fission.
5. Once the reactor has consumed all fissile fuel, the chemical separator extracts the
remaining waste fission products, a far smaller volume than Light Water Reactors produce.
6. Heat radiates from the reactor that's used to generate electricity, and later, both
fresh water and hydrogen fuel from seawater.
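The neutron bookkeeping behind this breeding loop can be sketched numerically. The toy model below is an illustrative simplification (not reactor physics), assuming each fission frees two usable neutrons: one sustains the chain reaction and one converts a thorium-232 atom into new uranium-233, so the fissile inventory holds steady while the fertile blanket slowly depletes.

```python
# Toy neutron-economy model of LFTR breeding (illustrative only).
# Assumption: each fission yields 2 usable neutrons -- one sustains the
# chain reaction, one transmutes fertile Th-232 into fissile U-233.

def run_breeder(thorium, u233, steps):
    """Simulate fission steps; return (thorium, u233) atom inventories."""
    for _ in range(steps):
        if u233 == 0 or thorium == 0:
            break
        u233 -= 1      # one U-233 atom fissions...
        u233 += 1      # ...and one neutron breeds a replacement
        thorium -= 1   # from the fertile thorium blanket
    return thorium, u233

th, u = run_breeder(thorium=1_000_000, u233=1_000, steps=500_000)
print(th, u)  # blanket depletes; fissile inventory is unchanged
```

The point of the sketch is the steady state: the reactor keeps running at full fissile inventory until the thorium blanket itself runs out, which is why refueling intervals stretch to decades.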

As LFTRs can reprocess and resupply their own fuel from the waste products of the
original fission reaction, in addition to a hefty supply of fertile thorium (known as the
blanket), LFTRs ensure abundant fuel for long periods of time at high efficiencies.

How efficient? One ton of thorium-232 in a LFTR outputs the energy equivalent of 250
tons of uranium-235 in a traditional Light Water Reactor, or 4.16 million tons of coal in a
coal-power plant. The average efficiency of power plants today is around 35%. A LFTR
is 54% efficient and uses 99% of its fuel.

The remaining 46% is lost primarily as heat, which can be re-captured
through other processes to produce resources. The breeding process can allow the
reactor to continually reprocess and produce its own fuel from thorium for up to 30
years without replacement, whereas, as you may recall, the fuel in Light Water Reactors
needs to be replaced every 18 months.
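The coal-equivalence figure can be roughly checked with a back-of-envelope calculation. The energy densities below are assumptions on my part (complete fission of heavy metal releases on the order of 80 TJ of heat per kilogram; a ton of coal equivalent is defined as about 29.3 GJ), combined with the plant efficiencies cited above:

```python
# Back-of-envelope check of the coal-equivalence figure (assumed values).
fission_heat_per_ton = 80e12 * 1000   # J of heat per ton of thorium fissioned
coal_heat_per_ton = 29.3e9            # J of heat per ton of coal burned

# Apply the plant efficiencies cited above: 54% (LFTR) vs ~35% (coal plant).
electric_ratio = (fission_heat_per_ton * 0.54) / (coal_heat_per_ton * 0.35)
print(f"{electric_ratio:,.0f}")  # on the order of 4 million tons of coal
```

Under these assumptions the ratio lands near 4.2 million, consistent with the 4.16 million tons cited above.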


Even at the onset, the superiority of thorium compared to uranium-235 in a Pressurized
Water Reactor is clear. The benefits continue:

Safety. Because LFTRs operate at normal atmospheric pressure, there's far less that can
go wrong, and even if something were to, the remedy is simple and effective. As the
reactant is in liquid form, if it were to get too hot, a drain valve at the bottom of the
reactor opens and channels the liquid reactant into smaller storage tanks by force of
gravity, where there would be insufficient mass to sustain the reaction.

This measure makes it physically impossible for a LFTR to melt down in the traditional
sense, even under catastrophic circumstances. Even if a LFTR were targeted by a
terrorist attack and blown up, the liquid reactant would flash-freeze into a solid once
exposed to the open air. Additionally, the amount of radioactive material present in
LFTRs is substantially less than with Light Water Reactors, and it is also short-lived,
remaining radioactive for decades as opposed to millennia. These are benefits that are
in no way shared by our current approach to nuclear energy.

Thorium is plentiful and sustainable for long-term use. About as common as lead, the
global supply of thorium is more than 400% greater than that of all forms of uranium (of
which only 0.7% is useful for power). There is enough in the United States alone to power
the country for the next 10,000 years. Combined with worldwide reserves, there is enough
to power the planet for hundreds of thousands of years. Thorium is also a common
byproduct of rare earth metal mining, presenting sustainable opportunities for easy
sourcing.
LFTRs have a greatly reduced environmental footprint. By virtue of the molten salt
reaction, the radioactive fission products inside a LFTR's core are naturally consumed
by the reactor itself as fuel. With Light Water Reactors, they are absorbed by the reactor
components and become radioactively contaminated, requiring sophisticated (read:
dangerous and expensive) decontamination processes.

Additionally, LFTRs can consume varied types of nuclear material as fuel, including
weapons-grade fissile material and even the nuclear waste generated by Light Water
Reactors, acting as nuclear garbage disposals that slowly and surely generate
electricity for decades. And of the radioactive waste that remains once the reaction
consumes all fuel, the physical amount is less than 1/1000th that of Light Water Reactors.
It decays quickly, with the most toxic radioactive isotopes having a half-life of only
30.17 years, meaning that a supply of radioactive waste from a LFTR would become less
radioactive than natural uranium within a period of 300 years or less. Radioactive waste
from Light Water Reactors can last for more than 10,000 years.
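The 300-year figure follows directly from exponential decay. The sketch below assumes the 30.17-year half-life cited above (that of cesium-137) and computes the fraction of the original activity remaining after 300 years, roughly ten half-lives:

```python
# Exponential decay: fraction of an isotope remaining after t years,
# given its half-life. 30.17 years is the half-life of cesium-137.
def fraction_remaining(half_life_years, years):
    return 0.5 ** (years / half_life_years)

f = fraction_remaining(30.17, 300)
print(f"{f:.4%}")  # roughly 0.1% of the original activity remains
```

Each half-life cuts the activity in half, so ten half-lives leave about one part in a thousand, which is why the waste falls below natural-uranium radioactivity within three centuries.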

LFTRs significantly reduce the possibility of weaponization. The weaponization of a
nuclear reaction is unique from a physics standpoint, for only uranium-235 and
plutonium-239 have been known to make a militarily effective bomb. However, while
not to the scale of either, it is technically possible to create a bomb using material
produced in LFTRs, a point that warrants specific consideration. It has been proposed
that a LFTR's fuel cycle could be hijacked to extract uranium-233 and neptunium-237 to
make a nuclear weapon, so some argue that perhaps LFTRs aren't as safe
as originally thought. Yet these arguments fail to fully consider a few crucial factors,
as there are reasons why neither of those isotopes has ever been used to make deployable
weapons. Chief among them are:

Purification difficulties and inherent dangers. As part of the breeding process to
transmute uranium-233 from thorium, an unavoidable amount of uranium-232 is
produced as well. Uranium-232 decays with high-energy gamma radiation, which
although safe in the confines of a reactor, makes it too lethal to handle by human
hands, a trait not shared by other weapons-grade nuclear material. Without the use
of remote/robotic equipment to manipulate and enrich the material, any individual
seeking to build a nuclear weapon with uranium-233 would likely not survive to see
it completed.

Additionally, gamma emissions from uranium-232 severely damage electronics,
rendering devices incapable of facilitating the precision detonations necessary to
force nuclear material into criticality and initiate fast fission reactions. These
factors make it unlikely that a weapon fueled by uranium-233 would ever be
successfully built, let alone deployed with military effectiveness.

In certain cases, it is technically possible to chemically purify uranium-233 of the
uranium-232 contaminant. But chemical purification to this degree requires highly
expensive and purpose-built infrastructure that is unobtainable to entities other than
states with active nuclear programs, infrastructure that is also detectable by
international monitors. At the levels of sophistication needed to accomplish this feat
without detection, making a bomb with traditional nuclear materials becomes
possible regardless.

It's also been theorized that if the LFTR is using something called a fluorinator,
neptunium-237 can be extracted via a chemical process. Although nobody has ever
made a bomb with neptunium-237, it is technically possible due to its potential to
undergo a fast fission reaction. However, the critical mass requirement for
neptunium-237 is roughly 60 kilograms, making it higher than even uranium-235,
which would make a militarily effective device less practical to build even if the
research and expertise to weaponize neptunium-237 were present.

Even so, most LFTR designs include mechanisms to intentionally contaminate the
reactant with materials that would hinder weaponization from the start. In short: if a
state is able to make a bomb from the thorium fuel cycle, it doesn't need thorium to
make a bomb in the first place.

And regardless, it's still crappy bomb fuel. Even if they could be efficiently
extracted, uranium-233 and neptunium-237 are ineffective fast-reacting fissile fuels
compared to highly enriched uranium-235 and plutonium-239. There are no known
nuclear weapon tests that have ever used neptunium-237, and only two that have
used uranium-233.

Both devices were largely considered failures due to weaker-than-intended explosive
yields, respectively 22 kt (U.S., 1955) and 0.2 kt (India, 1998), relative pittances
compared to modern nuclear weapons. Of those devices, the uranium-233 was
chemically purified (which, remember, is highly difficult to do) and was
complemented by plutonium-239 to increase yield. As a consequence of those
lackluster tests, there exists little research or expertise to weaponize
uranium-233 or neptunium-237, nor to avoid the dangers of doing so.

With these considerations in mind, it's highly improbable that fissile materials
produced through the thorium fuel cycle could be used to build a bomb. All the more so
considering that the effort required to do so is on the order of extracting and purifying
fissile materials from natural sources within a state's national territory.

It is for these reasons that every state with nuclear ambitions has instead invested in
uranium-235 and plutonium-239, for their use is easier and safer than hijacking the
thorium fuel cycle to produce weapons-grade material. While this does not totally
alleviate concerns of proliferation through thorium, it does reduce them significantly,
which is by all means an improvement over our circumstances today.

LFTRs are simpler, smaller and less expensive than Pressurized Water Reactors. As
Pressurized Water Reactors have to be pressurized to 160 atmospheres just to function
(equivalent to the pressure a mile below the ocean's surface), they require redundant
processes and complex systems to manage the reaction and ensure nothing goes wrong.
Additionally, as the radioactive fission products of the uranium-235 fuel cycle present
the potential for catastrophic environmental damage should a reactor melt down or be
destroyed through sabotage, Light Water Reactors require massive security
infrastructure. Combined, these factors cause Light Water Reactors to rank among the
most expensive and over-engineered systems on the planet:

Comparison of LFTR to Light Water Reactor fueled by uranium-235

Fuel: LWR, uranium-dioxide solid fuel rods; LFTR, uranium-233 and thorium-232 in a
solution of molten lithium-fluoride salts.

Fuel lifetime: LWR, 18 months, requiring reactor shutdown to replace (core and fuel
rods remain radioactively contaminated); LFTR, 30 years without replacement (current
graphite core lifetime is in excess of six years).

Fuel input per gigawatt (GW) output: LWR, 250 tons of uranium-235; LFTR, 1 ton of
thorium-232, 250 times more efficient.

Annual fuel cost for a 1-GW reactor: LWR, $60 million; LFTR, $10,000 (estimated).

Total unit construction cost: LWR, $7.0 billion; LFTR, $1.0 billion*

Coolant: LWR, highly pressurized water with a graphite moderator; LFTR,
self-regulating, with passive gravity emergency shutdown.

Weaponization potential: LWR, high; LFTR, low.

Physical footprint: LWR, 300,000 square feet, requiring a large buffer zone; LFTR,
2,000-3,000 square feet (the size of a house), with no buffer zone.

Source 1 | Source 2: Thorium: Energy Cheaper than Coal, R. Hargraves, p. 205.
*Unit cost is expected to fall over time by scaling the learning curve of
manufacturing. In Thorium: Energy Cheaper than Coal, Robert Hargraves estimates the
1,000th commercially produced reactor would cost 60% less than the first commercial
unit, based on economic assessments of the aerospace manufacturing industry from the
University of Chicago.

As LFTRs are spared the size, expense and security requirements of Light Water
Reactors, they can be built far smaller and less expensively. They can also be built closer
to population centers (as opposed to Light Water Reactors that need to be
geographically isolated), considerably reducing the infrastructural requirements to
transmit power to electric grids.

LFTRs are the ideal stepping stone to fusion energy. Universal Energy's purpose
as a framework is to get us to a world where unlimited energy is an everyday reality. It
accomplishes this goal through a combination of technologies because there's no one
silver bullet that gets us there today. That's likely to change in the not-so-far-off
future in the form of fusion power.

Fusion power effectively replicates the conditions of our sun, which fuses isotopes of
hydrogen together into helium to generate immense energy. We can do this today in the
form of hydrogen bombs, but that's not exactly the kind of energy we want. Learning to
control this process and harness it for sustained, unlimited energy, known as controlled
fusion, would no doubt be the holy grail of human achievement. Yet scientists estimate
we're still 25-50 years away from that goal. The problem?

Humanity's problems are pressing, and we may not have the luxury of waiting that
long. Universal Energy is designed to get us there sooner, and investment in LFTRs
advances our command of atomic energy with each new reactor produced. This
accelerates investment into the nuclear energy industry altogether, which, if geared
towards thorium and future fusion deployment, can get us there faster.

LFTRs can be built in a modular, prefabricated capacity. Today's nuclear reactors are
designed and constructed as unique entities, significantly increasing their total cost as
they're essentially made to order. However, recent improvements in manufacturing
allow LFTRs to be built on assembly lines as iterations of product models,
providing two main benefits:

First: the efficiencies inherent to modern manufacturing enable us to lower how much it
costs to build things as more units are produced. This speaks to the concept of
scaling the learning curve, or the learning ratio, which is the percentage reduction in
cost each time the number of manufactured units doubles.

In computing, Moore's law has shown that computer processing power at a given price
doubles every two years. In aerospace manufacturing, the reduction in per-unit cost has
been roughly 20% every time the number of produced units has doubled (R. Hargraves,
p. 221). As applicable to the manufacturing of Light Water Reactors, the University of
Chicago estimates a learning ratio of 10% in its 2004 study The Economic Future of
Nuclear Power (R. Hargraves, p. 220). As LFTRs can be built on assembly lines, their
learning ratio would likely be higher, expected to be on the order of aerospace
manufacturing's.
But even at 10%, this would mean that by the time the 1,000th LFTR was constructed, it
would cost around 40% as much as the first commercially produced unit. Robert Hargraves,
an expert on thorium energy and the author of Thorium: Energy Cheaper than Coal (2012),
explains further:

Boeing, capable of manufacturing $200 million units daily, is a model for LFTR production.
Airplane manufacturing has many of the same critical issues as manufacturing nuclear reactors:
safety, reliability, strength of materials, corrosion, regulatory compliance, design control, supply
chain management and cost, for example. Reactors of 100 MW [megawatts] in size costing $200
million can similarly be factory produced. Manufacturing more, smaller reactors traverses the
learning ratio more rapidly. Producing one per day for 3 years creates 1,095 production
experiences, reducing costs by 65%.

This means that while the estimated price tag for a LFTR currently stands at $200
million, that price would fall as more units were produced, making them increasingly
affordable and economically viable.
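The learning-curve arithmetic behind these figures is simple to reproduce. In the standard model, unit cost falls by the learning ratio each time cumulative production doubles, so the cost of the n-th unit is the first-unit cost times (1 - r) raised to log2(n). The sketch below applies the 10% University of Chicago ratio to the 1,095 units in the Hargraves quote:

```python
import math

# Standard learning-curve model: unit cost falls by the learning ratio r
# each time cumulative production doubles.
def unit_cost(first_unit_cost, n, learning_ratio):
    return first_unit_cost * (1 - learning_ratio) ** math.log2(n)

# 1,095 units at a 10% learning ratio, starting from a $200M first unit:
c = unit_cost(200e6, 1095, 0.10)
print(f"${c / 1e6:.0f}M, a {1 - c / 200e6:.0%} reduction")  # ~65% reduction
```

The result matches the 65% reduction Hargraves cites, and at 1,000 units the same formula gives roughly 35-40% of the first-unit cost, consistent with the "around 40%" figure above.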

Second: manufacturing LFTRs on an assembly line ensures standardization, and
standardization provides modularity. This becomes important when building smaller
reactors, because smaller, modular and standardized reactors are not only considerably
less expensive to construct, they are also easier to deploy.

If you recall, a core requirement of Universal Energy is universal deployment, for many
regions that suffer from the consequences of resource scarcity are geographically
remote and/or feature terrain hostile to the construction of something as large as a
power plant.

A smaller LFTR manufactured on an assembly line can be built rapidly and plugged
into any grid in a relatively short time period. So if, for example, a region needed to
quintuple its electricity generation capacity in a matter of weeks, modular LFTRs make
this possible, and effectively anywhere. Frozen climates? No problem. Overcast
climates? No problem. Desert climates? No problem. This would also pay dividends
toward disaster-relief efforts, peacekeeping missions and space exploration.
The transformational benefits of the thorium fuel cycle and the associated reactor
designs make thorium one of the most attractive energy sources we have available.
However, thorium power does require additional research to make it commercially
marketable and economically viable. While in no way diminishing its capabilities to
serve as a component of Universal Energy, this does warrant additional consideration
when planning how to move forward with its use.

A consequence of the nuclear arms race during the Cold War is that the lion's share of
research and infrastructure behind nuclear power concerns Pressurized Water Reactors
and the uranium-235 fuel cycle, and apart from two successful experiments in the late
1960s, domestic research on LFTRs is limited. This is beginning to change, as several
domestic and foreign companies are investing deeply in LFTR technology. In 2010, Japan
launched a highly successful thorium-fueled MSR experiment that created a prototype
reactor that could generate electricity at 2.85 cents per kilowatt-hour. With cost
reductions due to the learning ratio applied, this is well within Universal Energy's
target price for electricity generation.

Other companies, such as ThorCon Power, TransAtomic Energy and LightBridge
Energy, are working to eventually create viable MSR/LFTR prototypes for larger-scale
deployment. As nuclear energy regulations are geared for uranium-235, public agencies
must adopt forward-thinking regulatory approaches to help pave the road for their
future success, especially as investment in LFTRs abroad accelerates.

This investment in manufacturing expertise, and in solving the last remaining
engineering challenges to chart the most efficient path for LFTR deployment, is
essential, as we can't afford to depend on foreign research and development to keep us
on the cutting edge.
It's well within our capabilities to address these challenges, as we've done so in other
industries already. And if we were to truly invest in these technologies and work to
shift social focus toward their benefits, we would have a clean, sustainable, affordable
and rapidly deployable means of nuclear energy.

But by themselves, advanced reactor technologies are not sufficient. The idea isn't
to simply generate enough energy to power civilization, but rather to generate levels of
energy many times beyond that, for only this can enable us to end resource scarcity in
totality. Doing so requires a technology that can not only generate lots of energy, but
can also do so strategically and transmit it efficiently. That goal is met through solar
power, and we'll dive next into how.

When the sun is shining I can do anything; no mountain is too high, no trouble too difficult to
overcome.

- Wilma Rudolph

Chapter Three: Solar Infrastructure

Solar power is the second technology deployed by Universal Energy, though through a
decidedly different approach than we take today, one that allows us to maximize both
solar's utility and the scale on which it's deployed. For advocates of renewables, the
benefits of solar power need no introduction: more solar energy reaches Earth's surface
in one hour than all of humanity uses in an entire year, solar panels require no moving
parts, and advances in manufacturing now enable us to build environmentally friendly
panels at record-low costs.
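The hour-versus-year claim is easy to sanity-check with round numbers. The sketch below uses the top-of-atmosphere solar constant and an assumed figure for annual global primary energy use (both my assumptions, not figures from this book); sunlight actually reaching the surface is roughly half of this after reflection and absorption, which leaves the claim in the right order of magnitude:

```python
import math

# Rough check: one hour of intercepted sunlight vs. a year of human
# energy use, using assumed round numbers.
SOLAR_CONSTANT = 1361          # W per square meter, top of atmosphere
EARTH_RADIUS = 6.371e6         # meters
WORLD_ANNUAL_ENERGY = 5.8e20   # J, approximate global primary energy use

# Earth intercepts sunlight over its cross-sectional disk, pi * r^2.
intercepted_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2
one_hour = intercepted_power * 3600
print(one_hour / WORLD_ANNUAL_ENERGY)  # ratio is on the order of 1
```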

Yet at the same time, solar power has drawbacks. There are reasons why it accounts for
less than 1% of energy production nationwide, and while the usual suspects in dirty
energy lobbies certainly play a role in this, there are legitimate obstacles to
implementing solar power over a large scale. As Universal Energy is designed to bypass
these obstacles from the ground up, let's take a minute to review these challenges and
why they're so important:

Location and transmission. Electricity, like sound, weakens over distance due to
resistance in transmission mediums. As a general rule, the longer electricity must travel,
the weaker it becomes upon arrival to its final destination. For example, there is
sufficient open space in the American southwest to place enough solar panels to power
the planet several times over. Yet delivering electricity from that point to others over
long distances is difficult because power lines have too much resistance within them to
transmit electricity efficiently over thousands of miles. Power lines also cost millions of
dollars to construct, making it expensive to transmit solar-generated electricity even if
more efficient methods emerged.
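The intuition behind these transmission losses can be sketched with the basic resistive-loss relation: for a fixed power delivery, loss scales as P·R/V², which is why long lines run at very high voltage. The figures below are illustrative assumptions, not data for any real transmission line:

```python
# Simplified resistive-loss model for a long transmission line
# (illustrative figures, not data for any real line).
def loss_fraction(power_w, voltage_v, ohms_per_km, distance_km):
    current = power_w / voltage_v                       # I = P / V
    loss = current ** 2 * ohms_per_km * distance_km     # I^2 * R
    return loss / power_w

# 1 GW sent 1,500 km at an assumed 0.01 ohm/km, at two common voltages:
for kv in (345, 765):
    print(kv, f"{loss_fraction(1e9, kv * 1e3, 0.01, 1500):.1%}")
```

Even in this toy model, raising the voltage from 345 kV to 765 kV cuts the loss fraction several-fold, yet meaningful losses remain over continental distances, which is the obstacle described above.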

Deployment expense and physical space. Beyond the cost to transmit electricity, solar
power is still relatively expensive to implement, even in locations where electricity can
be consumed at the point of generation. While eco-conscious households and businesses
can freely choose to adopt it, the opposite is generally true where governments are
concerned, as devoting large sums of tax dollars to solar projects is frequently met
with less than enthusiastic support from the public. As the value of solar power is
proportional to its extent of implementation, this limits its utility considerably, even
as the price of solar panels continues to fall.

This problem is also made trickier by the issue of physical space: even if the political
support and finances existed to construct solar power plants on large scales, the
question of where to build them remains. Solar power needs a relatively large area to
work, a requirement that becomes significantly more expensive as population density
increases, and rooftops alone aren't going to cut it to generate power for millions of
people. Mindful of this, it's unlikely that swaths of prime real estate could be
purchased for the purpose of solar power, making widespread adoption harder.

Lack of standardization and prefabrication. As with many fledgling industries, our
current approach to solar power reflects less than ideal degrees of standardization.
Today, solar panels are designed to be installed in varied locations: roofs of buildings,
soil, rock, motorized platforms, etc. But diversifying and decentralizing approaches
to solar power creates a hodgepodge of options of uncertain effectiveness, as
opposed to implementing solar power through superior, modular and proven methods
that can be standardized. In effect, this hinders prefabricated manufacturing of
turnkey solar systems on a large scale, which increases end unit cost and limits solar
power's overall usability.

These problems have slowed the adoption of solar power nationwide, and while
advances in research and manufacturing have mitigated their impact to varying
degrees, they remain obstacles. Yet these problems are a reflection of our current
approach to solar power, not the concept as a whole, which is where solar
infrastructure comes in.

Solar infrastructure is an approach where solar panels are installed on structures of
large size: the bridges, stadiums and highways of our society. As these structures
already exist and most are publicly owned, they give us an excellent location to install
millions of solar panels in densely populated areas. Critically, this removes both the
need to transmit electricity over long distances and the need to buy expensive land on
which to install solar panels.

This idea of solar infrastructure can work on effectively anything that's large enough,
but Universal Energy identifies two options with the most potential above all others:
the National Aqueduct and road networks close to cities. As the National Aqueduct is
detailed in chapters five and six, we'll devote this chapter to exploring how road
networks can play a major part in revolutionizing our approach to solar energy.

A solar road is a road surface that is not poured, like asphalt or concrete, but is
instead constructed from modularly installed solar panels that vehicles drive on. The
idea was first conceived and patented by a U.S. company named Solar Roadways with the
goal of using solar technology to solve today's energy-driven social problems. Since
then, the idea has been taken in a slightly different direction and released as a product
called Wattway by Colas, a French road construction company.

Both companies have fully functional prototypes ready for production that are already
being deployed today. As of 2016, the Missouri Department of Transportation has
bought panels from Solar Roadways to deploy as a pilot project, and the French
government is planning to install 1,000 kilometers of Wattway solar road surfaces over
the next five years. Both products take different approaches to address the
complications surrounding solar power today, and also the constant (and expensive)
decay facing existing roads.

In a nutshell, here's how they work:

Sandwiched between a base layer and a reinforced surface of extremely strong
composite, a solar road panel generates electricity from sunlight while also providing a
driving surface for vehicles. In doing so, it also offers features and improvements over
both asphalt road surfaces and current solar technology.

The image below shows a concept image of Solar Roadways. Note that in this concept
(and in real-life prototypes) the LED lights provide road markings, not paint.

Although they are still in prototype stages, both products can be factory-prefabricated
with recycled materials sourced domestically (no rare earth metals needed), and are
intended to be installed in a modular capacity. One simply fastens a panel unit into a
road surface that has been modified to accept them and connects it to the local electric
grid. Once installed, each panel independently generates electricity, and both Solar
Roadways and Wattway have an estimated lifetime of 20 years.

Real-life image of Wattway by Colas. Panels can easily withstand the weight of
commercial vehicles.
They're perfectly safe to drive on, as recent advances in chemical engineering now
provide the ability to build composite surfaces that are stronger than steel, allowing
the driving surface of solar road panels to be substantially stronger than traditional
roads. Additionally, the driving surface can be textured to exceed the abrasiveness of
asphalt so tires can grip the road surface in all weather conditions.

If you're wondering how effective this is in real terms: modern roads are tested with
machines that analyze their traction and strength. Most roads pass the test. In the case
of Solar Roadways, the prototype broke the machine. To explain how, here is an
explanation from Solar Roadways' chief inventor, Scott Brusaw:

"We sent samples of textured glass to a university civil engineering lab for traction testing. We
started off being able to stop a car going 40 mph on a wet surface in the required distance. We
designed a more and more aggressive surface pattern until we got a call from the lab one day:
we'd torn the boot off of the British Pendulum Testing apparatus. We backed off a little and
ended up with a texture that can stop a vehicle going 80 mph in the required distance."

In the case of Wattway by Colas, several tests have shown the composite panel covering
is strong enough to withstand one million truck tires driving over it.

The Solar Roadways prototype comes with integral conduit channels for utility lines
and water runoff management. Additionally, both Solar Roadways and Wattway come
with internal, grid-connected heating elements that can melt snow 24/7, avoiding the
need for expensive snowplow operations. It's worth noting, however, that both Solar
Roadways and Wattway have been tested with snowplows and have shown no adverse
effects to the panel surface.

Left: conduit channels can carry water, internet, gas and telecommunications lines and
allow for instant maintenance without digging up existing roads. Right: internal heating
elements melt snow off panels 24/7, removing the need for plowing.

The extreme strength of the composite surface of both Solar Roadways and Wattway
allows the panels to support tens of thousands of pounds. Existing tests have shown that
Solar Roadways can support up to 250,000 lbs., more than three times the legal weight
limit for a semi-truck.

Both solar road prototypes come embedded with LED lights that remove the need for
traditional road painting. These lights can display sophisticated messages, warnings and
road lines that change instantly upon reprogramming by a municipal authority. Sensors
embedded within the panels can detect accidents, fallen trees or animals crossing up
ahead, notifying motorists accordingly. In the images to the left, the top four are
real-world photographs; only the bottom two are concept images.

Just as importantly, solar road panels install rapidly. Solar Roadways' version is
fastened down to concrete surfaces designed to accept them, whereas Wattway panels
are glued down to existing road surfaces. Once automation and prefabrication of both
prototypes begin in earnest, this will allow us to install road surfaces in a matter of
hours to days, as opposed to weeks and months with asphalt.

Left: Solar Roadways install rapidly via fastening to a concrete surface designed to
accept them.
Right: Wattway panels glue securely to existing road surfaces.

Regardless of which prototype provides more utility in a given circumstance (something
we'll go over shortly), solar road panels allow us to solve a lot of problems. But before
we get into how, it's worth noting that a few criticisms came up in response to their
public debut. While some skeptics broadcast unfounded cynicism, other criticisms raise
valid questions that warrant a second look.

Critics have claimed that roads are a poor location to install solar panels, that this
approach is prohibitively expensive, that traffic will obstruct sunlight and eventually
destroy the panels themselves, and that in general, this idea makes little sense.

Anticipating this, the inventors of Solar Roadways went to extensive efforts to offer
public-facing data and statistics that preemptively address these criticisms (as
did Wattway by Colas), but a few skeptics jumped the gun and published their critiques
without all the facts in place.

I'd like to take a moment to address these criticisms and explain why they're
unfounded. Before I do, it's important to keep in mind that these models are debut
prototypes. Their ability to advance our way of life is measured on a future scale;
the first computers and smartphones were hardly the equals of their successors. Solar
road technology is no different: if we judged commercial air travel by the performance
of the Wright Brothers' first airplane, we'd still be on horseback.

With that said, we'll review some of the biggest criticisms of this approach to solar
energy and address them in turn:

1. Roads are poor locations to install solar panels, which are better suited for
remote land, buildings, or the side of or above roads. Remote land is just that:
remote, requiring expensive power lines that are susceptible to transmission loss over
distance, problems solar roads wouldn't have if built close to where their electricity is
consumed. Roofs of buildings are limited in surface area and require independent
investment from each building owner. That means if you have 10,000 buildings, covering
them all with solar panels requires individually convincing 10,000 building owners to
buy and install them out of pocket. Across the nation, this would be equivalent to
making cat-herding an Olympic sport.

As public roads are owned by the government, it can act as a single buyer for large-scale
solar surface deployment, and government is the only entity with both the authority and
finances to make infrastructure investments on this scale. Compared to convincing
individual property owners, it's a no-brainer.

Deploying solar at the sides of roads is a good idea as well (which we'll get into with
the National Aqueduct), but because solar roads can offset the high costs of road
construction and maintenance, in addition to other cost reductions elsewhere (automatic
snow melting, LEDs in lieu of road painting, etc.), these offsets are what make solar
roads affordable, and road surfaces one of the most attractive locations available to
install solar panels.

2. Solar road surfaces will cost trillions of dollars. For 100% nationwide
implementation, that is true. And that's a bargain, for we already spend hundreds of
billions per year at all levels of government on our roads. As roads are state-owned,
solar roads can be installed by public works departments in place of asphalt roads,
which generate no revenue and don't last as long, removing the need to buy land to
generate solar power. This is money that would have to be spent anyway on new and
existing roads.

Maintenance and productivity losses from delays would also be minimized. Asphalt
road work takes months and causes traffic delays that cost our economy an estimated
$124 billion in annual productivity loss, roughly a trillion dollars every eight years. On
top of that, the current price tag to repair our decaying infrastructure by 2020 is
estimated at $3.6 trillion. With solar road surfaces, panel installation and replacement
takes hours. Both Solar Roadways and Wattway are estimated to last 20 years, and can
generate revenue through multiple sources.

So yes, solar road surfaces might cost trillions of dollars if we deployed them
nationwide, but so will our current roads over time. Solar roads are simply a superior
product at a greater value.

3. Traffic will obstruct sunlight from reaching the panels and will ultimately
destroy the panels and their auxiliary features over time. Just as a hypothetical
parade of ants walking on your back wouldn't prevent you from getting sunburned
while lying outside, traffic can only minimally block sunlight from solar road panels. As
you can see from the following image, even in rush-hour traffic the road surface is
largely unobstructed:

Because traffic is usually light when the sun is strongest, rush-hour conditions are the
exception; coverage would more commonly resemble the bottom half of the image
above, with few vehicles on the road. And as the vehicles themselves are constantly
moving, any panel they cover is blocked from sunlight for only a second at a time, a few
times a minute.

As to road traffic harming the panels, it's difficult to overstate just how strong
composite surfaces can be. They are far harder and stronger than asphalt (which is
itself a liquid poured over gravel). On the Mohs hardness scale (which runs from 0-10,
with diamond hardest at 10), asphalt ranks at 1.3. Plate glass (the kind in your
windows) has a hardness of 6. Tempered glass, which solar road panels are made from,
can reach 7 and is much stronger, strong enough to withstand a .50 caliber bullet.

Solar Roadways' current glass prototype passed two 3D Finite Element Analysis tests
showing that it could withstand both compression and shear forces up to 250,000
lbs. The maximum legal weight of a semi-truck is 80,000 lbs. The textured glass
surface is designed to mitigate cyclic pressures from heavy vehicles (weight applied to
a small area at potentially destructive resonances), and the way road panels are
fastened into place is designed to absorb shocks. In short, they're far stronger than
both asphalt and concrete, and are tested to support a weight nearly twice that of
an Abrams tank. That's plenty strong for normal traffic. The panels would be just fine.

With these criticisms addressed, well continue from here by reviewing the benefits of
solar road surfaces in greater detail.


Thanks in large part to social infrastructure and public works projects after the Second
World War (namely the Eisenhower Interstate Highway System), the United States has
one of the largest and most comprehensive road systems on the planet. America's roads
cover tens of thousands of square miles of surface area, and they're cleared at the sides
to avoid obstructions from above. This makes roads uniquely suited for solar panels
because they can simultaneously solve the three main complications with solar power:
transmission, installation location and standardization.

Concerning location and transmission, consider the map above. Without exception,
every major population center in our country is served by a section of these roads,
allowing us to place solar panels close to and within any municipality. This removes the
problem of transmission expense and resistance over distance, as electricity can be
consumed at the points of generation.

This also removes the problem of installation location. We've discussed how it would
be prohibitively expensive to purchase land for solar power within areas of high
population density. Yet solar roads can be implemented without having to buy land, as
nearly every road in the nation is publicly owned, allowing any given road authority
to implement them on its own.

Regarding standardization: roads are generally flat, run for thousands of miles and, at
least within the United States, have universal 12-foot lane widths. This allows solar road
panels to be built to a single standard, removing the need for ad-hoc, customized
implementation (minus the panels we install on the National Aqueduct) and reducing
the costs and complications of the varied surfaces solar panels are currently installed
on.

Solar road panels also provide a standardized method for road traction. The image to
the left shows the raised, abrasive driving surface of Solar Roadways, which is
composed of angular hexagons that double as prisms. These prisms feed sunlight
directly into the solar panels from nearly any angle, which not only increases efficiency
in northern climates but also emits light from the embedded LEDs in multiple
directions, increasing visibility and overall effectiveness.



As touched on previously, road work is highly expensive today. One of the first reasons
is that asphalt is an oil-based product: as oil gets more expensive, so does asphalt and
the cost of anything to do with it: paving, sealing, maintenance, etc. Although costs vary
by region and road type, ballpark figures are sobering: in 2001, a ton of liquid asphalt
could be bought for around $140. Today it hovers around $500, and when oil prices
spiked in 2012 it was closer to $600. A 250%+ price increase adds up quickly when
paving thousands of square miles. But when I say "highly expensive" to the extent that
road costs can offset the costs of solar roads, it's still an understatement. Let me
explain:

Road construction costs are often broken down into something called a "lane-mile," a
mile-long stretch of one lane of road. So if it costs $1 million per lane-mile to pave a
road, a 4-lane road would cost $4 million per mile. According to Texas A&M University,
it can cost between $2 million and $10 million per lane-mile to add lanes to an existing
freeway, depending on the location and nature of the road (urban road work is usually
more expensive). For normal surface streets, the cost can range from $750,000 to $3.5
million per lane-mile, according to the Arkansas Department of Transportation.

And that's just to widen a road; the costs of new construction are a lot higher.

Texas A&M University assesses that the cost of constructing a new freeway can range
between $5 and $20 million per lane-mile. According to Pulitzer Prize-winning
Politifact, citing data from the Rails-to-Trails Conservancy, new rural road construction
ranges from $3.1 to $9.1 million per lane-mile. Digging deeper into data compiled by
Rails-to-Trails, their 2008 Active Transportation for America report estimated that a
one-mile stretch of standard 4-lane road costs $50 million to construct, which comes to
$12.5 million per lane-mile. And in the document cited by Politifact, the Rails-to-Trails
Conservancy estimates that road construction costs can rise even higher. The document
states:

The cost to construct one lane-mile of a typical 4-lane divided highway can range from $3.1
million to $9.1 million per lane-mile in rural areas depending on terrain type and $4.9 million to
$19.5 million in urban areas depending on population size. However, in urban areas restrictions
(high cost of additional right-of-way, major utility relocation, high volume traffic control,
evening work restrictions, etc.) may increase the cost per lane-mile. If restrictions exist, the cost
to construct one lane-mile of a 4-lane divided highway can range from $16.8 million to $74.7
million. The cost of $74.7 million per lane-mile in areas of severe restrictions may not represent
the maximum cost per lane-mile and should be used as general guideline only. Individual
projects may include extreme conditions warranting a much higher cost.

That's between $67 million and as much as $300 million for a single mile of 4-lane
highway. It's worth noting that Politifact found these figures to be in line with the
conclusions of the other independent sources they queried.

While individual road costs vary due to many factors (climate, latitude, slope,
population density, etc.), once we start talking about hundreds or even thousands of
miles of road surface, the costs of asphalt paving quickly turn stratospheric. With
16-26% of America's road surface in poor condition, combined with all the other cost
externalities, it's plain as day that the immense (and oft-ignored) cost of asphalt road
paving drains trillions of dollars from our economy over time. So when skeptics of solar
roads cite cost as a criticism, they fail to account for the thirteen-figure price tag of the
road surfaces solar panels would be replacing.

So how superior are solar roads ultimately in terms of value? To figure that out, let's
make some basic assessments about cost and electricity output. The math behind these
assessments is included in this writing's Appendix for those inclined to look it over.
Based on the logic in that material, we'll use the following figures going forward:

We'll assess that Solar Roadways panels generate an average of 22.2 kilowatt-hours of
energy per square foot per year, at a cost of $114 per square foot.

We'll assess that Wattway by Colas panels generate an average of 19.22 kilowatt-hours
of energy per square foot per year, at a cost of $54 per square foot.

With these figures established, let's now compare them to asphalt roads:

Road lanes in the United States are universally standardized at 12 feet, so one lane-mile
of road surface (12' x 5,280') comes to 63,360 square feet (5,885.3 square meters).

For Solar Roadways, that comes to roughly $7.2 million per lane-mile.
For Wattway by Colas, that comes to roughly $3.4 million per lane-mile.
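For readers who want to check the arithmetic, the per-lane-mile figures follow directly from the per-square-foot prices above (the $114 and $54 estimates come from the Appendix); here's a minimal sketch:

```python
# Lane-mile cost from per-square-foot panel pricing (Appendix estimates).
LANE_WIDTH_FT = 12      # U.S. standard lane width
MILE_FT = 5_280         # feet per mile

def cost_per_lane_mile(price_per_sqft: float) -> float:
    """Cost of surfacing one lane-mile at a given price per square foot."""
    sqft_per_lane_mile = LANE_WIDTH_FT * MILE_FT   # 63,360 sq ft
    return sqft_per_lane_mile * price_per_sqft

solar_roadways = cost_per_lane_mile(114)   # $7,223,040 (~$7.2 million)
wattway = cost_per_lane_mile(54)           # $3,421,440 (~$3.4 million)
```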

That is firmly at the lower end of new road construction across the board, and
significantly less expensive than road construction and maintenance in cities. Keep in
mind as well that because solar road surfaces install rapidly, they reduce cost
externalities like road delays and traffic congestion that add to the total expense of
asphalt. Even if the cost of solar roads were doubled, even tripled, they would still sit at
the lower end of urban and environmentally sensitive road construction.

And the thing about that asphalt road surface? It doesn't generate any electricity. Using
the figures above, one lane-mile of solar road surface would generate 1.4 million
kilowatt-hours per year with Solar Roadways and 1.2 million kilowatt-hours with
Wattway. Applied across a one-mile stretch of four-lane highway, that respectively
comes to 5.6 and 4.9 million kilowatt-hours generated over a calendar year. As the
average home consumes roughly 11,000 kilowatt-hours of electricity per year, that's
enough to power 511 homes with Solar Roadways and 443 homes with Wattway by
Colas. I'd like to see traditional asphalt pull that off.
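The energy side of the comparison works the same way; here's a rough check using the per-square-foot yields above and the ~11,000 kWh average annual household consumption:

```python
# Annual generation of a solar road surface vs. household consumption.
SQFT_PER_LANE_MILE = 12 * 5_280   # 63,360 sq ft per lane-mile
HOME_KWH_PER_YEAR = 11_000        # approximate U.S. average household use

def annual_kwh(yield_kwh_per_sqft: float, lanes: int = 1) -> float:
    """Yearly kWh from a one-mile stretch of road with the given lane count."""
    return SQFT_PER_LANE_MILE * yield_kwh_per_sqft * lanes

# One mile of 4-lane highway:
sr_kwh = annual_kwh(22.20, lanes=4)   # ~5.6 million kWh (Solar Roadways)
ww_kwh = annual_kwh(19.22, lanes=4)   # ~4.9 million kWh (Wattway)

sr_homes = sr_kwh / HOME_KWH_PER_YEAR   # ~511 homes
ww_homes = ww_kwh / HOME_KWH_PER_YEAR   # ~443 homes
```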


Thousands of miles of American roadways are falling apart nationwide, and as
discussed, the present cost to repair them runs well into the trillions. Yet instead of
continuing to use asphalt, replacing decaying roads with solar road surfaces allows us
to revitalize our national road infrastructure while simultaneously providing an
immense source of power. And we can take it even further, as our road surfaces
themselves haven't really changed since asphalt paving was first domestically
introduced in the 1870s. Solar road surfaces make possible a completely different
approach to our road networks, one better suited for the 21st century and beyond.
A few examples include:

Solar Roads Reduce the Scope of Road Work. We've seen that solar road panels install
quickly: Solar Roadways panels fasten down to a concrete surface designed to accept
them, while Wattway sheets glue over existing road surfaces. While these installation
methods still reflect prototype designs and future advances will undoubtedly improve
installation efficiency, they're already dramatically faster than asphalt paving. Panels
can also be refurbished and/or recycled, a marked improvement over asphalt, which
must be destructively recycled. This increases the longevity of roads over time, as they
can be upgraded and improved indefinitely. And as solar roads use solid panels, they
come with an added bonus: no potholes to fill.

In the case of Solar Roadways, the composite surface also has a self-cleaning function
that works in two stages. First, ultraviolet sunlight breaks down organic dirt and
rubber on the glass (even on overcast days). Second, the glass is chemically treated to
be hydrophilic, as opposed to most glass, which is hydrophobic. Hydrophilic glass
allows rainwater to spread evenly over its surface, washing away dirt, rubber and
grime. In the event of drought or a lack of rain, municipal road cleaning services would
simply need to rinse the surface off rather than scrub it clean. And although
hydrophilic surfaces tend to be slippery, the abrasive texture of the glass alleviates this
problem.

Solar roads provide smart traffic management. Since the rise of the automobile, traffic
has become an increasing problem in nearly all metropolitan areas. While not solving
the traffic problem directly, solar road panels offer the ability to provide real-time
data to traffic management services, data that can be used to relieve congestion and
more accurately predict traffic patterns. And as solar road panels can carry sensors
that measure the weight, speed and direction of vehicles, they can also monitor roads
to instantly alert emergency services in the event of accidents or other hazards, such as
falling trees and animal collisions.

Solar Roadways' conduit channels can effectively manage and recycle water runoff.
Years of research have found that rainfall runoff from roads is a leading cause of
waterway pollution nationwide, as spilled oil, fuel, chemicals and other litter
eventually make their way into rivers. Solar Roadways can solve this problem with
water pipes within its conduit channels that capture runoff and pump it to treatment
facilities.

Utility lines and smart grids. We've so far paid a good bit of attention to Solar
Roadways' conduit channels, especially as to how they could revolutionize the way we
run utility lines and power our electric grid as a whole. That's because running and
repairing power and utility lines is very expensive, especially since they usually sit
above or underneath existing roads. In the latter case, maintenance or installation of
utility lines often leads to prolonged periods of traffic congestion due to road closures
and detours.

Yet with Solar Roadways' conduit channels, if a utility line needs to be installed or
replaced, it is simply a matter of opening the channel, performing the necessary tasks
and closing the channel again. This would dramatically reduce the time and expense of
utility line installation and would provide an additional revenue source, as companies
would undoubtedly pay a premium for a convenient, rapidly accessible platform to run
their utility services to customers.

This is also true for wireless internet and cellphone reception. Mobile service is
currently provided through towers, which send and receive signals via radio
transmission. Radio, however, is frequently spotty and often obstructed by buildings
and terrain. Solar Roadways could instead have mobile service hardware embedded
within the panels themselves, allowing service in tunnels, remote areas, mountains, etc.
To say this also benefits self-driving cars as another means of navigation is a significant
understatement.

All of these features speak to the modular reliability of deploying independently
operating solar road panels built to a single standard, which makes them uniquely
capable of providing a smart electric grid. Our national electric grid today is composed
of 7,600+ decentralized power plants, owned by 3,200+ competing utility companies,
transmitting electricity through 450,000+ miles of high-voltage power lines, relay
stations and transformers. In short: a total mess.

Whenever a power line goes down (storms, trees, transformer overload, accidents), the
region it serviced goes dark and remains so until new power lines are constructed or a
workaround is built. Due to the difficulty of preventing these disruptions, electrical
outages leave an average of 500,000 Americans without power for two hours or more
on any given day. This is a costly problem. The National University System Institute for
Policy Research concluded that a single 2011 blackout in San Diego cost the city
between $97 and $118 million. That's one blackout in one metropolitan area of one
state. Nationwide, the Lawrence Berkeley National Laboratory assesses that power
outages cost the U.S. economy some $80 billion each year.

Addressing these problems with current methods will be both challenging and
expensive, akin to making a computer built in the 1980s compete with today's latest
models. By some estimates, it will cost upwards of $1 trillion just to improve our
electric grids to meet demand by 2025.

Worse, in times of extreme heat or cold, this presents a risk to public safety. In the
past decade, power blackouts have been blamed for hundreds of deaths (141 in
California during a July 2006 heatwave, for example). This also leaves us vulnerable
to terrorist attacks from strategically minded enemies, as targeting power centers is a
uniquely potent threat.

As modular yet interconnected systems, solar roads produce power independently of
each other. Thus, even if a section of panels were destroyed, the remainder would still
function as normal. This means that even in an environmental disaster or other
destructive event, large segments of the electrical grid (as well as utility services) would
remain intact, with shortened repair turnaround for the rest. The end result?
Unprecedented reliability and security.

Standing alone, these benefits are transformational. Yet as applies to Universal Energy,
the most important feature of solar roads is their potential for energy generation.

So what's the best way to go about deployment? Rather than replacing all road
surfaces overnight (which would cost tens of trillions of dollars), Universal Energy
instead seeks to replace road surfaces where they are 1) most needed and 2) most viable
for solar power. Where might that be? Locations where roads see the most traffic, where
the most power is consumed, and where road work is most expensive. In short: cities.

Cities are by far the best locations to deploy solar roads because they present the perfect
mix of circumstances to maximize their effectiveness and affordability. Cities have the
highest population densities and proximity to industry, and thus consume the most
electricity. Just as importantly, road construction and maintenance is most expensive in
cities, as is the impact of cost externalities from construction-related traffic and
congestion. Every city in the country is ringed by a highway, along with road networks
of central arteries, on/off ramps and city streets that have been cleared of trees and
obstructions to sunlight. Most cities are also laid out in square grids, which reduces
sunlight obstruction from buildings during peak daylight hours and could even reflect
non-insignificant amounts of sunlight from buildings onto road-surface panels.

This allows solar roads to solve multiple problems at once: not only do they simplify
road work in the areas where it is most expensive, they also generate millions of
kilowatt-hours at the point of consumption, removing the need to transmit electricity
over long distances. Solar Roadways can be used where commercial traffic is heaviest
or where running utility lines is critical, namely downtown districts that become
paralyzed by lengthy roadwork. Conversely, Wattway's promise is perhaps best
realized on external arteries and highway ramps where there are fewer complications.
And both options can be deployed in lieu of asphalt whenever a road needs to be
replaced.

Here's an example of how this could work:

It's standard for a major metropolitan area to have an annual transportation budget of
$1 billion; roughly 20% of Miami-Dade's $6.8 billion annual budget, for example, is
devoted to roads, rail and transport ($1.36 billion total). If significant percentages of a
city's road budget went to the purchase and installation of solar roads, it could easily
afford enough solar road panels to generate hundreds of millions of kilowatt-hours
annually under optimal conditions. That's energy generated simply by investing in a
superior road surface, with money that would otherwise have to be spent on asphalt
roads, roads that lack all of the benefits listed above. As wear and tear claims greater
stretches of asphalt over time, they can be replaced by solar roads that last longer and
generate electricity. Year after year, this adds up to billions of kilowatt-hours paid for
simply by routine road maintenance.
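To make the budget math concrete, here is an illustrative sketch. The budget size and the fraction spent on panels are assumptions chosen for illustration; the price and yield figures are the Wattway estimates used earlier in this chapter:

```python
# Hypothetical: a city redirects part of its road budget to solar road panels.
ROAD_BUDGET = 1_000_000_000   # $1B annual transportation budget (illustrative)
SHARE_FOR_PANELS = 0.25       # assumed fraction of budget spent on panels
PRICE_PER_SQFT = 54           # Wattway cost estimate used earlier ($/sq ft)
YIELD_KWH_PER_SQFT = 19.22    # Wattway annual yield estimate (kWh/sq ft)

sqft_installed = ROAD_BUDGET * SHARE_FOR_PANELS / PRICE_PER_SQFT
yearly_kwh = sqft_installed * YIELD_KWH_PER_SQFT
# ~4.6 million sq ft installed in year one, generating ~89 million kWh
# annually, and that installed capacity compounds with every budget cycle.
```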

This marks a subtle yet significant shift in how we generate and consume power as a
nation. Over time, the addition of solar roads in cities passively generates billions of
kilowatt-hours that are consumed at that same location. In effect, this lowers the
amount of electricity a city consumes from external power infrastructure, a trend that
increases indefinitely as routine road work continues into the future. As cities
continually reduce their external power consumption while existing power
infrastructure remains intact, circumstances emerge where a large increase in
production capacity can create a massive abundance of electricity. Which brings us back
to LFTRs.

Liquid Fluoride Thorium Reactors are the first technology of Universal Energy because
they're the most powerful and affordable means to generate electricity in the smallest
platform that fits the framework's requirements. Solar roads are the second technology
because they're passive electricity generators that can piggyback onto our existing road
surfaces and be paid for by our already-present road budgets, while allowing us to
build next-generation road networks that truly function as a smart electric grid and
delivery system for municipal utilities. As solar road technology becomes less expensive
over time and accelerates in scale, it can eventually zero out the electricity a city
consumes from external sources, and may one day transform cities into net power
producers.

As LFTRs continue to generate electricity alongside the electricity we can generate from
desalinated water (more on that in Chapter 6), this supercharges electricity generation
nationwide. And as solar roads shrink the demand cities place on electric grids, that
increased generation will have no choice but to create an immense abundance of
supply, which is the crux of how Universal Energy works: maximize supply while
dramatically reducing load. This will cause the price of electricity to plummet,
ultimately providing the base of the framework to produce the inexpensive electricity
necessary to cost-effectively synthesize civilization's essential resources.

This will only accelerate through greater adoption of LFTRs and solar road technology,
for any expansion in production beyond that point is subject to a multiplier effect
which, deployed over a large enough scale, prevents increases in energy consumption
from catching up with the rate of energy generation. That is the breakthrough that lets
us accomplish goals previously impossible due to energy costs.

While multiple problems could be solved as a result, the first and most important
among them involves seawater desalination and hydrogen production, which is where
this discussion will go from here.

"I believe that water will one day be employed as fuel, that hydrogen and oxygen which
constitute it, used singly or together, will furnish an inexhaustible source of heat and light."

- Jules Verne

Chapter Four: Water and Hydrogen

Of the resources facing impending scarcity, water and fuel are the most serious, as they
are integral to growing, processing and delivering food. Consequently, these are the
resources that stand to drag us into conflict when they run low, which is unsurprisingly
why we've been fighting over them throughout most of history. Universal Energy is the
solution to this problem because it generates enough power to synthesize these
resources in that order: water and fuel, and as a result, food, enabling us thereafter to
synthetically manufacture building materials. How so? It all starts with a trip to the
ocean.

To begin, it's important to clarify that we aren't facing a water crisis in and of itself;
we're facing a fresh water crisis. 71% of the planet's surface is covered by water, yet less
than 2% of that water is fresh (and 1.6% of that is locked in polar ice). For our purposes,
the remaining 0.4% is the only percentage that has historically mattered. Thanks to
modern seawater desalination technologies, that is no longer true today.

The desalination of seawater is a well-proven concept. The same is true of extracting
hydrogen fuel from water via electrolysis (running an electric current through water to
chemically separate it into hydrogen and oxygen). Yet both processes require large
amounts of energy, which has traditionally made them expensive. The electricity
generated by LFTRs and solar roads removes energy limitations as a constraining
factor, allowing us to synthetically produce fresh water and hydrogen fuel from
seawater on a large scale. This begins with a system known as Multi-Stage Flash
Distillation (MSFD), shown by this diagram:

An MSFD facility features a series of interconnected chambers (referred to as stages) that are set at varied temperatures and pressures relating to the boiling point of water. Seawater is pumped in through one end and heated to a certain temperature. Once at the right temperature, the seawater is pumped through valves into subsequent stages at different temperatures and pressures. As pressure and temperature are the factors that determine when water boils, this process forces the water to instantly flash into steam, which is then collected via a condenser and turned back into liquid in the form of fresh water.
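To make the pressure-boiling relationship concrete, here is a small sketch using the Antoine equation, a standard empirical vapor-pressure formula (the constants below are the commonly tabulated values for water between roughly 1 and 100 °C; the 100 mmHg stage pressure is an illustrative assumption, not a figure from this book):

```python
import math

def boiling_point_celsius(pressure_mmhg):
    """Boiling point of water at a given ambient pressure, via the
    Antoine equation: log10(P) = A - B / (C + T), solved for T.
    Constants are the commonly tabulated values for water, 1-100 C."""
    A, B, C = 8.07131, 1730.63, 233.426
    return B / (A - math.log10(pressure_mmhg)) - C

# At standard atmospheric pressure (760 mmHg), water boils near 100 C.
print(round(boiling_point_celsius(760), 1))   # ~100.0
# In a reduced-pressure stage of ~100 mmHg, seawater heated past ~52 C
# is already above its boiling point, so it flashes to steam on entry.
print(round(boiling_point_celsius(100), 1))   # ~51.6
```

This is why each successive stage can run at a lower pressure: the same hot brine keeps flashing into steam without any additional heating.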

From there, the remaining hot brine is pumped back into the system to
counterflow with the influx of cold seawater to heat it up, recycling a majority of heat
energy in the process. What waste remains is essentially very salty water, which can be
evaporated to leave only salt.

MSFD is common today; 60% of all desalinated seawater in the world is produced through this method in more than 18,000 facilities worldwide. However, MSFD requires large amounts of energy to function, translating to higher operating expenses. This has made MSFD implementation more difficult to justify at larger scales. Universal Energy turns this from an expensive system into the exact opposite, giving us the ability to produce unlimited amounts of fresh water as a function of inexpensive, abundant energy.

The use of the word unlimited is intentional in this context, and it is not hyperbole. There is a vastness to the oceans that 71% of Earth's surface does not do justice to, as they dwarf terrestrial landmasses by a factor of nearly four. Only 0.4% of water on Earth is both fresh and accessible, and while its functional scarcity grows by the day, human civilization takes over a decade to consume that amount in its entirety.

With that in mind, we would have to increase our water consumption by thousands of percent for large-scale desalination efforts to even measurably impact sea levels (especially since the water cycle would inevitably return desalinated water to the ocean). And even if we could somehow do that, it would be to our benefit anyway, since sea levels are rising due to climate change.
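A back-of-the-envelope estimate bears this out. Using the standard figure of roughly 3.6 × 10^14 m² for ocean surface area, and the 500-billion-gallon annual desalination program used as an example later in this chapter, the water level change works out to a few micrometers per year, even if none of the water ever returned:

```python
GALLONS_TO_M3 = 0.00378541      # one US gallon in cubic meters
OCEAN_SURFACE_M2 = 3.61e14      # approximate surface area of the world ocean

annual_gallons = 500e9          # illustrative large-scale desalination program
volume_m3 = annual_gallons * GALLONS_TO_M3

# Height change if that entire volume were permanently removed from the
# ocean (in reality the water cycle returns most of it).
drop_m = volume_m3 / OCEAN_SURFACE_M2
print(f"{drop_m * 1e6:.1f} micrometers per year")   # ~5.2 micrometers
```

For comparison, sea levels are currently rising by roughly three millimeters per year, several hundred times more than this hypothetical withdrawal.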

Questions regarding the environmental impact of MSFD (both on local ecosystems and the ocean as a whole) are of course valid. But any concerns arising from them tend to center on inferior and unnecessary means of operation. From a conceptual standpoint, MSFD doesn't do anything except place a pipe in the ocean and suck in seawater, and rather than one large pipe that might risk sucking in marine life, a smaller series of filtered pipes designed to minimize environmental impact can be used.

As these intake systems can operate twenty-four hours a day, a large volume of water can be produced through a slow yet steady flow, meaning the intake need not be strong enough to interfere with the local ecosystem. The greatest environmental impact of MSFD plants today usually involves dumping waste brine, along with the toxic chemicals used to pre-treat seawater, back into the ocean. These steps need not be taken with modern facilities for two primary reasons:

1. Chemical pretreatment of seawater is not necessary in modern MSFD plants.
Older models have sometimes introduced chemicals to soften up water to make
it easier to heat, but this was done to offset energy costs, a factor that Universal
Energy effectively mitigates.

2. Currently, some multi-stage flash facilities pump waste brine back into the ocean,
which raises salinity levels of the local ecosystem and risks causing varying
degrees of environmental damage. This step is unnecessary with Universal
Energy, as we now have the excess energy to boil off waste brine and leave only
salt as a byproduct.

That latter point presents an important question, though: if we were to desalinate seawater on a large scale, how do we deal with all of the leftover salt? One of two ways: responsibly introduce it back into the ocean or sell it. Here's how that could work:

Let's say our implementation of Universal Energy desalinated a total of 500 billion gallons of water annually. Each gallon of ocean seawater contains roughly 4.5 ounces of salt. Therefore, 500 billion gallons of seawater would contain 2.25 trillion ounces of salt, or 140.63 billion pounds. That's a lot of salt, but our salt consumption as a nation is equally high.

According to the US Geological Survey, the United States consumed 69.5 million metric tons of salt in 2015 for all purposes (roads, industry, food, etc.), which translates to 153.2 billion pounds. This means a 500-billion-gallon annual desalination effort would yield around 92% of our annual salt consumption. At an estimated price of $40-$50 per metric ton (2,204 lbs), assuming $40 per ton, 140.63 billion pounds would yield roughly $2.5 billion in annual salt sales.
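The arithmetic above can be checked in a few lines (the 4.5 oz/gallon salinity and $40/ton price are the same rough figures used in the text):

```python
OUNCES_PER_POUND = 16
POUNDS_PER_METRIC_TON = 2204.62

annual_gallons = 500e9        # illustrative desalination volume
salt_oz_per_gallon = 4.5      # rough salt content of ocean seawater
salt_lb = annual_gallons * salt_oz_per_gallon / OUNCES_PER_POUND

us_consumption_lb = 69.5e6 * POUNDS_PER_METRIC_TON   # USGS 2015 figure

share = salt_lb / us_consumption_lb
revenue = salt_lb / POUNDS_PER_METRIC_TON * 40       # at $40 per ton

print(f"{salt_lb / 1e9:.1f} billion lbs of salt")    # ~140.6
print(f"{share:.0%} of US consumption")              # ~92%
print(f"${revenue / 1e9:.2f} billion in sales")      # ~$2.55 billion
```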

At the same time, it's also notable that taking too much salt out of the ocean may harm marine ecosystems as fresh water eventually returns to the ocean. If that possibility proves true, this framework would suggest that we instead gradually release salt back into the ocean in a responsible manner, mitigating environmental impact while still dealing with the excess.

With these concerns addressed, MSFD technologies can be harnessed to produce unlimited amounts of fresh water for any use, and we can do so both inexpensively and in capacities that have low to negligible environmental impact. This can effectively end water scarcity worldwide. And we can do the same for fuel.


Hydrogen is the most abundant element in the universe. It's light, clean and highly combustible, with an energy-per-mass ratio greater than that of any known fossil energy source. This makes it a flexible and useful alternative fuel to petroleum if we go about sourcing and storing it in the right way.

Hydrogen production is currently a $100+ billion industry, yet most methods to produce hydrogen today involve extracting it from fossil fuels through high-temperature steam reformation, a process that is environmentally destructive and likely to prove unsustainable as those fuels become more expensive in the future.

Yet another option, electrolysis, becomes more attractive in a world with unlimited
cheap energy. Electrolysis is a process that introduces an electrolyte (which could just
be seawater) and an electric current into water to chemically separate it into oxygen and
hydrogen gas. Like multi-stage flash distillation, it is not a new concept. The process has
been in use since the 1700s to extract various substances, hydrogen among them.

However, extracting hydrogen through electrolysis requires high levels of energy to break the molecular structure of water, a problem that has previously made commercial production expensive. Universal Energy accordingly mitigates this cost factor, as it does with all other systems within the framework. As a result, hydrogen through electrolysis would become completely viable.
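The energy requirement here is easy to quantify. Splitting liquid water takes, at thermodynamic minimum, its enthalpy of formation, about 285.8 kJ per mole. A rough sketch of what that means per kilogram of hydrogen follows (the ~70% electrolyzer efficiency and the two electricity prices are illustrative assumptions, not figures from this book):

```python
DELTA_H_KJ_PER_MOL = 285.8     # enthalpy to split one mole of liquid water
H2_KG_PER_MOL = 2.016e-3       # molar mass of hydrogen gas
KJ_PER_KWH = 3600

# Thermodynamic minimum electricity per kilogram of hydrogen produced
min_kwh_per_kg = DELTA_H_KJ_PER_MOL / H2_KG_PER_MOL / KJ_PER_KWH
print(f"minimum: {min_kwh_per_kg:.1f} kWh/kg")        # ~39.4 kWh/kg

# A real electrolyzer is imperfect; assume ~70% efficiency
practical_kwh_per_kg = min_kwh_per_kg / 0.70

# Fuel cost then scales directly with the price of electricity
for price in (0.10, 0.02):     # $/kWh: typical grid vs. cheap surplus power
    print(f"at ${price:.2f}/kWh: ${practical_kwh_per_kg * price:.2f} per kg H2")
```

The takeaway is that the cost of electrolytic hydrogen is almost entirely the cost of electricity, which is exactly the input Universal Energy is designed to make cheap.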

Conceptually, electrolysis isn't particularly complex. You could set up a simple facility in your garage if you wanted to (just don't smoke!). But to produce enough hydrogen for use as a viable fuel on a nationwide or global scale, production would need to happen in an industrial setting. Through Universal Energy, this can occur on an effectively unlimited scale, especially since desalination and hydrogen production can be paired to work alongside LFTRs, a concept we'll touch upon shortly.

Once produced, hydrogen can be harnessed to power and/or make possible an array of systems and processes that we'll be familiarizing ourselves with throughout this writing.
But even so, challenges to using hydrogen remain. Production is only one half of the
equation the other is how to contain and transport it, considerations of no small
significance. Because hydrogen is highly reactive, it bonds to other substances that
would contaminate its use as fuel. Due to its volatility, it requires storage in
containment tanks at high pressures.

While metal tanks work in a laboratory or industrial setting, the weight of these tanks and the safety risks presented by the explosive nature of hydrogen have made this approach sketchy for normal transportation. In addition to production costs, these problems have hamstrung efforts to use hydrogen as a fuel supply outside of limited areas. Thankfully, recent advancements have given us superior storage alternatives to today's methods, with the most promising discussed as follows:

Shrink-wrap it: A British company by the name of Cella Energy has invented a method
to encapsulate hydrogen compounds within tiny pellets that are 30 times smaller than a
grain of sand. These compounds are normally unstable and would degrade when
exposed to air, yet when encased within a polymer coating they are protected from
outside elements. These pellets are then aggregated and placed within larger pellets
that are roughly the size of a pencil eraser.

In a vehicle's fuel system, the larger pellets break apart to release the smaller hydrogen-containing pellets, which are small enough to be suspended in a liquid. While stable to the touch at room temperature, they dissolve in solutions at inner-engine temperatures, releasing hydrogen fuel. From there, the spent pellets are stored in a tank for recycling, which can be a core function of the fueling process.

This image shows a two-tiered refueling concept, with one pipe pumping hydrogen pellets into a system, and the second extracting waste pellets for reprocessing. Currently, Cella Energy estimates that hydrogen fuel through this method could output the energy equivalent of a gallon of gasoline for $1.50. Not only is this well below the current price of gas, this cost would likely fall further over time as Universal Energy makes hydrogen less expensive to produce.

Synthetic oil: Universal Energy's approach to solving resource scarcity is based in large part on replacing oil as a fuel source, due both to its finite supply as a fossilized product and its contributions to climate change. But oil has several important uses besides fuel: it's essential for making plastics and synthetic materials, and it's a critical ingredient in numerous chemical engineering processes. We use oil for these applications because oil is a type of chemical known as a hydrocarbon, and hydrocarbons have extensive versatility both in organic chemistry and for combustion inside engines. Oil is presently the abundant hydrocarbon of our time, so as the shoe fits, it's what we use. But that doesn't have to be the case, especially as oil becomes economically scarce in the future.

At its core, a hydrocarbon is nothing more than hydrogen and carbon bound together in a chemical compound. And with modern technology it's straightforward for us to synthesize hydrocarbons if we have an abundant supply of hydrogen, which Universal Energy provides as a key resource. There are several methods in chemistry that achieve this goal, and all are worth reviewing in detail if you're inclined to learn more. Today, these hydrocarbons are usually sold as synthetic fuels, but fuel is not necessarily the best application for synthetic oil.
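As a toy illustration of how hydrogen feeds hydrocarbon synthesis, consider the Sabatier reaction (CO2 + 4H2 → CH4 + 2H2O), which produces methane, the simplest hydrocarbon; longer chains come from routes like Fischer-Tropsch. This sketch just works out the mass balance:

```python
# Molar masses in g/mol
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.015

# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
h2_per_kg_ch4 = 4 * M_H2 / M_CH4       # kg of hydrogen per kg of methane
co2_per_kg_ch4 = M_CO2 / M_CH4         # kg of CO2 consumed per kg of methane

print(f"{h2_per_kg_ch4:.2f} kg H2 per kg CH4")    # ~0.50
print(f"{co2_per_kg_ch4:.2f} kg CO2 per kg CH4")  # ~2.74

# Sanity check: mass of inputs equals mass of outputs
inputs = M_CO2 + 4 * M_H2
outputs = M_CH4 + 2 * M_H2O
assert abs(inputs - outputs) < 0.05
```

Note that the reaction also consumes carbon dioxide, meaning hydrogen-fed hydrocarbon synthesis can double as a sink for captured CO2.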

With an abundant supply of hydrogen, we can make specialized synthetic oil for use in advanced chemical engineering and material production. Instead of modifying oil extracted from the ground to produce ideal chemicals, we can instead engineer a synthetic oil's chemical composition from the ground up to solve specific challenges. This becomes all the more true with genetic modification of hydrocarbon-producing algae, which we'll look over in Chapter 7. That would give us an indefinite supply of custom-tailored chemicals that we can use for effectively any purpose. With that in place we can eventually create more sophisticated materials and chemical processes that advance our way of life, and we can do so in abundance without reliance on traditional fossil sources.

Superior storage: storing and transporting hydrogen in compressed form requires immense pressure, on the order of 482-690 bar (7,000-10,000 PSI). Currently, this is only possible through metal tanks, which have limited utility due to increased bulk and weight. Through Universal Energy, we have better materials.

Although we'll be reviewing materials further in Chapter Eight, one of the most noteworthy materials in the context of hydrogen is graphene, which serves several important roles in the Universal Energy framework. Conceptually, graphene is a one-atom-thick sheet of carbon that is structured in a way that is both ultra-strong and ultra-conductive. This allows graphene to function both as an extremely efficient battery and as a structural material that is 200 times stronger than steel at a fraction of the weight. As it can be made paper-thin while remaining bendable, graphene is well-suited to make storage tanks for hydrogen in vehicles and other machinery. Just as importantly, these storage mediums can be amorphous in shape, providing manufacturers with greater flexibility in how they're integrated within a fuel supply.

Fuel Cells: A hydrogen fuel cell is a means of producing electricity from a chemical fuel, in this case hydrogen, which in practice allows fuel cells to function as emission-free batteries. Fuel cell technology has been around since the 1950s, and has steadily grown since then into a billion-dollar market. As numerous proven fuel cell designs have existed for years, Universal Energy doesn't overhaul fuel cells by themselves. However, it does make them less expensive, easier to build and easier to expand into varied sectors of our economy, due both to an abundant supply of hydrogen and advanced synthetic materials, to say nothing of reduced energy costs.

Although hydrogen fuel cells are often looked to as a replacement for oil, Universal Energy envisions their greatest utility to be in remote areas that are traditionally hostile to power generation. There are several locations and circumstances where it's not feasible to rely on local power systems and where solar isn't possible (war/disaster zones, remote research facilities, long-voyage ships, space travel, etc.), yet fuel cells can provide effectively indefinite energy as long as there is an abundant supply of hydrogen. With superior storage mediums, that supply can be provided and resupplied far more easily than heavy liquid fuel. Future advances in graphene battery technology can complement this possibility (perhaps even leading to a graphene/fuel cell hybrid at even higher levels of performance), allowing for robust energy storage even when far away from civilization's amenities.

Hydrogen fuel cells also work well for powering vehicles, which alongside Cella Energy's pellet system is a point warranting special mention. Universal Energy foresees electric power as the future of personal vehicle transportation, due in large part to graphene's ability to double both as a structural material and as a means to store electricity. Additionally, hydrogen fuel cells can be dangerous if violently ruptured in an accident, so if they do power vehicles they must be properly secured.

That makes hydrogen fuel cells useful for larger commercial vehicles with more energy-
intensive tasks, especially those in remote areas. Smaller, personal vehicles would in
turn be better suited for straight electric, or electric with a smaller array of hydrogen
fuel cells to recharge batteries as a backup power source. Ultimately, it will be up to
industry and consumer choice to decide which option works better over time; Universal
Energy simply provides both as core products.

Combined, this approach makes it possible to provide water, fuel and electricity at the same time from the same resource. Yet that reflects only the products of this approach, not its underlying strategic value. As we've discussed previously, a key component of Universal Energy is its intention to deploy energy technologies strategically so that they can work together as a team to become greater than the sum of their parts. With all of the technologies discussed thus far in place, we can do just that, bringing us to a concept we'll refer to as Energy Plants.


Universal Energy's technologies are chosen because of their ability to generate a lot of energy, but also because they can work together. This idea is commonly referred to as cogeneration, but it's not something that happens much today. Our current power infrastructure is just there: decentralized, ad-hoc entities that only generate electricity without taking advantage of any of the excess energy produced by these facilities. Most power plants, at least of the coal-fired variety, have an efficiency of around 33%. This means that 67% of their generated energy is wasted as heat! That's an enormous loss of energy that could be used to power other systems.

Because of that, the Universal Energy framework is built upon this core
mindset: symbiotic operation is of critical importance.

This mindset calls for energy production systems to be built close to each other. And as nearly all of them deal with electricity, water and heat, it becomes possible to use the functions and waste energy of one technology to help power another. As a framework, Universal Energy doesn't operate as isolated technologies, but rather as a series of facilities that can be deployed together as a single unit. If these facilities are combined, they become Energy Plants, where instead of just producing electricity they also provide fresh water and hydrogen fuel at the same location.

Here's how this can work:

As far as power plants go, LFTRs get hot. The key word in "Molten Salt Reactor" is molten, so by hot, I mean really, really hot. Heat is useful for several applications besides generating power, and as the reactor's heat exchangers are far away from anything radioactive, this heat can be harnessed to serve other functions.

The first thing we can do is integrate the heat exchangers of LFTRs with the seawater intake of MSFD facilities. So instead of using electricity to heat seawater, the already-present waste heat from LFTRs and the internal brine heater of the MSFD plant will take care of it. What that means is that the seawater requires little additional energy to flash into steam, saving a massive amount of energy in the process.
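To get a feel for the scale, here is a rough estimate of how much water the waste heat of a single reactor could desalinate. Every number here is an illustrative assumption, not a figure from this book: a 1 GW-electric reactor at 45% thermal efficiency, and a multi-stage plant that recycles heat well enough to spend only about one-eighth of water's latent heat of vaporization per unit of output (a "gained output ratio" of 8, typical of the technology):

```python
ELECTRIC_W = 1e9                 # assumed 1 GW-electric reactor
THERMAL_EFF = 0.45               # assumed thermal-to-electric efficiency
LATENT_HEAT_J_PER_KG = 2.257e6   # heat of vaporization of water
GAINED_OUTPUT_RATIO = 8          # heat recycled across MSFD stages
LITERS_PER_GALLON = 3.785

# Heat rejected by the reactor after electricity generation
waste_heat_w = ELECTRIC_W * (1 / THERMAL_EFF - 1)

# Effective heat spent per kg of fresh water in a multi-stage plant
heat_per_kg = LATENT_HEAT_J_PER_KG / GAINED_OUTPUT_RATIO

kg_per_second = waste_heat_w / heat_per_kg
gallons_per_day = kg_per_second * 86400 / LITERS_PER_GALLON
print(f"~{gallons_per_day / 1e6:.0f} million gallons per day")
```

Under these assumptions, heat that would otherwise be dumped into a cooling tower desalinates on the order of a hundred million gallons a day, before the plant's electrical output is touched at all.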

Doing this is also useful for extracting hydrogen, as there is already an ample supply of
heat, electricity and water at the facility. This energy can be used to process and/or
compress hydrogen into a usable fuel and also pump water to remote locations via the
National Aqueduct (next chapter). This presents several additional opportunities that
are not possible with our current power and energy infrastructure:

1. Constructing a single facility that can produce multiple types of energy and
resources at one location is significantly less expensive than if these facilities were
decentralized at separate locations.

2. Condensing multiple functions into a single facility avoids expenses of transmission and transportation, increasing overall efficiency.

3. Symbiotic design helps establish ideal standards for implementation and operation.
This reduces costs and helps encourage greater adoption of the technologies behind
Universal Energy.

4. The water produced from Energy Plants can come out hot, an essential factor that
will become important within the next few chapters.

Taking the best aspects of the technologies behind Universal Energy and designing them to work together has promising benefits, but ultimately these are simply addendums to a far longer list. Because at the core, what we have with these systems, what we can have today, is something that we have never before had in human history: the ability to synthetically, sustainably and inexpensively produce as much electricity, water and fuel as we could ever need by our own hand.

It is difficult to overstate the implications of reaching this goal, for it changes the very
foundations and constraints of our existence. It makes irrelevant so many deeply
entrenched social and economic problems that have contributed to the ugliness that has
plagued us since the beginning of time. We can now put all of that in the past, and
instead focus on building systems that solve core human problems on large scales. And
beyond energy, the next to consider are those facilitating auxiliary resource synthesis.

This began with water and fuel, but it can be extended further to agriculture, chemical engineering, recycling processes and next-generation building materials. The next stage toward getting there comes through something called the National Aqueduct, and we'll devote the next chapter to reviewing how it works.

Water, the hub of life. Water is its mater and matrix, mother and medium.

Water is the most extraordinary substance. Practically all its properties are anomalous,
which enabled life to use it as building material for its machinery. Life is water dancing
to the tune of solids.

- Albert Szent-Gyorgyi

Chapter Five: The National Aqueduct

So far, the Universal Energy framework has given us three of the five critical resources:
electricity, water and fuel. But while electricity and fuel can be transported relatively
easily, water is a different story. We might be able to produce a lot of water at the
coasts, but how do we get it to every location that needs it inland?

Universal Energy's proposed answer to that question comes through what it calls the National Aqueduct. It is a nationwide array of modular, above-ground pipelines and storage facilities intended to transport billions of gallons of water anywhere in the country. It also has the secondary function of acting as a nationwide battery for renewable energy, an important feature that we'll go over next chapter.

Thanks to three things we've been perfecting for the past 50 years: oil pipelines, high-voltage power lines and interstate highway networks, we not only have the free space and capabilities to build this system, we've already built its equivalent for other substances at higher stakes and higher difficulty. To elaborate, consider a series of three images. First, if you recall from Chapter Three, our nation has a highway system that connects every area of our country together:

Second, consider a map of nationwide power transmission lines:

Third, consider a map of nationwide fossil fuel pipelines and refinery networks:

From these maps, we can derive two important conclusions:

1. Highways and high voltage power lines give us plenty of free space to run water pipelines. As we saw with solar roads, our road networks provide ample open space to install solar panels while removing the requirement to buy land, for roads are generally owned by public services that have exclusive authority to build on them.

Roads and highways also have clearance at each side and are generally flat and straight, a trait shared by high-voltage power line networks. This gives us thousands of miles of space to build a National Aqueduct. As this space has been pre-cleared of potential obstructions beforehand, construction in these locations faces fewer obstacles than it would elsewhere.

Additionally, their proximity to power systems (either solar roads, LFTRs, Energy Plants or power lines) gives the National Aqueduct plenty of energy to power sensors, pump stations, purification mechanisms and heating systems to keep water hot. This energy can be used in conjunction with pipeline-mounted solar panels and turbines that generate electricity, which we'll discuss alongside the Aqueduct's battery function next chapter.

2. Running water pipelines is feasible. We know that the National Aqueduct will work as described because we've already built its equivalent for fossil fuels. We have thousands of miles of oil pipelines that work in the same way water pipelines would, and oil pipelines must be built as unique entities and to a far higher environmental standard than water requires.

Water pipelines can come factory-prefabricated and be designed for rapid, modular construction. Environmental risks are similarly reduced, as the only substance a leaking water pipeline would spill is fresh water, presenting minimal impact to the environment (if any). This would allow us to build water pipelines at a lower price than oil pipelines. We can also use the lessons we've learned with oil pipelines to get a head start, as the supporting expertise already exists.

These factors make developing a National Aqueduct more straightforward, as much of the work has already been done for us in terms of research and development, engineering and methods of implementation. But to have it fit our needs in full, we'll need to establish a few requirements for the National Aqueduct, as well as go over the mindset it employs to accomplish its intended goals:

Efficiency, reliability and affordability: Humanity has huge fresh water requirements, many trillions of gallons a year. While Universal Energy is capable of producing that much water, delivering it effectively requires the system to be efficient and inexpensive. A water delivery system must also be reliable, as no industry or city can depend on a water source whose reliability is questionable. Therefore, guaranteed uptime of this system is essential.

Scale of delivery: Whatever system we use to transport water must work over thousands of miles, as water must be delivered from the coasts to areas deep inland. And as we do not live on a flat landscape with consistently warm weather, this system must also be deployable over varied terrain and climates, especially cold climates.

Control of operation: A water delivery system must provide control mechanisms that
can be engaged locally, as a centralized control structure would be incapable of
efficiently managing the water requirements of every agricultural and population
center throughout the country. There must also be redundancy in transportation, for
example, allowing a city in the central United States to receive water from multiple
routes in case one becomes incapacitated by some unforeseen event (tornado,
earthquake, etc.). This calls for a smart system approach that would have
sophisticated functions to both monitor and manage how water is distributed from the
point of desalination to the final point of consumption.

Modular construction and ease of maintenance: Modularity is essential to ideal system design, providing benefits and cost savings in terms of construction time, standardization, reliability and maintenance. Any system to transport water would have to meet these standards by allowing its components to be installed and/or replaced rapidly by design.

The National Aqueduct would meet all of these requirements by comprising a smart grid of above-ground pipelines, storage tanks and pump stations that would transport desalinated fresh water from the coast to any area we wished, on the order of trillions of gallons. These pipelines would feature interior turbines and solar panels that would generate immense amounts of electricity, a portion of which would be used to keep the water hot for thermoelectricity at night (hence the battery function we'll discuss shortly).

And if built, the National Aqueduct would enable us to provide for our national fresh
water needs without having to rely on rivers or natural water tables ever again.

Let me say that one more time:

The National Aqueduct would allow us to have unlimited fresh water. And it never again needs to come from the ground, a lake or a river unless we want it to.

This would allow us to render moot much if not all of the drought impact we've been experiencing as of late, and would also give natural water sources time to replenish, to the benefit of the environment as a whole.

This system conceptually consists of four primary components: production, transportation, storage and control. We'll go over each in that order:

Conceptual Overview of the National Aqueduct


Production is comprised of multi-stage flash distillation facilities (ideally as part of a greater Energy Plant), as described last chapter.


The transportation component includes a series of water pipelines and pumping stations contained in modular units of varying capacities. Instead of building single water pipes as unique entities, we could instead install factory-prefabricated pipe assemblies that are designed to couple together (think Legos).

The idea behind this approach is two-fold:

First, this would simplify construction of pipelines over long distances, as each pipe
assembly unit would be pre-constructed to a specific standard and designed for
straightforward, modular installation.

Second, should the need arise, it provides the ability to rapidly expand transportation capability with minimal overhead and construction costs. These pipelines would be insulated against environmental elements, could drain on demand, and could additionally contain a series of sensors that relay relevant data to the system's control component (described briefly below). As mentioned previously, these pipelines would also feature solar panels and turbines, but for sake of explanation we'll set those descriptions aside until the next chapter.

The sensors within the pipeline might report back the quality of water inside each pipe and whether or not it contains any contaminant (biological or otherwise). Pipelines could additionally be outfitted with sensors that would alert if they become compromised or were modified without authorization. Since each separate pipe within each pipeline assembly could have its own sensors that connect independently to a control network, water quality could be monitored and analyzed instantly on both local and national levels.

As sensor technology has reached levels of sophistication where sensitivity in parts per
billion (PPB) is common, the returned data would be detailed and useful to improving
water management. This, among other conclusions from sensor data, could influence a
range of actions from the control component of this system to ensure maximum
performance, reliability and security.


Storage includes arrays of containment tanks that accumulate water for delivery, acting
as the supply reservoir for a region. Rather than transport water directly from
production to areas of consumption (as we largely do with electricity), this system
would instead use storage tanks as a buffering / staging system, delivering water from
there and replenishing from production facilities as necessary.

These arrays of storage tanks would contain millions of gallons of water and could be installed throughout the country to provide water resources to every agricultural and population center that we have, ideally with multiple storage centers redundantly servicing multiple regions. In doing so, the storage component would provide several important functions:

1. Staging: it's difficult to guess with certainty how much water a region might use, as external factors such as weather, time of year and the state of the economy all impact how much water is consumed. This would rule out any system that delivers water directly, as maintaining supply and pressure over thousands of miles with inconsistent demand would be a nightmare.

However, used as a staging system, water storage tanks can contain enough water to
supply a region for a certain time period (say a week), which equips them to handle
unexpected spikes in demand. In turn, water would be supplied from production
facilities to maintain consistent levels.

This nods to the concept of constant resupply, which bears special mention in this
context. The National Aqueduct would be producing and pumping water 365 days a
year. This constant operation is key to meeting our immense water requirements.

For example: a common bathtub faucet has a flow rate of roughly 120 gallons per hour. If
that faucet were never to turn off, it would provide 2,880 gallons a day, 86,400 gallons
a month and more than a million gallons a year, just from your bathtub faucet.

With a water velocity of just 15 miles an hour, a 12-inch pipe can carry roughly 130
gallons per second. That translates to about 465,000 gallons an hour, 11.2 million gallons
a day, 335 million gallons a month and over 4 billion gallons a year. And that's from one
12-inch pipe. Imagine what an array of nine pipes could do, and imagine that pipeline
array multiplied by hundreds. A constantly running pipeline network could easily
transport hundreds of billions of gallons.
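These flow figures can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only: it assumes an idealized circular 12-inch pipe with water moving at a uniform 15 mph and 7.48 US gallons per cubic foot; real pipes, fittings and friction would lower the result somewhat.

```python
import math

# Sanity check of the constant-resupply figures above.
# Assumptions: circular 12-inch (1 ft diameter) pipe, uniform 15 mph flow,
# 7.48 US gallons per cubic foot.

GAL_PER_CUBIC_FT = 7.48

# Bathtub faucet running nonstop at 120 gallons/hour
faucet_daily = 120 * 24                  # 2,880 gallons/day
faucet_yearly = faucet_daily * 365       # ~1.05 million gallons/year

# 12-inch pipe at 15 mph
velocity_ft_s = 15 * 5280 / 3600         # 22 ft/s
area_ft2 = math.pi * (0.5 ** 2)          # ~0.785 sq ft cross-section
gal_per_s = velocity_ft_s * area_ft2 * GAL_PER_CUBIC_FT  # ~129 gal/s

print(f"faucet: {faucet_daily:,} gal/day, {faucet_yearly:,} gal/year")
print(f"pipe: {gal_per_s:.0f} gal/s, "
      f"{gal_per_s * 86_400 / 1e6:.1f}M gal/day, "
      f"{gal_per_s * 86_400 * 365 / 1e9:.1f}B gal/year")
```

Under these idealized assumptions, a single pipe moves on the order of 130 gallons per second, or roughly 4 billion gallons per year.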

2. Overflow / active water management. Because water demand is inconsistent in many
areas of the country, there will be times when one region has more water than it needs
or needs more water than it has. This requires the system to feature an overflow
component.

By design, water storage arrays would only be filled to roughly 70-80% of capacity,
which gives them the ability to accept more water from other regions should supply
exceed demand. Should a region consume more water than anticipated, overflow from
another region can be diverted to it on demand. By virtue of this component, the
National Aqueduct can maintain consistent uptime and a high degree of reliability.

3. Ultraviolet sterilization and filtration. Storing water for weeks on end can lead
to circumstances where microbial agents could conceivably be introduced and
contaminate the water. To address this, in addition to standard filtration mechanisms,
the Aqueduct's storage and transportation components could include a series of UV
lights that shine through stored water. Ultraviolet light, especially in high doses, is
reliably lethal to microbial lifeforms (bacterial and viral), allowing for long-term
water storage without fears of stagnation or outside contamination.

The control component is the brain and nervous system of the National Aqueduct, a
decentralized system that allows any region to control the flow of water in its
jurisdiction. This would be accomplished through a series of manned stations and
control centers at national, regional, state and local levels, which would act similarly to
the management functions of any public utility.

In this case, the control component would monitor the system to ensure consistency and
stability, and act accordingly if a contaminant, outside tampering or mechanical failure
were detected. In the event of a problem, the control component could direct the smart
grid of transportation pipelines and pumping stations to take action. These actions
might range from disabling, draining and isolating a certain pipe from transportation
lines, to even bypassing a sector or storage array completely. In the event of a spike in
demand or an overabundance of water, as touched on previously, water could be
routed from one storage array to another as necessary.

Also useful: the sensor network within the National Aqueduct can provide data for
models that would greatly benefit ideal operation and could, for example, predict
demand over time. A defining feature of the National Aqueduct is that once enough
water has been produced and stored, the production component only needs to resupply
what has been consumed, allowing desalination facilities to produce water only as
needed. Accurate modeling allows water to be resupplied intelligently, because we will
know when water will be used and at what levels based on time of year, allowing us to
address an anticipated shortage before it occurs.

Combined, the National Aqueduct is Universal Energy's means to deliver water
anywhere in the country, allowing us to live side-by-side with nature without seeing
the destructive results of unsustainable water use and natural drought cycles. From
here, the secondary component of this system comes into play, and that is its ability to
store electricity generated by renewables on a large scale. This writing refers to that
feature as the world's largest battery, and next chapter, we'll explore why.

A ten times increase in the weight-oriented density of batteries would enable so many other
moonshots, and we will start that moonshot if we can find a great idea.

- Astro Teller

Chapter Six: The World's Largest Battery

The National Aqueduct is designed to transport an unlimited amount of desalinated
water from the coasts to anywhere in the country. Powered by Universal Energy as
conceptually designed, it can deliver on that promise. But like many of the systems
included within this framework, the National Aqueduct is not a one-trick pony. It has a
second function: to generate additional electricity and act as a battery. It might sound
strange at first to consider a nationwide array of water pipelines a battery of any sort,
but modern technology gives us the ability to do exactly that.

By taking advantage of three aspects of the transportation system (surface area for solar
power, water flow for hydroelectric power and water heat for thermoelectric power), we
have a platform on which to build external electricity generation and storage systems.
Best of all, like Energy Plants, these systems can work together symbiotically, presenting
additional benefits.

To explain how, well go through each of these three aspects in order:

Surface area for solar panels. Universal Energy calls for water transportation pipelines
to be built at the side of roads or under currently existing long-distance power lines.
That's because this land is usually state-owned and doesn't need to be purchased, and
also because it's pre-cleared of obstructions. But as this benefits solar roads, it can also
do the same for water pipelines. If you recall from the last chapter, water pipelines are
designed to be installed in prefabricated arrays, which comprise a relatively flat surface.
So, what if we were to cover that space with solar panels? That result is the final
intended form of water transportation pipelines, as shown by the following image:

This stands to generate a massive amount of electricity across the thousands of miles
pipelines would travel, which alongside solar road networks can help reinforce a
redundant, smart electric grid.

How much electricity? Let's assume solar-enabled water transportation pipelines had a
surface width of 10 feet. Multiplied by a mile, that's 52,800 square feet. As this surface
wouldn't require a glass covering for vehicle traffic, the latest solar panels could be
used. A SunPower E19 solar panel has a peak output of 320 watts and a surface area of
around 17.5 square feet, coming to roughly 18.3 watts per square foot. At peak output, a
mile of solar pipeline surface would generate 966,000 watts. At 5.5 peak sun hours per
day, that comes to 5,300 kilowatt-hours a day, or 1.94 million kilowatt-hours per year,
enough to power 177 homes. Across thousands of miles of water transportation lines,
the solar panel functionality of water pipelines could power millions.
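As a sketch of that arithmetic: the panel specs and sun-hour figure come from the text above, while the ~10,950 kWh/year figure for an average home's annual consumption is an assumption introduced here to back out the "177 homes" result.

```python
# Per-mile output of pipeline-mounted solar panels, using the figures
# above: a SunPower E19 panel (320 W over ~17.5 sq ft) covering a strip
# 10 feet wide and one mile long, with 5.5 peak sun hours per day.

surface_ft2 = 10 * 5_280                 # 52,800 sq ft per mile
watts_per_ft2 = 320 / 17.5               # ~18.3 W/sq ft
peak_watts = surface_ft2 * watts_per_ft2 # ~966,000 W

kwh_per_day = peak_watts / 1_000 * 5.5   # ~5,300 kWh
kwh_per_year = kwh_per_day * 365         # ~1.94 million kWh

# Assumption: an average US home uses ~10,950 kWh/year
homes = kwh_per_year / 10_950            # ~177 homes per mile

print(f"{peak_watts/1_000:.0f} kW peak; "
      f"{kwh_per_year/1e6:.2f}M kWh/year; ~{homes:.0f} homes")
```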

But that prompts a relevant question: why use solar roads at all if we can put solar
panels on water pipelines? Because solar roads function ideally in cities, whereas water
pipelines would function better in less-populated areas. A downtown metropolis isn't
going to have above-ground water pipelines at the side of roads, and most water pipes
in cities are below ground (or, in the case of Solar Roadways, in future conduit
channels). On the other hand, solar roads are least justifiable from a price-to-energy
standpoint in rural areas, whereas the opposite is true of solar-enabled water
transportation pipelines.

Over time, this framework holds that solar roads and solar-enabled pipelines would
both be adopted in their entirety, but at the onset, the fact that each technology shines
where the other does not is especially beneficial for our purposes. This is all the more
true once other power generation methods are considered.

Integration With Wind Turbines. One renewable technology notably absent from the
Universal Energy framework thus far has been wind. And to be fair, wind's presence in
the framework isn't so much absent as it is fashionably late. That's because, like solar,
wind's utility is limited considerably by distance and other environmental factors. The
capability of road panels to generate solar power made solar the natural candidate to
pair with LFTRs, as road work in cities costs a king's ransom and solar panels can offset
that expense; their ability to pair with the National Aqueduct is simply icing on top.

But that latter part also applies to wind, and it does so in a big way. Wind power boasts
its greatest strengths in open, windy terrain, which describes the majority of our
highway networks nationwide. As Universal Energy calls for heated water to be
pumped alongside highways, underwriting a secondary electric grid that works in
tandem with solar roads, wind power can plug directly into this system on demand.

On the wide, open expanses of rural states, this can generate a tremendous amount of
electricity that conveniently has a centralized location for both transportation and
storage. Additionally, wind used in this manner can also take advantage of the public-
land benefit of highway deployment, sparing the need to buy new land, which reduces
wind's operational costs while increasing its overall cost-effectiveness.

Internal hydroelectric power. With solar panels on top of water pipeline arrays, the
pipes themselves could be fitted with internal turbines to generate electricity from the
water flow. Hydroelectricity is highly effective as a power source, and by miniaturizing
turbines within prefabricated pipeline arrays, nearly every aspect of the water
transportation process can be harnessed for power. This is already occurring today
through inventions made by a company named Lucid Energy in Oregon, which has
installed modular turbine assemblies within prefabricated water pipes to generate
electricity.

Across pipeline arrays that travel for thousands of miles, water flow would generate
electricity 24/7, further electrifying the entire Aqueduct. To what degree is speculative,
but the potential generating capacity would be massive, many times that of the Hoover
Dam, perhaps even hundreds of times greater. But unlike other hydroelectric power
stations, this method is environmentally friendly and highly reliable, as opposed to
man-made reservoirs like Lake Mead (which powers the Hoover Dam), which today is
rapidly depleting due to drought and unsustainable water use.

Hydrothermal power. As water has a high specific heat, once it gets hot, it stays hot for
a long time. At the scale of billions of gallons, water stays hot for an extremely long
time: days, even weeks. One of the key functions of an Energy Plant is to keep water hot
when it comes from a multistage flash distillation facility, and in turn pump it hot.

But after it's pumped through the National Aqueduct, water will eventually cool as it
reaches far-away destinations. To address this, we'll need to rely on the energy-
generating features mentioned previously. By using the excess energy generated by
solar roads, pipeline-mounted solar arrays, wind turbines and internal pipe turbines, we
have the energy necessary to keep water hot throughout the entire Aqueduct. This is
beneficial for three important reasons:

First: water will reach its destination hot, sparing the energy needed to heat it within
residential and commercial water heaters. By virtue of the Aqueduct's control
component, not all water would need to be delivered to residences at high temperature,
but the fact remains that it can arrive hot at any given time. This gives municipal water
managers more flexibility in how they route water, and saves considerable money on
heating costs.

Second: keeping water hot also prevents any chance of water freezing within the
pipelines during winter months. It also prevents snow from covering the pipeline-
mounted solar panels in winter, allowing for residual electricity generation year-round.

Third: if the entire Aqueduct were heated, this would store a tremendous amount of
energy that could be converted into electricity, acting functionally as a battery, the
world's largest by far. Let's say, for example, that we store 500 billion gallons
throughout the National Aqueduct, and let's say we heat that water to 200 °F. Using the
worksheets provided by the helpful folks over at Engineering Toolbox, we'll conclude
that 1 gallon of water at 200 °F contains 1,660 BTU of energy. Across 500 billion gallons,
that comes to 830 trillion BTU, or 875.7 billion megajoules. That's effectively a volcano.
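Those unit conversions can be verified directly. This is a minimal sketch assuming the Engineering Toolbox value of 1,660 BTU per gallon at 200 °F and, importantly, lossless heat-to-electricity conversion; a real heat engine would recover only a fraction of this.

```python
# Thermal energy stored in 500 billion gallons of 200 °F water.
# Assumes 1,660 BTU/gallon (Engineering Toolbox figure cited above)
# and ideal conversion; real turbines would recover only a fraction.

GALLONS = 500e9
BTU_PER_GALLON = 1_660
JOULES_PER_BTU = 1_055.06

total_btu = GALLONS * BTU_PER_GALLON           # 8.30e14 (830 trillion BTU)
total_mj = total_btu * JOULES_PER_BTU / 1e6    # ~8.76e11 MJ
total_kwh = total_mj / 3.6                     # 1 kWh = 3.6 MJ -> ~2.43e11

print(f"{total_btu:.2e} BTU = {total_mj:.3e} MJ = {total_kwh:.2e} kWh")
```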

Converted into electricity, that's up to 243 billion kilowatt-hours from heat energy alone
(assuming ideal conversion). Combined with the hydroelectric and solar functions of
water pipeline arrays, it's easy to see how generating trillions of kilowatt-hours over
time via the National Aqueduct becomes feasible. To put that in the proper scale, take
another look at the nationwide road map we saw earlier:

If each one of these lines represented arrays of water transportation pipelines with
solar, hydroelectric and hydrothermal functionality, we'd be generating a level of
energy that words just don't do justice to. And when combined with LFTRs and solar
roads, the amount of energy we could be generating defies the bounds of present-day
imagination. As adoption of Universal Energy's technologies increases from there, the
multiplier effect accelerates until we're not just reaching the energy targets needed to
synthesize unlimited resources; we're leaving them in the rear-view mirror.

Because that's the point, really. Unlimited energy must truly mean unlimited. The
same with unlimited resources: no matter how much energy or resources we
consume, the framework must always produce more at a rate faster than consumption.
That is what Universal Energy is designed to do, because that is how we destroy
resource scarcity: by actually destroying it, as if it were our eternal arch-nemesis.
Because it very much is.

With the National Aqueduct explained in full, we have now solidified the placement of
three arrows in our quiver: electricity, water and fuel, and we have them to an extent
where their indefinite abundance is at our command. Goodbye drought. Goodbye
dust bowls. Goodbye water-borne disease. Goodbye water scarcity. And from there,
goodbye famine.

Because as we can use unlimited electricity to produce unlimited water and unlimited
fuel, we can then use all three to produce unlimited food.

Clean water is a great example of something that depends on energy. And if you solve the water
problem, you solve the food problem.

- Richard Smalley

Chapter Seven: Everybody Eats

In its most basic form, life requires energy, water and food. Historically, we've
depended exclusively on nature to provide the first two resources so that we could
grow the third. This would no longer be true with Universal Energy.

That's because an unlimited supply of electricity, water and fuel provides the means to
sustainably cultivate food in environments we can command greater control over. This
gives us opportunities to revolutionize our industrial and agricultural systems to grow
food locally, food that can be healthier and of better quality than much of what we
produce today, yet grown at greater efficiency with reduced environmental impact.

In his book The Vertical Farm, Dr. Dickson Despommier talks about how advances in
technology allow us to grow crops indoors, which is becoming an ever-greater necessity
as humanity's population rapidly expands. These systems can be built close to areas
where food is consumed, reducing obstacles to transportation and delivery, especially
within urban environments. Indoor farms can also be climate-controlled and operate 24
hours a day, 365 days a year, dramatically increasing output and efficiency compared
to traditional agricultural methods.

This becomes all the more possible with Universal Energy. As its fifth resource is
building materials (which we'll check out next chapter), we would have the resources to
build these systems on a far larger scale than we could hope to today. And as the
National Aqueduct would deliver as much water as we'd want or need anywhere,
water can be delivered directly to its greatest areas of consumption.

Today, agriculture accounts for 70% of our national water usage, followed by another
22% for industry. But unlike industrial use, which tends to be centered in more urban
areas, agricultural use is spread over thousands of miles. As more people move toward
cities, an increasingly larger amount of food must be delivered from rural locations to
feed them, which in turn requires more land to grow crops. Universal Energy gives us
another option to reduce the load placed on our current agricultural systems, and
that's where indoor farming comes in.

The logic of indoor farming comes from a few angles, first and foremost being that it
solves a primary problem of population growth: the lack of available land to grow
crops. With indoor farms, as long as there is water, light and heat, the location and
outside environment don't matter. This allows food to be grown anywhere on the
planet at any time of year, with benefits that don't exist outside, including increased
yield, efficiency, length of growing season and food security. To see how, start by
taking a look at this concept image:

In this concept, water from storage tanks is mixed with an organic fertilizer made from
excess plant matter grown within the warehouse itself. This water is then pumped
through the facility and dispersed over plots of crops that grow under high-intensity
lights. These crops grow on modular platforms that can be easily moved and the crops
growing on them can be manually pollinated as necessary.

What water isn't absorbed by crops drains into a collection mechanism on each floor,
which then sends the water to the bottom of the warehouse, where it is filtered and sent
back into circulation. As we see today, the construction of large warehouses at
acceptable cost isn't uncommon; I'd venture to say few of us haven't stepped into a
Wal-Mart, Target, Home Depot or other big-box retailer at least once in our lives. These
buildings are huge, enormous even, sometimes encompassing millions of square feet.
And while the effectiveness of this approach for retail sales might be debatable, their
structure presents promising opportunities for indoor farming.

From this example, we see that a 1,000-foot by 1,000-foot building has a growing space
of about 1 million square feet (minus walkways). Yet at five floors, that building now
offers five million square feet of growing space. Extrapolating that figure to a group of
warehouses, say 20 for example, that figure becomes 100 million square feet of growing
space. That's roughly 2,300 acres (3.6 square miles).
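A quick sketch checking that growing-space arithmetic (same footprint, floor count and warehouse count as above; 100 million square feet works out to roughly 2,300 acres):

```python
# Growing space from stacked warehouse farms: a 1,000 ft x 1,000 ft
# footprint, five floors, and a cluster of 20 such buildings.

SQ_FT_PER_ACRE = 43_560
SQ_FT_PER_SQ_MILE = 5_280 ** 2

floor = 1_000 * 1_000          # 1 million sq ft per floor
building = floor * 5           # 5 million sq ft per building
cluster = building * 20        # 100 million sq ft across 20 buildings

print(f"{cluster / SQ_FT_PER_ACRE:,.0f} acres "
      f"({cluster / SQ_FT_PER_SQ_MILE:.1f} square miles)")
# -> 2,296 acres (3.6 square miles)
```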

As these warehouses would operate 24 hours a day, 365 days a year, their output if
grouped together could be enough to provide food for any city on the planet.
Combined, these systems create a food cultivation mechanism that allows food of
effectively any kind to be grown locally. And even so, it's noteworthy that this food
isn't just grown, it's grown under ideal conditions.

For example, here are some of the more remarkable benefits of indoor farming:

Total control of environment and constant operation. Since we've started farming,
we've been beholden to a growing season. Indoor farming completely bypasses this
limitation. Moreover, indoor farms can be compartmentalized into sections where we'd
have total control over temperature, humidity, strength of light and soil composition.
Technology allows us to make summer indoors during a blizzard and grow pineapples
in January. As indoor farms operate 24/7/365, they can reflect the ideal light cycle for
any plant grown. This would dramatically increase overall efficiency, as there would be
no seasonal slowdowns.

Technology-driven pest/contaminant prevention. Pests and weeds are problems in the
open environment, problems we've tried to solve with herbicides and pesticides of
varying degrees of toxicity. Yet because indoor farms can maintain total control of their
environment, the presence of weeds and pests can be managed without as much
reliance on chemicals. For example:

Positive pressure. The indoor farm can be pressurized higher than the outside
area, so that when a door opens air blows out of the building instead of outside
air blowing in. Alongside worker sterilization and air filtration mechanisms, this
would limit the presence of contaminants inside the farm.

Active anti-pest measures. In the event that a pest did get in, we could respond
more surgically or with organic approaches, such as ladybugs for mites/aphids or
the application of more benign pesticides.

Isolated sections. In what would also benefit food security as a whole, isolating
areas of the indoor farm would hinder the ability of a pest contamination to
spread from one area to another.

Waste management. Indoor farms can be designed to minimize the use of artificial
fertilizers through composting. Whenever a plant dies or sheds material, that material
can be collected into a composting mechanism (shown earlier in the concept image),
mixed with other organic fertilizers and pumped directly into the water supply used to
irrigate crops. As much of the world's soil faces varying degrees of contamination (such
as arsenic in rice and steroids in runoff water from feed lots), this translates to healthier
food.

Diversity of crops. As their components would provide an ideal growing environment
that is naturally pest-resistant, indoor farms can allow greater use of heirloom crops
that might not fare as well outside as a genetically modified variant. This affords the
cultivation of a greater diversity of crops, as well as crops with greater nutritional
properties, expanding the organic and farm-to-table markets.

Local operation. Indoor farms can be built close to metropolitan areas so food is grown
close to the people who consume it. Food produced in New York would be consumed
by New Yorkers; food produced in California, by Californians. This simplifies the
delivery of food from production to market, saving resources and allowing for fresher
produce compared to produce sourced from distant locations. It can also present major
improvements to how we provide food aid, as global anti-famine initiatives usually
involve shipping food that's already grown. With indoor farms, the system itself can
comprise the aid, allowing stressed regions to produce food by their own hand.

Food security. Local food production in isolated environments allows us to reduce
security risks to our food supply. In late 2011, a Listeria outbreak in cantaloupe killed
more than 20 people in the United States, and events like this repeat semi-frequently.
We face this issue because supermarkets across the nation are stocked with produce
that comes from different states, different countries, even different continents, and it's
hard to keep track of where everything is coming from in real time. Thus, when
something does happen, our food networks are thrown into chaos until investigators
can pinpoint the source of the contamination and isolate it. With food grown locally,
security issues (however rare) are automatically isolated, since production
environments are sealed.

This also protects our food supply from pathogens, a serious threat since roughly 90%
of major U.S. commodity crops are currently grown from genetically modified seeds
with often identical genomes, meaning that any self-replicating pathogen that could
infect one plant could infect swaths of them. By design, indoor farms exercise self-
quarantine, which is likely the best defense they could have.

Indoor farming can provide local and sustainable food production anywhere on the
planet, systematically avoiding many of the obstacles and threats that exist toward food
production today. But beyond indoor growth in warehouses, we can integrate indoor
farms directly within urban environments, not only to supplement food production but
also to serve as centerpieces for more advanced cities with next-generation
infrastructure.


Since the end of World War II, American agriculture has increasingly represented a
centralized model in which the majority of food is grown in one location (the
American Breadbasket) and shipped elsewhere for processing and distribution. This
has led to logistical challenges that indoor farms are designed to address, but providing
a framework for climate-controlled indoor farming is only the first step. The next is
urban vertical farming, the main focus of Dr. Despommier's book.

An urban vertical farm doesn't use a flat plot of land or a large warehouse, but instead
floors of city buildings that have been modified for farming. This approach has existed
in varied forms for millennia (the Hanging Gardens of Babylon perhaps the most
famous of them), and was first proposed in the United States in the early 1900s. Today,
urban vertical farms are being constructed in London, Chicago, Milan and Newark,
with future farms planned in several other cities.

In the past, the feasibility of vertical farming has been limited by constraints inherent to
resource scarcity, namely the cost of energy and materials. And like the other systems
described in this writing, these complications would no longer be present with
Universal Energy. Removing those obstacles would reduce limitations to building large-
scale urban vertical farms, such as the following concepts:

At first glance, these systems might appear fantastically futuristic, but they are well
within the realm of feasibility if energy costs and limitations are removed from the
equation. That's because, from an architectural or engineering standpoint, they present
few challenges that have not already been solved in other industries with today's
technology.

Urban vertical farm in South Korea.

Urban vertical farm in Chicago.

Additionally, urban vertical farms have benefits that cannot be provided by an indoor
farm in warehouse form, making them especially attractive for cities with higher
population densities. These include:

Smarter food production. While indoor farms may grow more crops at the size of a
warehouse, urban vertical farms can still provide a big boost to food production. As
urban vertical farms would exist directly within city centers, they zero out the distance
between production and consumption. In doing this, vertical farms redefine the notion
of farm-to-table, as they would be literally within walking distance of the millions of
people they would provide food for.

Municipal water recycling. In addition to integral water management, vertical farms
could have features that would not necessarily be available outside of an urban
environment, such as the use of grey water. Today, it is standard for water treatment
systems to filter water based on its purpose. Grey water (water used for washing
clothes or bathing) is treated differently than water that's used for cleaning dishes, and
both are treated differently than water that's used to process bodily waste.

Through currently existing water filtration systems, this allows us to use grey water
instead of fresh water to grow crops in vertical farms. This would aid in recycling
municipal water, and also reduce what stress vertical farms could place on a given
area's water resources (unlimited supply through the National Aqueduct
notwithstanding).

This said, it's noteworthy that food production isn't the only application for vertical
farming, as plants provide more value than food alone. For example:

Soil is a great insulator. The idea of placing greenhouses on rooftops has been around
for some time, and one of the most attractive benefits of doing so is their insulation
potential. As hot air rises through a building, it escapes through the roof which
increases energy costs. Acting like a blanket, rooftop greenhouses work to keep energy
within the building as much as possible. This concept has already been applied in Italy
with the Bosco Verticale towers in Milan, which were completed in 2015.

Plants are air purifiers. Plants are highly effective in cleaning the air of impurities and
contaminants, which generally are concentrated highest in cities. Air pollution is a
growing problem with urban sprawl, and in the concrete jungles we have built for
ourselves we are breathing air that is often stale and impure. Large-scale urban
agriculture through vertical farms acts as a massive air filtration system on a constant
basis. As allergies and respiratory ailments like asthma have been on the rise, this can
provide a boost to public health both physically and mentally.

China has taken this to heart and has begun construction of the world's first "forest
city" that will integrate plants and trees directly within city buildings. The mastermind
behind this planned architectural masterpiece is the same as that of the Bosco Verticale
towers in Milan, and once complete by 2020, it is expected to be covered in one million
plants and 40,000 trees.

Indoor parks and greener cities are socially beneficial. Regardless of how far we have
come in terms of technological advancement, humans are still fundamentally animals,
and connection to nature is important for our happiness, stress levels, productivity and
outlook on life. Especially in the winter, many of our cities are dour, with grey, stoic
buildings cast against the greyer backdrop of a cold sky. Amidst a stressful job, a
declining economy and a dysfunctional society, urban life can be depressing, and
depression is a social toxin.

Could you imagine how much less stressful life would be if you could just take an
elevator to the roof of your office building and hang out in a tropical park for 30
minutes on your lunch break? Or walk off a city street into a large indoor park where it
was bright, colorful and warm? Having those influences in a person's life can make a
major difference, which not only leads to a happier and healthier society, but also
improves collective hope, and thus the collective drive to seek, build and do greater
things. And as real estate for public parks faces high costs in cities, devoting floors of a
building to a nice public hangout is of a far lower financial order than devoting plots of
prime real estate, especially if the park is on the roof.


Indoor farming would take a lot of stress off the American Breadbasket, requiring
farmers to grow significantly less food than they do today. At first glance, this might
seem like a trouble spot for them, because growing less food means making less money.
So should indoor farms give them cause to worry? Hardly! That's because growing less
food gives them an opportunity to grow other crops, crops that have a higher
commercial value than the kind that make their way to supermarkets.

For example, instead of growing corn or soy, farmers could instead grow:

Algae for biofuel/plastics. The skyrocketing price of petroleum over the past 15 years
(at least until the discovery of easier shale oil extraction) has led to a surge of
investment in biofuels: hydrocarbons that come from living plants as opposed to their
fossilized remains. However, even though Universal Energy would promote hydrogen
as the nationwide fuel standard, biofuels and existing petroleum reserves can be
devoted to a more appropriate purpose: advanced materials.

We use oil (either petroleum or synthetic hydrocarbons) as a fuel today mostly because
we can; that, and our government subsidizes the fossil fuel industry like a D1
university subsidizes a football team. But all said, oil and hydrocarbons are not that
great a fuel source compared to other alternatives over the long term. Where they do
have great potential to shine, however, is in synthetic materials. Today, we use corn
ethanol to make plastics, but corn is not the most effective crop we have at our
disposal. That honor goes to algae.

Several forms of common algae have properties that allow for hydrocarbon production, with the biofuel company Algenol claiming it can produce thousands of gallons of ethanol per acre of growing space with today's technology. With Universal Energy and indoor farms, this output could rise substantially. Even better, as we could grow food indoors, we could repurpose existing farmland to grow algae, presenting additional benefits:

As algae has a higher yield density than corn, the potential commercial value of a given growing area is higher. This would provide a boon to the farming industry, which would likely be hesitant to support moving large segments of our food production indoors. Yet since the energy and materials industries tend to be more profitable than the food industry, what money might be lost by growing food indoors could be replaced by growing hydrocarbon-producing algae.

For comparison, the Department of Energy estimates that if we were to use algae to replace petroleum in all respects in the United States, we would need an area of about 15,000 square miles (roughly the size of Massachusetts and Connecticut combined). That's less than 1/7th of the space we use for corn, meaning that if we were to move much of our food production indoors, we would have plenty of space to grow algae.
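To put those figures in perspective, here is a rough sanity check of the land-area comparison. The U.S. corn footprint of roughly 90 million planted acres is an assumption for illustration (it is not stated in the text above); the 15,000 square miles is the DOE estimate just cited.

```python
# A rough sanity check of the land-area comparison above.
# Assumption (not from the text): the U.S. plants roughly 90 million
# acres of corn in a typical year.
ACRES_PER_SQ_MILE = 640

algae_sq_miles = 15_000                            # DOE estimate cited above
corn_sq_miles = 90_000_000 / ACRES_PER_SQ_MILE     # ~140,625 square miles

fraction = algae_sq_miles / corn_sq_miles          # ~0.11, under 1/7 (~0.14)
print(f"Algae would need about {fraction:.0%} of our corn acreage")
```

Under those assumptions, algae needs roughly a tenth of the land we already devote to corn, consistent with the "less than 1/7th" claim above.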

In saying this, it's worth drawing attention to the fact that this writing has largely discussed climate change through the lens of resource scarcity. It did so because I believe resource scarcity, as a function of both climate change and unsustainable human practices, is a more immediate threat due to resource conflict in the nuclear age, and also because Universal Energy addresses the causes of climate change directly. However, keep in mind that hydrocarbons generally only contribute to climate change if they are combusted; used in plastics, carbon release can be minimized. This will allow us to use both hydrogen and oil to their greatest strengths, with hydrogen for fuel, and remaining petroleum reserves and hydrocarbon-producing algae for plastics. We would have unlimited fuel, we would have unlimited hydrocarbons to make plastics, lubricants and materials, and both would be carbon-neutral.

Algae for supplemental nutrition. Beyond plastics and materials, we can also grow algae for added nutritional value in food products, with chlorella ranking atop the most promising candidates. Its name meaning "green" and "small," chlorella isn't particularly notable at first appearance, looking similar to the green muck that grows in woodland bogs. What is notable, however, is its nutritional value.

By all definitions a superfood, chlorella (when dried) boasts 45% protein, 20% lipids (fats), 20% carbohydrates, 10% vitamins/minerals and 5% fiber. This composition, combined with a high photosynthetic efficiency (how efficiently a plant grows under sunlight), gives chlorella one of the highest protein yields of any crop. For that reason, chlorella was looked to as a candidate to solve a then-global food crisis caused by the damage of the Second World War. Yet as that crisis was averted (or delayed, rather) through improvements in farming technologies and GMOs, the need to introduce algae protein into the national food supply waned.

Another contributing factor to interest shifting away from chlorella was the technical difficulty of growing it outside of laboratories. Yet those are difficulties that existed within the limitations of 1950s technology, circumstances we do not share currently, especially with Universal Energy. Today, we have the ability to grow a potent nutritional supplement that can be used to enhance nearly any segment of the food supply, and which can also prove useful for nutrition in remote or isolated locations.

Alternative use of genetically modified organisms. As touched on previously, a benefit of indoor farming is its ability to grow heirloom crops with yields that are similar to crops in outdoor environments. However, this is not to say that genetic modification of plants is a negative thing in and of itself, but rather that its benefits are perhaps best realized in other applications, something that bears special mention in this context.

Much of the anti-GMO movement has focused on opposing any manipulation of crops at the genetic level, rather than simply the genetic engineering that allows a plant to survive a substance that is to other plants what weaponized nerve agents are to humans. While this writing does not maintain a firm position on the current state of genetic engineering in our food supply, it might suggest a moment of pause as to the wisdom of disavowing an entire scientific discipline because a corporation engineered plants with a genetic immunity to a cousin of organophosphate toxins.

People have been genetically engineering plants for millennia through splicing, cross-breeding and selective breeding. We were modifying genomes then, we just weren't doing it under a microscope. That we might today isn't necessarily problematic; it's what we do, and our guiding ethical standards when we do it, that ultimately matter. So, what if we instead shifted the genetic modification of plants to a different focus? For example, here are a few ways this could be successful:

Efficiency of hydrocarbon production. Potential yields of hydrocarbon-producing algae for plastics and chemical stabilizers are already naturally high. However, we could configure the plant at a genetic level to produce greater amounts of hydrocarbons than it would normally, improving yield, efficiency and overall value.

Just as importantly, algae could be modified to produce specific chains of hydrocarbons that are geared for more advanced polymerization, such as plastics that have a significantly higher percent yield and that recycle more effectively than plastics today. We'll go through how this can be possible in more detail next chapter.

Inclusion of bacteria. Algae isn't the only organism that can produce hydrocarbons. Scientists in several countries have successfully modified the genetics of E. coli bacteria to produce diesel fuel that is nearly identical to the diesel derived from petroleum, a benefit that could presumably extend to the synthetic production of other hydrocarbons.

Maximum growth. Beyond genetic engineering for industrial applications, food crops can be genetically modified in ways that raise fewer concerns than today's modifications. This might include engineering plants to maximize growth within indoor farms, produce larger and/or more nutritious products, or operate optimally under a longer daylight-to-night ratio. Genetic modifications could also include increased seed production, or, to alleviate concerns of cross-pollination with the outside environment, an engineered inability to pollinate.


Through its provision of both food and auxiliary resources, indoor farming delivers abundant amounts of the fourth critical resource our civilization requires to exist. And with these systems in place, backed by the Universal Energy framework, we would have an effective means to eradicate famine as we know it anywhere in the world, providing great benefits for social stability and economic growth. But we're not done yet, as there is one final resource that must be provided for resource scarcity to be truly defeated: the critical resource necessary to build our civilization upward. That is the resource we actually use to do so: advanced building materials, which we'll shift gears to cover from here.

The next episode of 3D printing will involve printing entirely new kinds of materials. Eventually we will print complete products: circuits, motors and batteries all included. At that point, all bets are off.

- Hod Lipson

Chapter Eight: Materials and Recycling

Energy. Fuel. Water. Food. Universal Energy and the systems it powers can sustainably deliver all of these resources to effectively unlimited extents. Yet even with them provided, we still need materials with which to construct our ever-advancing economy and society. Materials are themselves a resource, and as such cost money while remaining subject to the same laws of scarcity as energy resources. Forests get cut down, quarries run dry, and both metals and composites are subject to market forces driven by the scarcity of other critical resources.

To address this problem, Universal Energy's tertiary function is the powering of systems that synthesize and recycle materials. With these materials, we can build things better, stronger and less expensively than we can today. This begins with two concepts: advanced synthetics and superior recycling systems.


Previous chapters of this writing have alluded to technological breakthroughs that have allowed us to manufacture sophisticated systems on scales and at prices that were previously impossible. These breakthroughs have also included the invention of high-performance plastics and synthetic materials. When we think of plastics, we often imagine substances we'd see around our homes or workplaces: grocery bags, containers, sidings of appliances, etc. These materials, commonly consisting of polypropylene, polyethylene and/or polyvinyl chloride (PVC), are used in manufacturing because they are easy to produce at low cost.

Yet plastics are not without their problems. For one, they don't biodegrade well, nor can they be easily recycled. So once we're done with them they're stuffed in landfills or end up in our oceans, both of which cause varying (often terrible) environmental damage. Additionally, the plastics that can be recycled at all have a low percent yield: the amount of final material produced from the original supply material. That means it might take 100 lbs. of source material to make 1 lb. of plastic, which is far too inefficient to be viable on a large scale, especially if the 99 lbs. of material lost as waste is environmentally toxic.
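As a quick illustration of what "percent yield" means here, using the hypothetical 100-to-1 figures from the paragraph above (a sketch for illustration, not measured data):

```python
def percent_yield(final_lbs: float, source_lbs: float) -> float:
    """Percentage of the source material that ends up in the final product."""
    return final_lbs / source_lbs * 100

# The hypothetical case above: 100 lbs of feedstock yields 1 lb of plastic,
# leaving 99 lbs behind as (potentially toxic) waste.
yield_pct = percent_yield(final_lbs=1, source_lbs=100)   # 1.0 (%)
waste_lbs = 100 - 1                                      # 99 lbs of waste
print(f"Percent yield: {yield_pct:.0f}%, waste: {waste_lbs} lbs")
```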

We've made headway in solving this problem, but making plastics that have both a high percent yield and are easily recyclable is highly challenging. Yet recent advances have made this less so, and before we get into the materials and recycling methods made possible by Universal Energy, we'll take a minute to detail these here.

The first advancement is one that's made Universal Energy possible to begin with: sophisticated computing. Since computers became prevalent in personal, industrial and research settings, their performance has increased at a truly exponential rate. Today, computers are capable of processing data extremely quickly, including running models that help chemical engineers create ever-better synthetic materials.

This becomes all the more possible through quantum computing, a potentially revolutionary computing method that uses quantum physics to dramatically increase processing speed. While still in its infancy, scientists estimate that the technology could soon far surpass traditional computing. This is because classical computers process calculations sequentially, one step at a time, whereas quantum computers could, in theory, process millions of calculations simultaneously. Considering that modern computers can already perform quadrillions of calculations per second, speeding this up by a factor of millions would allow for nigh-instantaneous processing of presently daunting calculations.

What that allows us to do is model material synthesis with more insight, and predict
how to create recyclable materials with a higher percent yield at increased efficiency
and at reduced cost.

Combined with another advancement we discussed last chapter, the genetic modification of algae and bacteria to produce specialized hydrocarbons, this gives us an increased capability to tailor material synthesis to fit our needs. We would have both finer control over the chemical composition of source materials and detailed models of how to manipulate them into forming bonds that produce plastics and other synthetics that meet demanding performance requirements.

Add in Universal Energy's ability to increase energy supply while lowering energy costs, and we have three self-perpetuating circumstances that can combine to revolutionize the materials we use to build and improve our world.

And although that future is just on the horizon, we're already making headway toward realizing it, as today's latest synthetics are impressive in their own right even with current technology and material limitations. Consider a list of some standouts in this area, and why they are remarkable:

Nanocomposite plastics. Researchers have discovered that by layering ceramic nanosheets (extremely thin sheets made from clay) over each other and combining them with a polymer that works similarly to white glue (the elementary school kind), the sheets will interlace with each other like bricks at the molecular level and bind together like Velcro to create a structure as strong as hardened steel. This gives nanocomposite plastics a wide array of potential applications, of which aerospace, transportation, defense and civil engineering are all examples.

High-strength polyurethane (Line-X). Polyurethane is a type of polymer with an extremely high impact tolerance, which makes it durable and useful for shock absorption. A commonly sold product of this variety is Line-X, a nigh-indestructible spray-on coating often used to line the beds of pickup trucks. There are plenty of nifty videos on YouTube showing the absurd degrees to which Line-X can protect materials (like coating eggs and dropping them off 4-story buildings) that are well worth checking out, and keep in mind that this is a spray-on material that's commercially sold today. Imagine what materials we could make in a world with unlimited energy and a larger, more advanced supply of customized chemicals for synthetic materials. Indeed, NASA sure has been.

FR-4. The common name for a high-strength, flame-resistant composite made from
glass-reinforced epoxy, FR-4 is one of the strongest synthetic materials available today.
Not only is it highly resistant to chemicals, ultraviolet radiation and electricity, it is both
lightweight and extremely strong. For comparison, the tensile strength of structural
steel and aerospace-grade aluminum are, respectively, ~40,000 PSI and ~43,500 PSI. The
tensile strength of FR-4 is ~45,000 PSI. This strength allows components made with FR-4
to retain fine detail (such as threads) and be built to tight tolerances, making them
qualified to operate in demanding applications such as aerospace.

Synthetic wood. Advances in polymer science have brought multiple types of artificial wood to market. Made from plastics, recycled organic wood and other composites, synthetic wood is used for decks, framing, siding and supports in millions of structures worldwide, and is already a $3.4 billion industry and growing. Fire-resistant and lasting far longer against the elements than traditional wood, its strength also stands apart, as the image to the left of Ecotrax synthetic railroad ties shows.

As the material can support the weight of a locomotive, it can be used to replace traditional wood in the construction of houses, buildings, bridges: anything, really. Cost remains a limiting factor presently, but these are costs that, like those of many other materials, would fall dramatically in a world powered by unlimited cheap energy and abundant resources, allowing for an indefinite supply of yet another building material.

Graphene. As discussed in Chapter Four, graphene is a material that has great potential to improve our way of life. As concerns Universal Energy, graphene's first uses were hydrogen storage tanks and next-generation batteries. However, these applications are far from the limit of what graphene is capable of. Examples include:

Consumer electronics. As an ultra-strong and ultra-conductive material, graphene can be used to create sophisticated electronics that are highly durable. The following images show prototype mobile phones with graphene screens that are both flexible and thousands of times stronger than today's mobile phone screens.

Beyond mobile phones, tablets and computer screens, electronics made with graphene
can also store large amounts of data in small physical spaces. The following images
show a concept for a new type of jump drive that works like sticky post-it notes.

Graphene's conductivity also gives it high capacities for data transfer, processing some 7,000% faster than silicon. Researchers have conducted experiments that show graphene antennas can transmit data at up to 100 terabytes a second. A high-definition feature-length film generally ranges from 3-9 gigabytes in size, and there are 1,000 gigabytes in a terabyte. Assuming an average size of 6 gigabytes, this translates into a transfer capacity of over 16,000 high-definition films a second.
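The conversion from link rate to films per second works out as follows, assuming decimal units (1 terabyte = 1,000 gigabytes) and the 6-gigabyte average film size used above:

```python
# Throughput-to-films conversion for the graphene antenna figure above.
GB_PER_TB = 1_000

link_rate_tb_per_sec = 100      # experimental figure cited in the text
avg_film_gb = 6                 # assumed average size of an HD film

films_per_sec = link_rate_tb_per_sec * GB_PER_TB / avg_film_gb  # ~16,667
print(f"~{films_per_sec:,.0f} HD films per second")
```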

Structural material. At 200 times the strength of hardened steel, graphene is the strongest-known material as well as one of the lightest. Engineering Professor James Hone of Columbia University said in 2008:

Our research establishes graphene as the strongest material ever measured, some 200 times stronger than structural steel. It would take an elephant, balanced on a pencil, to break through a sheet of graphene the thickness of Saran Wrap.

This gives graphene nearly unlimited potential for use in structural engineering. Imagine an automobile, train or aircraft with a structure 200 times stronger than what we use today, at a fraction of the weight. Or consider buildings: one of the largest limitations to erecting taller skyscrapers is the strength-to-weight ratio of the structure; the taller it goes, the weaker it gets. Graphene significantly mitigates this problem, especially since its melting point, at more than 4,500 C, is far greater than steel's. Ballistic vests that stop high-powered rifle ammunition require heavy ceramic plates, yet a lighter and thinner graphene vest could do the same. And if you've ever dreamed of flying cars, advanced spacecraft or a space elevator, the largest material obstacle to making them possible can be overcome through graphene.

Filtration mechanisms. As a one-atom-thick lattice, graphene is highly non-porous, meaning it can prevent the permeation of substances at the molecular level. This would have notable applications in National Aqueduct filtration, in addition to filtration systems elsewhere in industry.

Medicine. High strength and conductivity with low weight and reactivity gives
graphene excellent potential for medical applications. Examples include: stents to
prevent arterial restriction, high strength and lightweight casts for broken bones, and
providing the framework to help paralyzed people walk again.

These are just a few of the synthetic materials we can make more easily through Universal Energy, and future research will undoubtedly discover more. But not only are we able to make new materials with these advances, we're also able to recycle them to greater ends.


The modern era has seen the invention of extraordinary systems that recycle old materials into new ones. Yet one of the biggest limitations on recycling currently is the cost of energy. Universal Energy opens up new avenues to recycle materials, including materials that have been difficult to recycle in the past. An unlimited source of electricity, hydrogen fuel and synthetic hydrocarbons can provide heat, mechanization and chemical synthesis: all components necessary to take waste and do something more socially beneficial with it. More specifically:

Easier plastic recycling. As touched on previously, recycling plastics is difficult today. We can do it to certain degrees, and can "downcycle" plastics into hybrid materials with other uses (for example, taking plastic bottles and turning them into park benches), but in general, recycling plastics is both expensive and challenging. Yet superior engineering of synthetic materials through more sophisticated computer modeling gives us methods to recycle plastics more effectively than we can today.
Advanced incineration. Currently, most of our non-recyclable waste is deposited into landfills, where it is forgotten with the hope that it will biodegrade over a span of hundreds (but more likely thousands) of years. But burying garbage doesn't really solve the problem of waste, it just stores it, leading to problems once the available space to bury trash runs out. This is an eventuality that can be avoided through electric waste incineration.

We're no strangers to burning trash, as we've been doing it for millennia (and Sweden has figured out how to turn it into energy), but it's important to recognize the distinction between burning trash and completely incinerating it. The heat generated by a standard fire can reach 1,500 F / 815 C, but that's not enough to destroy the molecular bonds of toxic substances and render them back to their elemental compositions. That requires heat exceeding 5,000 F / 2,760 C. Yet an electric arc can reach temperatures exceeding 35,000 F / 19,426 C, more than enough to break any substance into its most basic components.
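The Fahrenheit-to-Celsius figures above follow from the standard conversion formula; here is a quick check of the three benchmarks:

```python
def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

# The three benchmarks from the text: a standard fire, the heat needed to
# break the molecular bonds of toxic substances, and an electric arc.
for deg_f in (1_500, 5_000, 35_000):
    print(f"{deg_f:>6,} F is about {f_to_c(deg_f):,.0f} C")
```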

With an unlimited supply of cheap electricity, the primary expenses of electric incineration would be in engineering and construction, not energy costs. And since modern ceramics and heat shielding are already capable of supporting electric incineration today, Universal Energy presents the possibility of reducing our trash to ashes at scale and at acceptable cost, bypassing the need for landfills.

Superior metallurgy. Metallurgy has been an evolving science for thousands of years, allowing civilization to build ever-greater materials from combinations of metallic elements. To create these substances, we've generally needed three components: base materials, a forge to contain heat and a source of energy to provide that heat.

Building forges that operate at specific heats and pressures has not been prohibitively difficult for some time, so delivering the second component is relatively straightforward. However, the other two components, source materials and heat energy, have been more elusive, or at least more expensive: the former due to obstacles inherent to resource acquisition and the latter due to the cost of energy. Yet through Universal Energy, once again, we solve the symptoms of this problem by addressing the root cause.
Extraction of metals from reclaimed products. We build most products on an industrial scale today, but once they become obsolete, the metal used in them has often not been recycled and repurposed outside of scrap yards. The reasons for this are familiar in the context of this writing: engineering and financial limitations to transporting materials to recycling locations, building the systems with which to recycle them and paying for the energy necessary to power those systems. Also familiar is the bypassing of these problems by providing unlimited energy at low cost. We can take a given substance or machine, strip out non-recyclable materials and send them to incinerators, then melt down whatever substances remain and separate them for use in new products.
This isn't a novel concept, as the systems that could accomplish this goal already exist today. But what Universal Energy does is increase the scale at which they can do so while concordantly lowering costs. As a result, we can supplement next-generation manufacturing with recycled metals more effectively than we can today, reducing the need to acquire metals from mines and mineral deposits.

Superior alloys. As mentioned previously, making alloys is a matter of heat and material to work with, and unlimited electricity and fuel provide unlimited heat energy. This, combined with more advanced computer modeling, can improve our ability to build forges that operate at higher heat ranges at precise temperatures and pressures. As a result, it becomes easier to create more sophisticated alloys at more attractive costs. This translates to products, machines and systems being built stronger, lighter and more reliable than what we can produce today.

Intelligent deconstruction. Many products today are simply too complex to be recycled cost-effectively. Consequently, once discarded, these products are often thrown in landfills. Yet reduced energy costs, modern computer systems and novel engineering methods reduce the expenses surrounding the automated recycling of more advanced systems, of which consumer electronics, vehicles, ships, buildings and airplanes are all examples. And as we can improve how those things are recycled, this gives us the opportunity to advance our manufacturing methods to build products that, at their core, can be disassembled, recycled and/or upgraded in a modular capacity. This provides a few distinct benefits:

Reduced costs and material requirements. Beyond removing energy as a significant expense, the ability to repurpose older models into newer ones spares the requirement to build new products from scratch. For example, instead of a vehicle going to a scrap yard once it has fulfilled its usefulness, we could instead engineer that vehicle to be stripped down to its basic components by a factory and refurbished into a newer vehicle, a concept that could be applied to essentially any product.

This benefits both consumers and manufacturers. It gives consumers the ability to
exchange obsolete models for a credit toward a new one and reduces the expenses
and sweat equity manufacturers pay to acquire materials and create products. This
contributes to cost savings, and also increases the scale of our manufacturing ability
by allowing us to do more with what we have currently.

Reduced waste footprint. Waste incineration, unlimited energy, indefinite supplies of synthetic materials and deconstruction-focused engineering all reduce the waste footprint of our manufacturing processes and how we discard products. With Universal Energy, we wouldn't be building products that are destined for landfills, nor would we be building products at the expense of the environment. By connecting our energy production, material procurement, manufacturing and recycling processes, the lifetime of a product (and its eventual repurposing) is contained from start to finish, reducing nature's presence in the equation.


1. With Universal Energy, we're able to produce next-generation synthetics that can outperform even the most advanced materials we have today, synthetics that we can use to build and improve practically anything.

2. With Universal Energy, we can dispose of waste and recycle products far more easily and at greater scales than we can presently.

3. With goals #1 and #2 met, we can now engineer systems and products to be disassembled, repurposed and upgraded by design, turning products into long-term investments instead of one-use discardables.

Combined with 3D printing, this can transform the way our society manufactures things. Not only do we have the energy and resources to build ever-more-sophisticated systems, we can remove the cost and engineering barriers to doing so. We can build things to a level of precision that once cost millions of dollars to accomplish, and we can do it at the push of a button.

Traditionally, 3D printing has allowed companies to rapidly prototype new designs, or hobbyists to make improved or replacement parts for various projects. Yet as 3D printing technology advances and equipment costs drop over time, its adoption and sophistication proportionally increase. Combined with the inclusion of next-generation synthetics, so too does its potential to build greater and more powerful systems on larger scales, which is the last piece that allows us to land the final blow to defeat resource scarcity outright.

When I think about creating abundance, it's not about creating a life of luxury for everybody on
this planet; it's about creating a life of possibility. It is about taking that which was scarce and
making it abundant.

- Peter Diamandis

Chapter Nine: The End of Resource Scarcity

We've previously discussed how Universal Energy is based on a mindset of modularity and standardization. If you recall, modularity is the idea that a system is designed from the ground up to be flexible in terms of how it is deployed. A good example of this is Legos, perhaps the most iconic of children's toys. Each Lego can fasten to another Lego using the same standardized method, and everything from simple structures to architectural masterpieces is built from pieces that connect together in the exact same way.

Standardized computer ports (USB), AC power cables, audio/visual ports (HDMI) and so on are extensions of this idea. That's because standardizing a function in a way that's modular reduces complications in building things and lowers the bar (and research and development costs) for manufacturing.

But we've only taken this idea so far. It's common for a product today to use a standardized power plug or accept the same type of battery, but beyond that, things become more disparate. We saw earlier how most power plants are built as unique entities; they might standardize a doorway, air duct or stairwell, but the system as a whole is essentially made to order. The same is true of most larger-scale things in our society. Every bridge built, house constructed, building erected or road paved was done as a custom entity: one of a kind, every time.

This is because we are presently living in a world with technical limitations that would make it difficult to build something like a bridge, house or building on an assembly line. Removing this limitation is the last function Universal Energy is intended to perform.
With an unlimited supply of all critical resources, especially energy, fuel and materials, we have the building blocks to build as much as we want, however we want. Combined with sophisticated computing and modeling, 3D printing with tight manufacturing tolerances and specialized materials, we can automate the construction of sophisticated systems on a far larger scale. The application of this idea is known as prefabrication, which in short means building something in a factory and assembling it at a final location instead of constructing it from scratch with basic building materials. And it's something we've already made great headway on today.

For example: this is a prefabricated house:

This house was not constructed at this location; it was assembled here. There were no workmen cutting wood for framing or ceiling joists, or building walls. This house was built on a factory assembly line in the same way a vehicle or television is built, itself a clone of many others that came before or after it. It was delivered in pieces to a construction site by truck, and was assembled in a matter of days.

This house came with a fully finished interior, with all electrical, plumbing and heating elements pre-installed beforehand. Should the homeowners decide one day they want to expand the size of their home, it would be a matter of bringing in a new piece, removing modular components from the original house, and fastening the new piece on. If they wanted to move, they could disassemble their house, put it on trucks, and assemble it again somewhere else. Should they decide to raze the lot and build a different structure, the house could be disassembled and sent back to the factory for recycling.
Essentially, we can now build houses with life-sized Legos.

Prefabricated homes have been growing in popularity, especially since they offer high
efficiency and durability. However, the price of these homes is still comparatively steep.
Today, the cost to deliver a fully finished prefabricated home ranges between $140-$200+/sqft, considerably higher than the $125/sqft national average for a traditionally constructed home.
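Concretely, those per-square-foot rates translate as follows for a typically sized home (a quick sketch; the 2,000 sqft home size is an illustrative assumption, not a figure from this section):

```python
# Price comparison for an illustrative 2,000 sqft home at the
# per-square-foot rates quoted above.
sqft = 2_000                        # assumed home size for illustration
prefab_low, prefab_high = 140, 200  # $/sqft, prefabricated (delivered)
traditional = 125                   # $/sqft, traditional-build national average

print(f"Prefab:      ${sqft * prefab_low:,} - ${sqft * prefab_high:,}+")
print(f"Traditional: ${sqft * traditional:,}")
```

Even at the low end, the prefabricated option currently carries a premium of tens of thousands of dollars, which is the gap the cost reductions described below would need to close.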

But this price range includes expenses inherent to any fledgling industry: initial
research and development, prototyping, marketing, etc., and those costs have to be
recouped through fewer sales in a small (yet growing) market. Additionally, the energy and materials needed to construct and transport prefabricated homes are currently a major expense. Yet with these obstacles mitigated through Universal Energy and the
previously mentioned advances in synthetic materials, we could soon find ourselves in
a position to drop the cost of these houses significantly.

Houses are only one example of the potential benefits of prefabrication, as practically
anything can be built this way: LFTRs, solar roads, multi-stage flash distillation
facilities, hydrogen production infrastructure, Energy Plants, National Aqueduct and
urban vertical farm components, even larger buildings and vehicles for public transit.
To elaborate further, take a look at the following two images. The image on the bottom-
left shows the Boeing Corporation's Everett production plant that can fully assemble ten
787 Dreamliner aircraft per month, which translates to one aircraft every three business
days. The image on the bottom-right shows a 30-story prefabricated building built by
the Broad Sustainable Buildings corporation in Changsha, China, that was assembled
on-site in 15 days. That's two stories finished per day.

The start-to-finish timeline of these four pictures is a period of 15 days.

These exemplify what we can do with today's technology. Add in the energy cost reductions and the improvements in manufacturing and materials that Universal Energy provides, and there are few limits to the possibilities in front of us. We can sustainably
prefabricate essentially whatever we want on grand scales, and we can build it better
and less expensively than we can today. In doing so, we can advance our economy,
society and infrastructure, but we can also ensure shelter as a resource, which brings us
back to housing. That's because at this scale of manufacturing prowess, building small
homes on assembly lines becomes trivial.

For example, these homes are made from shipping containers: the same kind used to transport goods on trucks and cargo ships. Shipping containers are so plentiful, and made so inexpensively, that in some cases it's actually cheaper to send goods from China to the United States in new containers than it is to ship the empty ones back. This leaves thousands of containers sitting near shipping yards nationwide, prompting innovative architects to use them as housing structures:

Shipping container homes can also feature multiple containers. The home on the
bottom-right was fully constructed for less than $40,000.

As containers are made from steel, they are extremely resilient and boast high load strength. Shipping containers are also naturally easy to transport. This, combined with the high availability of empty containers, allows a single-container home to be delivered for between $20,000-$40,000 at today's energy, shipping and manufacturing costs. If energy and material cost reductions are applied by way of Universal Energy, this figure would drop substantially.

As a result, we can have a cost-effective method of manufacturing and transporting housing structures practically anywhere. But what does this mean, really? Most importantly, it means that for a modest investment we can provide quality living spaces for anyone who needs a home, such as:

Victims of natural disasters. As events like Superstorm Sandy, Hurricane Katrina, tornadoes, flooding and wildfires have occurred, thousands to millions of
people have been displaced from their homes. This traditionally leads to social
problems, including depression, social unrest, higher crime, reduced economic
activity, etc., all of which tend to perpetuate each other.

While temporary FEMA trailers have provided relief to some extent when disasters have struck, these shelters are leased for free only temporarily, and at $70,000 each they cost twice as much to produce as a shipping container living space of similar size currently does. With container homes, we can provide homes that come prefabricated with heat, hot water and a comfortable, warm and private space for people who have lost everything: a model that could no doubt be applied globally.

Indeed, during the aftermath of the 2010 Haiti Earthquake, roughly 105,000 homes were destroyed, with another 208,000 badly damaged. International governments devoted millions of dollars to help rebuild, with some $93 million going to build some 2,600 homes: roughly $36,000 a house. Approximately $13 billion was donated to Haiti in the aftermath of the earthquake in the form of international aid, yet much of Haiti today looks little different from what it did after the earthquake. Had we been able to purchase shipping container homes at a price of $30,000 each, it would have cost roughly $9.4 billion, meaning that we'd have provided living spaces to replace every damaged and destroyed home with some $3.6 billion to spare.

Low-income/fiscally reserved individuals. The average price for a single-family home in the United States is nearly $300,000: a considerable obstacle for even the median wage earner in this country, and half of us earn less than that.
Consequently, many families rent their living space, and as rent prices have largely
increased in inflation-adjusted dollars over the past 30 years (whereas the median
income has not), millions of American families are increasingly placed in precarious
financial situations.

Homes made from multiple shipping containers can cost significantly less than homes built via traditional construction methods, with rates of $75 per square foot or less not unheard of. This allows people to take advantage of the equity value of home ownership at far lower price points than is possible today. Perhaps a family can't afford to buy a house and is forced to rent at the expense of its ability to save money or invest in something it owns.

Conversely, perhaps a family wishes to purchase a modest home on a larger plot of
land with more cash on hand as opposed to a more expensive house with a heavier
mortgage. Shipping container and prefabricated homes (once their price drops
sufficiently) make either possible.

Homeless individuals. There are an estimated 565,000 homeless people in the United States currently, and every year the Federal Government spends approximately $4.5 billion on efforts to reduce that number. Assuming a price of $30,000 for an 8' x 40' fully furnished container home, this means that we could provide a comfortable and private living space for every homeless person in this country for roughly $17 billion. That's less than what the Federal Government spends on preventing homelessness every four years (and roughly 3% of the defense budget for one fiscal year).
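As a rough check of that figure (a sketch under the section's stated assumptions; actual program costs would of course vary):

```python
# Cost to provide a container home for every homeless person in the U.S.,
# using the population and per-home price assumed above.
homeless_population = 565_000
price_per_home = 30_000                    # assumed furnished 8' x 40' unit
federal_spending_per_year = 4_500_000_000  # annual homelessness spending

total_cost = homeless_population * price_per_home          # ~$16.95 billion
years_of_spending = total_cost / federal_spending_per_year

print(f"Total cost: ${total_cost / 1e9:.2f} billion")
print(f"Equivalent federal spending: {years_of_spending:.1f} years")
```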

It's worth mentioning that providing a private living space to someone so they aren't on the streets isn't necessarily going to solve the problem of why they became homeless in the first place, as addiction and mental illness are often underlying causes. Yet that it is now within our power to afford anyone a place to live and rebuild their lives is key to solving a major social problem. Corresponding social programs would of course have to be established. But we can now ensure that the lowest level a person in the United States (and perhaps abroad) can fall to is an 8' x 40' private living space with heat, hot water and three hot meals a day: a historical first.

Being able to cost-effectively provide comfortable living structures to anyone, especially the most impoverished members of society, is an accomplishment of major significance. It represents a massive leap in our social advancement, and more critically, it's the final nail in the coffin of resource scarcity. With all of these aforementioned systems
combined, we would have the means to synthetically produce everything we need to
exist: electricity, fuel, water, food, advanced building materials and now shelter, and we
would have the means to produce them far less expensively than we can today.

Indefinite and sustainable production of the crucial resources our civilization requires
to operate would be revolutionary to our way of life, and it would completely change
how our society interacts with itself and others across the globe. Most importantly:

This would allow us to reset our relationship with nature. Since we evolved from hunter-gatherer tribes and started building societies, the environment around us has paid the price. We have razed forests, destroyed ecosystems and even changed our planet's climate. The rise of human civilization, in and of itself, has been an extinction-level event. Universal Energy allows us to chart a different direction because it can provide
every resource that we need to exist and advance, dramatically reducing our reliance on
nature and the damage we are doing to it.

It's true that we might increase the extraction of limited materials as we advance. But with the implementation of superior recycling and manufacturing systems, this effect can be minimized and would ultimately pale in comparison to the other environmental benefits we would see with Universal Energy.
We would no longer need to cut down forests for building materials, extract finite
sources of oil and gas for energy or devote swaths of land for farming. We would no
longer need to deplete natural water sources for drinking, industry or agriculture. We
would no longer need to pollute our atmosphere with coal plants or destroy waterways
with toxic chemicals or hydroelectric power stations.

We would no longer need to do those things, because what those means deliver would already be provided as a given. Over time, this would allow nature to return to its natural state, and heal toward the condition it held before our hands scarred it.

And this would remove the reason why we fight. For thousands upon thousands of years, we have butchered each other. We have put swords and arrows into bellies,
fired bullets, dropped bombs, raped, burned, tortured and exterminated our brothers
and our sisters in every horrific manner that we could think of. As we did so, we have
told ourselves lies, and allowed ourselves to believe that we were justified in killing and
dying by the millions for causes that boiled down to nothing more than resource
scarcity and the pursuit of the money, power and economic might it bestows on the
winner of its zero-sum games.

We have believed these lies and lived with these horrors because we have had no other
choice, and whether near or far from the dirt, the begging, the screams and the blood,
we have been powerless to prevent any of it from happening because we had no means
to truly change the way the world worked. Now we do.

Today, the zero-sum game no longer has to exist. Like the quill, the steam engine, the VCR and the floppy disk, technology can simply evolve beyond it. No matter how much energy, water, food or materials a given society consumes, there will always be more. No one can take too big a piece of the pie, because by design the pie will always replenish at a rate faster than that of consumption. We may well find reasons to continue our lust for warfare in the future, but resources, and the economic damage caused by their scarcity, will never again be its harbinger.

From there, the world as we have known it changes. The constraints of our current
model are lifted, allowing us to advance our existence on our own terms and write our
own rules as we go along. As we do so, one by one, so many of the problems of our time (and the time before ours) will be removed, as resource scarcity and all the ugliness brought by its existence is extinguished. And through
the systems we built to save ourselves, we would be unburdened from the weight of
those problems and hindrances, empowering us as a civilization to move forward
without them and build a better world upon their ruin.


Universal Energy is first and foremost a framework, and its ultimate purpose is to make a new model for our society to operate within. This model is not based on economics; it's based on technology. It doesn't use money to pay for social programs that mitigate social problems. It uses money to build systems that make those problems irrelevant.

10,000 years ago, making fire was a problem. Today, you light a match. 300 years ago,
transportation over distance was a problem. Today, you hop in a car, bus or plane. 100
years ago, disease was a problem. Today, modern medicine can cure all but the most
severe ailments. And today, energy and resources are a problem.

Through technology, they don't have to be a problem tomorrow.

For millennia, resource scarcity has been a central, dominating fixture in how we
interact with each other and operate as a civilization, chaining us and our economy to
its restrictions. And with its chains removed, the entirety of our social strength and
economic might can be devoted to improving our society and all that exists within it, for
through Universal Energy and the models that it underwrites, we now have the power
to transform our civilization into something completely new.

By dramatically lowering the costs of energy, resources and materials while improving the quality of life for everyone, the cost of doing effectively everything falls, as does the amount of resources that must be devoted to addressing social afflictions. This frees up state funds for social advancement; the same goes for industry, which would have significantly increased capabilities to build ever-greater accomplishments.

In a scarcity-free world, we would have nigh unlimited potential to discover, create,
construct and achieve. That world, and the economy it would power, is a future that we
can begin building today. And that, above all else, is a future worth having.

A future worth having. That is what we strove for once, and it's something we can strive for again. And with Universal Energy behind us, I believe that we can. But I also
believe we have lost something that we once cherished: the drive to build great and
amazing things. Minus the weapons systems that we devote trillions of dollars toward,
that collective drive has been forsaken.

The past six decades saw us build the interstate highway system, put a man on the moon and invent GPS and the internet. We didn't care about difficulty or political opposition; we achieved those goals because we could, and because they proved to ourselves that we were worthy of our pedigree as a people.

Today, amidst the backdrop of our crumbling roads, collapsing bridges and aging skyscrapers, we are living within a decaying testament to the greatness we once sought and collectively built. Thus we sit here despondent, reduced to bickering amongst ourselves about how we're going to pay for anything of actual social value. That is not who we are, and that is not where we came from. We deserve a better fate than Ozymandias, and this is how we may see it realized.

We can see it realized because Universal Energy has a final function, one that only becomes possible once we reach this stage. Its first goal is to solve resource scarcity, this is true. But Universal Energy's true purpose is to take us well beyond that. Its goal, ultimately, is to serve as a vehicle for our next stage of evolution as a civilization, because the platform these technologies provide allows us to devote our full strength to advancing our society even further and building a world we had thought possible only in dreams.

Whatever good things we build end up building us.

- Jim Rohn

Chapter Ten: Advanced Infrastructure

So far, this writing has concentrated primarily on the solutions to resource scarcity and the benefits they provide our way of life. We have seen that Universal Energy gives us an abundance of critical resources alongside unlimited energy and fuel, which combined present the ability to prefabricate systems both far less expensively and at far greater scales than we can today.

These are incredible goals to reach because they solve vexing problems that have dogged us for millennia. But beyond solving problems, reaching these goals also opens doors for us to ascend to a higher scale of living.

Ascend is a word used here with specific intent. Recall that humanity has been around for only about 200,000 years. Yet we ascended to actual civilization only in the past 5,000 years, and we ascended to modern civilization only in the past 100-150. From the year 200,000 B.C. until the mid-1800s, the fastest a human could travel was on horseback. Yet by the start of the 20th century we had the train, the automobile and the aircraft, and we landed on the moon less than 70 years later. The light bulb, internet, cellphone, computer, skyscraper, satellite and spacecraft were all invented in roughly the past 1/2,000th of our history.

We didn't achieve those things through luck; we achieved them through ascension, namely technological ascension. Implementing Universal Energy is the next level of that
ascension, yet once accomplished, we would be spared the hindrances that have been
holding us back for millennia. We would have unlimited energy and resources. We
would have the means to indefinitely synthesize materials as products of that energy
and those resources. Just as importantly, we would be spared the social consequences of
scarcity that have consumed our focus from day one.

With those benefits combined, we can ascend once again, and build systems and achieve goals that advance our civilization to the next order of magnitude.

To explain what I mean by that, I think it would be helpful to introduce a concept that philosophers, scientists and futurists refer to as scales of civilization, also known as the Kardashev scale. These scales act as a quantifiable metric of how advanced a civilization has become, or can become in the future. The original model (created by
Soviet astronomer Nikolai Kardashev in 1964) had three types:

Type I: a civilization that sources its energy and resources from its planet.
Type II: a civilization that sources its energy and resources from its star.
Type III: a civilization that sources its energy and resources from its galaxy.

Using the Kardashev Scale as a base, this writing proposes an expanded model with
greater nuance to illustrate our historical progress and identify what heights we could
possibly reach in the future. This model is referred to as the Ten Tiers of Civilization.


Tier 1: Fire and Stone: the most basic form of civilization: control of fire and the ability
to craft stone tools, subsisting exclusively on a hunter-gatherer diet. This tier represents
approximately 95% of human history.

Tier 2: Agricultural: the ability to grow crops and raise livestock, accelerating population growth. Social hierarchies and customs form and basic metallurgy is discovered. The possibility of organized conflict becomes a fixture of life. Humanity reached this tier during the Neolithic Revolution, around 10,000 B.C.E.

Tier 3: Pre-industrial: command of stone and wood with a basic understanding of math, science and astronomy. Language and law are established, as are cities, borders, nationalism and diplomatic relations between states. Potential for conflict is high. This tier was reached in Mesopotamia, roughly 3,000 B.C.E.

Tier 4: Industrial: complex machines are invented, including mechanized assembly and
transportation systems. Conflict carries consequences of increased severity. We reached
this tier during the Industrial Revolution, approximately 1760.

Tier 5: Atomic: civilization discovers nuclear energy and has the ability to build large-
scale agricultural systems and social infrastructure (highways, airports, etc.). Population
grows exponentially. Potential for resource conflict increases, as does the potential for
mass destruction (and genocide) as a result. We reached this tier on 16 July, 1945 when
the first atomic bomb was detonated.

Tier 6: Orbital: civilization can defy gravity and even achieve orbit. Electronics and globalized communications emerge. Transportation over terrestrial distances becomes trivial.
Population continues to grow exponentially. Potential for resource conflict is extreme,
which for the first time can potentially be an extinction-level event due to nuclear
arsenals and global delivery mechanisms. We reached this tier on 4 October, 1957 at the
launch of the first satellite. This is the tier we are in now.

Tier 7: Ascendant: civilization has developed technology capable of synthesizing unlimited energy, resources and materials, thus ending resource scarcity and the potential for resource conflict. Maslow's needs are met, neutralizing most social
problems and stabilizing population growth, creating a harmonious existence that is
environmentally sustainable. In turn, civilization is able to devote the entirety of its
resources to social advancement with more sophisticated infrastructure. This is the tier
Universal Energy brings us to.

Tier 8: Transcendent: civilization has crossed the biological threshold and is able to
store and transport consciousness outside of a physical body of flesh and blood (via a sophisticated brain-to-computer interface). Artificial sentience exists and both biomass
and bionic structures can be synthesized effectively, leading to the possibility of
synergy between organic and synthetic life.

Tier 9: Interstellar: civilization has reached the mastery of planetary existence, and
becomes capable of inhabiting other planets. Intersolar and interstellar transportation is
invented, as is greater command of nanoengineering (Barrow's Type V-minus).

Tier 10: Intergalactic: civilization is capable of deep space travel and can artificially
create habitable worlds. A hypothetical Tier 10 civilization would furthermore
command a comprehensive knowledge of universal physics (Barrow's Type VI-minus).
This tier in concept is represented by precursor civilizations in modern science fiction.

Putting aside the use of science fiction as a means of elaboration, we first and foremost see that humankind has ascended at an accelerating rate. It took us ~190,000 years to go from Tier 1 to Tier 2, yet only 12,000 years to go from Tier 2 to Tier 6, the tier we remain in presently. And while all of this is an impressive reflection of our capabilities, we've only come far enough to be forced to take a leap, for a critical attribute of Tier 6 is that it is inherently precarious.

Due to exponential population growth and the environmental changes and resource scarcity that come with it, a civilization can only stay in Tier 6 for a limited time. It either ascends, or it falls to resource conflict, potentially an extinction-level consequence in the nuclear age. But we also see that there lies great potential for our
future in terms of what we may be able to accomplish. Beyond social and economic
harmony, a scarcity-free world provides the catalyst for our ascension to a Tier-7
civilization, a goal I hold no greater hope for. Universal Energy gets us there on the
energy and resource end. Yet we still need another angle covered to get there on the social advancement end, involving what this writing refers to as Advanced Infrastructure.

Simply stated, Advanced Infrastructure is the next evolution of our social framework in
terms of technological ascension, a scenario that is likely familiar to many of us by now.
People alive today have seen dirt roads turn to highways, propeller planes turn to
commercial jetliners, rotary phones turn to smartphones, 8-bit computer systems turn to
inexpensive laptops with many times more computing power than all of NASA had during the first moon landing, and perhaps most impressively, the construction of CERN's Large Hadron Collider.

Technological ascension has caused problems, true, but it has also provided many more solutions and breakthroughs, all the more so in a society powered by Universal Energy. Of the examples of how, the most important lie within civil engineering, transportation and aerospace, and we'll elaborate on each one in that order.


Energy and resource production are vital to our society, yet those functions are not the
only things that make our society possible, nor are they the sole factors to what makes a
society strong or great. That honor in large part goes to social infrastructure: what we
can build, how we build it and how long it lasts. These are areas that Universal Energy
can improve and even revolutionize to great public benefit, which is especially
important since these areas have seen plenty of neglect over recent years.

Of the applications for how this can happen, the first concerns public works projects in general, namely how Universal Energy can allow them to be executed not only better, faster and less expensively, but also on larger scales.

To begin, we recall that repairing the decaying infrastructure across our nation is expected to cost trillions of dollars with today's tools and methods, even in the most conservative estimates. These are repairs that need to be made, yet in many ways we don't have the money to pay for them (to say nothing of the lack of political willpower). However, as with energy and resources, technology provides an opportunity to solve the problem for us by leapfrogging limitations cost-effectively.

How exactly? First, let's cover some givens:

Most heavy machinery today is powered by diesel fuel, which makes fuel a large expense of any construction project. And while diesel engines have legendary reliability, supporting components (pumps, cooling mechanisms, belts, hydraulics) generally do not, and their failures lead to delays and additional costs.

Conversely, electric construction equipment is mechanically simpler and avoids many of these complications while delivering the same standard of performance as its diesel counterparts. Further, as electricity (and hydrogen) sees dramatic cost reductions by way of Universal Energy, energy shrinks significantly as a construction expense, especially since the costs of fuel transportation, storage and security are also spared.

Another major construction expense is building materials, which command large percentages of project budgets. As a result, cost-cutting efforts can lead to the selection of cheaper materials at the expense of quality, contributing to the kind of infrastructure decay we're seeing presently.

While perhaps not to the extent of energy, material costs would be reduced to varying degrees by Universal Energy, allowing construction projects to procure better materials for less. This would enable us to build lighter and stronger structures at lower prices than we can today, and, as a structure's maximum size is limited largely by strength-to-weight ratios, increase the scale of what we are capable of building.
Of the advancements and cost reductions summarized, their value becomes greater than the sum of their parts once computer modeling, 3D printing and factory prefabrication are engaged. These technologies have been around for only the past decade or two, meaning that the overwhelming majority of structures in our society were built without computer aid, and anything before the late 1970s didn't even use a calculator.

Today, anything we build can take advantage of architectural software that allows us to design structures virtually, providing engineers with 3D representations of what they're constructing along with highly accurate predictions of material requirements and load/scale limits. This information can be used to prefabricate structures on larger scales, especially if they are engineered with modularity and standardization in mind. We saw previously how we can do this to build houses, commercial buildings and advanced energy systems, but this concept applies to larger-scale construction as well.

We can prefabricate jetliners and LFTRs; why not bridges, tunnels, apartment buildings and skyscrapers? Aerospace-grade engineering carries the highest requirements for quality and reliability, and today we can assemble a complete 787 jetliner every three business days. In a world powered by Universal Energy, what else could we build, and to what scale?

Within civil engineering, examples include:

Next-generation roads, bridges and tunnels. Paved roads, highway networks, bridges and tunnels rank among the greatest marvels of human engineering, revolutionizing travel, transportation and commerce on scales small and large. Earlier, we saw how we could advance roads through solar surfaces, allowing us to generate tremendous energy from roads while significantly reducing the time and expense surrounding road construction.
Yet this is not the limit of what we can accomplish in this area. In many ways, roads are only as useful as their ability to overcome geological obstacles, something made possible only through bridges and tunnels: accomplishments we don't often think about when impressive structures come to mind. This perspective warrants a second look, as their value is greater than we might think. For example:

While not the longest bridge in the United States, at 4.8 miles the Chesapeake Bay Bridge is one of the most important, as it connects Delaware and Maryland's Eastern Shore with the Baltimore-Washington Metropolitan Area. Approximately 25.6 million vehicles travel on it every year, each one saving time and fuel that would not have been saved had the bridge not existed.

How much? Assuming each of these 25.6 million vehicles traveled between Washington, DC and Dover, Delaware, they need only drive 93 miles over 1.8 hours if they use the bridge. If not, they would need to drive 134 miles over 2.75 hours via I-95 (according to Google Maps). This means that over the past 10 years (assuming consistent traffic and 21 miles/gallon fuel economy), the Chesapeake Bay Bridge has collectively saved motorists roughly 10.5 billion miles of driving distance, 243 million hours of driving time (nearly 28,000 years) and roughly 500 million gallons of fuel.
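These savings follow directly from the route comparison above; a quick sketch of the arithmetic, where traffic volume and fuel economy are the assumptions stated in the paragraph:

```python
# Ten-year savings attributable to the Chesapeake Bay Bridge, from the
# route comparison above (93 mi / 1.8 h with the bridge vs. 134 mi / 2.75 h without).
vehicles_per_year = 25_600_000
years = 10
miles_saved_per_trip = 134 - 93       # 41 miles
hours_saved_per_trip = 2.75 - 1.8     # 0.95 hours
mpg = 21                              # assumed average fuel economy

trips = vehicles_per_year * years
miles_saved = trips * miles_saved_per_trip   # ~10.5 billion miles
hours_saved = trips * hours_saved_per_trip   # ~243 million hours
fuel_saved = miles_saved / mpg               # ~500 million gallons

print(f"Miles saved: {miles_saved / 1e9:.1f} billion")
print(f"Hours saved: {hours_saved / 1e6:.0f} million "
      f"({hours_saved / 8766:,.0f} years)")
print(f"Fuel saved:  {fuel_saved / 1e6:.0f} million gallons")
```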

The Colorado I-70 corridor splits the Rocky Mountains with a highway, allowing motorists to avoid slow and precarious mountain passes. The corridor is made possible through the 1.7-mile-long Eisenhower-Johnson tunnel, through which an estimated 276 million vehicles had passed (as of 2009) since it was completed in 1979. It takes approximately four hours to travel the 235 miles on I-70 from Denver to Grand Junction at the opposite end of the Continental Divide. Yet without the corridor, it would take approximately 8.6 hours to travel the 432 miles via U.S. Route 40.

To put those numbers in perspective, the highway and tunnel have saved each vehicle 4.6 hours of driving time and a driving distance of 197 miles. As roughly 12.4 million vehicles travel through the tunnel annually, we'll conservatively assume that from 1979 to the present, a total of 400 million vehicles have traveled through to Grand Junction at an assumed average fuel economy of 21 miles per gallon. That would mean this tunnel system has collectively saved drivers 79 billion miles of driving distance, 1.8 billion hours of driving time (210,000 years) and 3.7 billion gallons of fuel.
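The same style of check applies here, using the per-trip savings and traffic assumptions just stated:

```python
# Savings attributable to the I-70 / Eisenhower-Johnson tunnel corridor,
# using the assumptions above (400 million total vehicles since 1979).
vehicles = 400_000_000
miles_saved_per_trip = 432 - 235      # 197 miles
hours_saved_per_trip = 8.6 - 4.0      # 4.6 hours
mpg = 21                              # assumed average fuel economy

miles_saved = vehicles * miles_saved_per_trip   # ~79 billion miles
hours_saved = vehicles * hours_saved_per_trip   # ~1.8 billion hours
fuel_saved = miles_saved / mpg                  # ~3.7 billion gallons

print(f"Miles saved: {miles_saved / 1e9:.1f} billion")
print(f"Hours saved: {hours_saved / 1e9:.1f} billion "
      f"({hours_saved / 8766:,.0f} years)")
print(f"Fuel saved:  {fuel_saved / 1e9:.2f} billion gallons")
```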

Taken together with their given assumptions and time windows, these two public works projects alone have collectively saved motorists nearly 90 billion miles of driving distance, roughly 238,000 years of driving time and over 4.2 billion gallons of fuel.

Yet however impressive this is, these projects were built with technology from the 1950s-1980s: a far cry from what we have available today, which in itself is a far cry from the capabilities we would have through Universal Energy. With those capabilities, we would be able to increase the scale of the bridges and tunnels we can build, bypassing land features and connecting land masses in ways that were never before possible. How? Let's look at two concepts commonly known as megabridges and megatunnels:

Megabridges: as the name suggests, a megabridge is a bridge of large scale. Aided by 3D printing and prefabricated manufacturing, such bridges can be built from standardized pieces that are assembled at modular junctions, instead of spot-welding the structure by hand each time. Built with stronger and lighter-weight synthetic materials, a megabridge can span longer distances and support a wider frame (thus more lanes), as well as heavier loads. Megabridges can also carry both road and rail, increasing diversity of use and thus overall social utility.

There are megabridges already being planned today, with the previous two concept images respectively representing the proposed Fehmarn Belt Fixed Link (connecting Germany and Denmark) and the Sheikh Rashid bin Saeed Crossing megabridge in Dubai, which when completed will be the longest arch bridge in the world. But neither of these projects can yet utilize large-scale factory prefabrication or next-generation synthetics, meaning they are ultimately constructed ad hoc with less attractive material options than would be possible with Universal Energy. Future megabridges can avoid these constraints.

Instead of custom-engineering every component, the bridge can be designed on a computer, with mechanical calculations performed automatically, before being sent to a prefabrication facility that constructs components to be assembled on-site. What if construction crews didn't need to pour concrete, lay cable and steamroll asphalt? What if they could instead take prefabricated pylons, platforms, support arches and solar road panels built to standardized measurements, and assemble the bridge like a hobby kit, just on a larger scale?

This approach allows architects to expand their vision, as it reduces several of the problems with modern construction and material sciences, and not just through price. As civil engineers hash out the technical details of a bridge with ever-more sophisticated software at the design stage, they could rapidly determine what it would take to extend a bridge to greater scales, should the materials and manufacturing methods be present. As a result, their efforts could lead to a day when bridges eight to twelve lanes wide, at lengths upwards of 100 miles, enter the realm of possibility.
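As a toy illustration of the kind of check such design software automates, consider the textbook formula for midspan deflection of a simply supported beam under uniform load. Real structural models are vastly more sophisticated, and every number below is a hypothetical placeholder:

```python
# Midspan deflection of a simply supported beam under uniform load:
# delta = 5 * w * L^4 / (384 * E * I), the standard Euler-Bernoulli result.
# All inputs are hypothetical placeholders, not real bridge parameters.
def midspan_deflection(w, L, E, I):
    """w: load (N/m), L: span (m), E: Young's modulus (Pa), I: second moment of area (m^4)."""
    return 5 * w * L**4 / (384 * E * I)

# Example: a 50 m steel span (E = 200 GPa) carrying 200 kN/m, with I = 0.5 m^4
delta = midspan_deflection(w=200_000, L=50.0, E=200e9, I=0.5)
print(f"midspan deflection: {delta * 1000:.1f} mm")
```

Design software runs thousands of such checks across every member and load case, flagging any span whose deflection or stress exceeds code limits.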

Megatunnels: the megatunnel is the evolution of underground or underwater transportation structures. Of the megatunnels existing or in planning today, perhaps the best examples are the 30-mile Channel rail tunnel connecting England to France, the 33-mile Seikan tunnel connecting the island of Honshu with the island of Hokkaido (Japan), and the 35-mile Gotthard Base Tunnel under the Swiss Alps.

These tunnels are rightfully considered among mankind's most impressive accomplishments. However, the underwater tunnels share a trait that future tunnels might not need to share: they were built under the sea floor as opposed to on top of it. Underwater tunnel construction today requires digging beneath the sea bed because this is the only way we can keep the tunnel both reliably dry and structurally sound. And while the many bridges engineers have built over time have made them experts at anchoring pylons to the sea floor, building a submerged tunnel cost-effectively requires a material science capability that we currently lack. Through Universal Energy, this would change.

In the above concept, prefabricated sections of tube large enough to support trains are connected over a standardized series of mounting brackets fastened to the sea floor. Each piece would come with sealed bulkheads and, like the pipelines of the National Aqueduct, would be installed modularly, allowing for a rail tube of effectively indefinite length.

It's noteworthy that there are unique complications to building submerged tunnels that are not present with bridges, namely extreme water pressure. Building a submerged tunnel between England and France, for instance, is readily possible, as the water depth of the English Channel doesn't exceed 150 feet. Building a submerged tunnel from Tokyo to Beijing or London to New York is far more difficult, as water depths reach thousands of feet, where pressures are so great that steel structures can crush like a paper bag.

This would make lengthy tunnels anchored to the sea floor in deep water significantly more elusive, to say nothing of reaching the surface in the event of an emergency. As a result, engineers hoping to one day build submerged megatunnels have considered tunnels that float near the ocean's surface through a series of buoyancy control mechanisms and tethering cables anchored to the ocean floor, which would keep the tunnel close to atmospheric pressure, as exemplified by the following concept:

As a structure's effective weight is different when submerged than on land, the buoyancy of these tunnels can be calibrated to maintain a high degree of stability, strong enough to support vehicle and even high-speed rail travel. Norway is already considering building submerged tunnels to cross fjords in the near future, a model that could be extended through more ambitious projects over larger bodies of water. Universal Energy-underwritten energy cost and material advancements only bring this possibility closer to reality.
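The balance being calibrated is Archimedes' principle: the tube floats when the weight of displaced seawater exceeds the structure's own weight, and tethers take up the difference. A minimal sketch with entirely hypothetical dimensions:

```python
import math

# Net buoyant lift per meter of a submerged floating tunnel (simplified
# sketch; every figure here is a hypothetical placeholder, and tether
# loads, currents and traffic dynamics are ignored).
RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
G = 9.81                # m/s^2

outer_diameter = 12.0   # m, assumed external tube diameter
displaced_volume = math.pi * (outer_diameter / 2) ** 2  # m^3 per meter of tube
buoyant_force = RHO_SEAWATER * displaced_volume * G     # N per meter (Archimedes)

structure_mass = 90_000.0    # kg per meter (tube, ballast, payload -- assumed)
weight = structure_mass * G  # N per meter

net_lift = buoyant_force - weight  # positive lift -> tethers hold the tube down
print(f"net lift: {net_lift / 1000:.0f} kN per meter of tunnel")
```

A positive net lift keeps the tethers taut, holding the tube at a fixed depth near the surface; ballast can be adjusted until the lift sits comfortably within the tethers' rated capacity.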

Luminal communication networks. While perhaps not grandiose in physical scale, the information networks we have built over the past three decades rank among the most advanced infrastructure in history. Yet these networks are becoming ever more dated and second-rate, even though Americans pay far more for internet service than other western countries, and what we receive in return is of reduced quality.

This problem is due to a couple of factors (including corporate monopolies and broken politics), but one of the most significant is distance. The United States is a large country, and distance presents challenges to providing high-speed internet nationwide at low cost, and those costs must be paid again whenever outdated technology needs to be replaced. A solution to this problem is present in solar roads. We previously saw how Solar Roadways' conduit channels are designed to run utility lines, including those for communication. The same is true of the water transportation pipelines of the National Aqueduct. This gives us a natural platform to run internet cables effectively over any distance, as the electricity generated by water pipelines and solar roads could power amplification systems to prevent signal loss.

Instead of traditional cables that transfer data through copper wires, we can now install fiber-optic cables that transfer data as pulses of light through strands of glass fiber, providing much faster transfers over greater distances. How much faster? A typical high-speed internet connection today yields between 10-30 megabits per second, whereas a fiber line can exceed 1,000 megabits per second: 30-100 times faster.
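That difference is easy to feel in practice. A quick comparison of download times at the speeds quoted above:

```python
# Time to download a 5 GB file at the connection speeds quoted in the text.
FILE_GB = 5
file_megabits = FILE_GB * 8 * 1000  # 1 GB ~= 8,000 megabits (decimal units)

for name, mbps in [("typical copper (20 Mbps)", 20), ("fiber (1,000 Mbps)", 1000)]:
    minutes = file_megabits / mbps / 60
    print(f"{name}: {minutes:.1f} minutes")
```

At 20 Mbps the download takes over half an hour; at fiber speeds it finishes in well under a minute.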

Google is already making headway on implementing the next generation of internet through its Google Fiber program, currently planning to roll out service to 34 cities within the next few years. But that's just the start of how extensive ultrafast internet could become. If internet service were embedded throughout national road networks, we'd effectively turn the country into a giant antenna. This possibility becomes especially significant once Universal Energy-driven cost reductions are applied, because running fiber cables through above-ground energy infrastructure is far easier and less expensive than today's method of running cables underground. As a result, this makes it easier to provide internet service as a municipal utility, dropping costs even further.

If municipal internet were provided throughout road networks, we would effectively have nationwide wireless internet, making existing technologies obsolete. We wouldn't need cellular service because we could make and receive calls, video chats and SMS messages via comprehensive Wi-Fi. People and businesses wouldn't generally need a dedicated line because they could connect to the local fiber service that's far superior to most anything we have available today. Emergency services wouldn't need radio broadcast equipment as they could communicate instantly over the wireless network. And so on.

Legacy services such as cellular and radio could of course still exist as fallbacks, or in case people still wanted to purchase them separately, but with comprehensive municipal internet made possible by Universal Energy, they wouldn't be requirements, nor the only options available to consumers in a given market. This would provide the backbone of our next level of advancement into the information age, not just for a segment of the population, but for all of us.

Large-scale infrastructure. We've so far paid a good deal of attention to the concepts of prefabrication and 3D printing, specifically how we can use them to build advanced systems faster, better and less expensively. Megatunnels and megabridges are good examples of how we can apply those concepts to larger-scale infrastructure, but there are promising applications beyond that.

Last chapter, we saw how it's possible today to prefabricate houses and even tall buildings, recalling one example of a building that was fully assembled in 15 days. Yet impressive as that is, it still reflects today's technology and material limitations. These would not be present with Universal Energy, allowing for expansions of scale. The natural extension of this expansion would incorporate larger buildings, such as the prefabricated structure shown below:

This housing complex in Abu Dhabi looks similar to any industrial-grade construction project, with the exception that nearly every component of its assembly came factory-prefabricated. Including the concrete foundation, ground leveling, finish work and connection of utilities, this entire 2,580 m² (27,770-square-foot) structure was completed in just four months, from June to September 2008.

For comparison, the average time to construct a single-family home in the United States is between six and 11 months from the issuance of a building permit. That's more than twice as long to build a structure roughly one-tenth the size of the above housing complex. Structures such as these are attractive options for reducing housing shortages, a problem that is expected to worsen as billions of people continue to migrate to cities.

Then we have buildings on even larger scales. We saw in the previous chapter how Broad Sustainable Building assembled a 30-story tower in 15 days. But that's only a pioneering example of the potential of prefabricated structures. Broad Sustainable Building has since outdone itself by building a 57-story skyscraper in nineteen days, which at three stories per day is 50% faster than its previous pace. Search YouTube for "How to build a 57 floor building in 19 days" for a time-lapse video.
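The improvement is simple to quantify from the reported figures:

```python
# Construction rates for Broad Sustainable Building's two reported towers.
rate_t30 = 30 / 15  # stories per day: 30-story tower built in 15 days
rate_j57 = 57 / 19  # stories per day: 57-story tower built in 19 days

speedup = (rate_j57 - rate_t30) / rate_t30
print(f"{rate_j57:.0f} vs {rate_t30:.0f} stories per day: {speedup:.0%} faster")
```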

Broad Sustainable Building's future plans grow even more ambitious. It had originally planned to erect a 220-story skyscraper called "Sky City" that was intended to house more than 4,000 families and also feature office space (nine floors), restaurants (30+ floors), basketball courts (six), tennis courts (10), and junior and middle schools. If built, it would be one of the tallest buildings in the world, with an estimated construction time of 90 days. However, the project was delayed after the original building site was found to be too environmentally sensitive, and Broad Sustainable Building has publicly stated its intent to build the project in another location. Considering its past accomplishments, one would be forgiven for erring on the side of optimism about its future success.

It's important to keep in mind that these are simply the first variants of large-scale prefabricated buildings. If we think back to how far we've come in other areas of technology in just the last 15-20 years, there's no telling how much more advanced this type of construction can become in the future, especially if Universal Energy-underwritten advancements were incorporated. Modular, standardized and prefabricated construction has huge potential to revolutionize how we build things, allowing us to erect structures far larger and far faster than we can today. Combined with everything else we've discussed thus far, this stands to transform humanity's approach to cities, and how we live within them.

The Supercity. The evolution of cities has marked our advancement as a civilization, and as cities have evolved, so have they expanded in size and sprawl, drawing people by the hundreds of millions to their attractions and culture. Yet in terms of command of technology, their sophistication has accelerated only recently, as provisions like running water, electricity or citywide mass transit didn't exist on a large scale until the early to mid-1900s, even in areas like Manhattan.

Today, even the most modest city dwellings have those amenities, as well as means of transportation and communication that would have been unthinkable for the past 99.99% of human existence. In several respects, the lowest income classes today live better than all but the wealthiest did throughout history. It was technology that made this possible, and combined with the other aspects of civil engineering we've reviewed thus far, technology can take us to the next stage of city living: a concept commonly known as the supercity.

Conceptually, a supercity is an urban center that has advanced its population, sophistication and modernity to provide an unprecedented quality of life. As it has no fixed definition within the public lexicon, we'll define a supercity as meeting all of the following six criteria:

1. Population: a supercity has a total population of 10 million or greater, or the ability to readily scale to support that population. This is the only requirement that a supercity shares with a megacity, as a megacity is presently defined only by having a population of greater than 10 million people.

2. Energy and resources: although integrated with external power grids and resource
production systems, a supercity is able to produce the majority of its resources through
internal infrastructure (Universal Energy).

3. Utilities as public provisions: taking advantage of inexpensive energy and simplified installation of utilities through solar roads, a supercity provides electricity, water and heat, as well as high-speed internet, as publicly funded municipal services. The city provides these utilities as a function of the municipality, funded not-for-profit via local taxation.

4. Advanced construction and modernity: building new structures and repairing existing ones are top priorities of supercities and their leadership. Solar roads are the primary means of road construction and maintenance. Buildings, bridges and tunnels are rapidly constructed using modular and prefabricated methods with high energy efficiencies. Supercities, therefore, reflect a high percentage of new, modern infrastructure.
5. Transportation: a supercity has implemented the infrastructure for advanced
transportation technologies described in the next section of this chapter: maglev
rail/Hyperloop, autonomous vehicles, etc.

6. High quality of life: a supercity features a happy, healthy and strong population as a core philosophy, and as such provides excellent education, healthcare, employment and recreation at low cost. Very small percentages of the population fall below any state-defined poverty line, and the city's quality-of-life index ranks consistently high, as do life expectancy and median income.

As to how these requirements can be met, recall that polarized distribution of wealth becomes a far less serious social problem if all necessities of life can be inexpensively provided through technology. This contrasts with today's urban environments, where most areas range in quality from fantastic to poor, usually with good and mediocre mixed somewhere in between.

The fantastic areas have everything one could want with a great quality of life, whereas the poor areas are lacking in essential amenities, translating to a lower quality of life. Poor areas are built to the quality of the poor because those areas don't have enough money to build things better. Yet advances in technology can allow us to build better infrastructure less expensively than we can today.

This would make higher-quality amenities more affordable to poorer urban areas, enabling them to increase their quality of life without necessarily having to spend more money than they were previously. Essentially, Universal Energy would allow things to be built to the quality of the fantastic at the cost of the poor. This powers a gentrification mechanism that's accessible to all income classes, supporting businesses, venues, attractions and thus jobs, allowing any given area of a city to thrive and, in turn, the city as a whole.
As a result, this affords people a greater sense of community, translating to more effective community policing and reduced crime, as people enjoy living in higher-quality environments and thus have a greater interest in seeing that quality maintained. Repeating this approach throughout all areas of a given city raises the floor and in turn enables a city to devote greater resources to continual advancement, ultimately meeting the criteria to become a supercity.

Supercities are closer to reality than one might think. How so? Our advancement of cities has been accelerating rapidly over the past 100 years, and tomorrow's technology is only going to accelerate things faster. For example, take a look at the New York City skyline over the past century, starting from 1914:




In this 100-year period (a blink of an eye in historical terms), the New York City skyline has grown immensely in both scale and sophistication, and it did so, once again, within today's technology, manufacturing and construction limitations.

With the technical breakthroughs we've discussed thus far, we can advance urban construction at proportionally similar, or more likely reduced, costs. Combined with the fact that this would enable us to prefabricate and rapidly construct effectively any type of urban infrastructure, we can grow cities to scales that are not possible today. The question now becomes: what does the skyline of New York City, or any other, look like in a world powered by Universal Energy, 20, 50, or even 100 years from now? Futuristic concepts notwithstanding, there's zero reason we can't reach them by walking this path:

These images represent the future this path brings us, because this is the path we are empowered to walk by way of advanced technology. None of this would be outside the realm of feasibility in a world powered by Universal Energy. Nor would building next-generation skyscrapers simply by designing them on a computer, pressing the print button and assembling them on-site, like life-sized Legos, as with prefabricated houses.

This is especially important because humanity is expanding quickly in population, and according to the United Nations, billions more people are expected to flock to cities within the next few decades. Roughly half of the planet lives in cities today; by 2050, that number is expected to exceed 70%. These environments must be able to scale in size to accommodate them. Supercities can do so while also supporting internal energy and resource production, advanced systems of transportation and rapid construction of social infrastructure. Using these improvements as building blocks, along with excellent education and social amenities, the people within these cities would have the world at their fingertips, powering a strong economy with a perpetual drive to advance upward.


Alongside tools to communicate instantly, transportation technologies rank among the most transformational achievements of humankind. Up until the invention of the steam locomotive, the only options for moving people or goods over distance were horses and sailboats. Today we have cars that can carry us thousands of miles, and aircraft that can reach anywhere on the planet within a matter of hours, advances we achieved fewer than 100 years ago.

These achievements have revolutionized our way of life and exposed us to cultures and places in ways that brought us far closer together than we had ever thought possible. As transportation is made possible through technology, advances in technology allow us in turn to evolve transportation to higher levels, including levels we once thought restricted to works of science fiction.

One of the first improvements is already here: the production of vehicles and mass-
transit systems that run on sustainable fuels, which Universal Energy extends through
electricity and hydrogen. But the potential goes higher in a world with unlimited
energy, sustainable resources, advanced manufacturing methods and synthetic
materials that are both lightweight and ultra-strong:

Ultra-efficient, self-driving vehicles. We have been making major headway over the past few years with autonomous vehicles, vehicles able to drive themselves without any human interaction. They work through arrays of sensors that instantly relay data (road direction, location of other vehicles, obstructions and weather conditions) to the vehicle's computer, which handles the actual driving and steering.

As this data is processed instantaneously, the vehicle reacts instantaneously as well, far faster than the reaction time of human beings. This consequently makes autonomous vehicles exceptionally safe, especially since they are programmed to follow all speed limits and obey all rules of the road.

Currently, the most extensive autonomous vehicle program in the nation is run by Google, although Tesla, Audi and several other car manufacturers are quickly catching up. Google's program has completed over 700,000 autonomous-driving miles with 12 separate vehicles without a single accident (save one where a vehicle was manually operated by a person). To compare this safety record with human drivers in the United States: 254.4 million vehicles drive 3.03 trillion miles per year (according to Wolfram Alpha) and are involved in 10.8 million accidents (as of 2009 census data), averaging one accident for every 23.5 vehicles, or one for every 280,555 miles driven.
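Those averages come from two simple divisions over the figures quoted above:

```python
# U.S. accident-rate figures quoted in the text (2009-era data).
vehicles = 254.4e6          # registered vehicles
miles_per_year = 3.03e12    # total miles driven annually
accidents = 10.8e6          # accidents per year

vehicles_per_accident = vehicles / accidents      # ~23.5 vehicles per accident
miles_per_accident = miles_per_year / accidents   # ~280,555 miles per accident

google_miles = 700_000  # Google's accident-free autonomous miles at the time
ratio = google_miles / miles_per_accident

print(f"one accident per {miles_per_accident:,.0f} miles; "
      f"Google's run covers {ratio:.1f}x that distance without one")
```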

While Google's sample size is smaller, its perfect record to date covers roughly two and a half times the average distance between accidents. This is especially important because those 10.8 million annual accidents claim the lives of roughly 33,000 people every year and injure 2.36 million others.

While safety is the most important factor when considering the benefits of self-driving cars, it's not the only benefit of note:

As all speed limits and road rules are obeyed, self-driving cars are highly efficient
as they can maintain a uniform speed without having to constantly accelerate or
decelerate in reaction to other vehicles (assuming all others on the road are also
autonomous). This makes self-driving vehicles more energy efficient and allows
them to help relieve traffic congestion.

Of the 33,000 road fatalities every year, nearly a third come from drunk drivers, and presumably the same is true of the 2.36 million injuries that occur annually as well. Self-driving cars make this problem go away effectively overnight. While legislators might be hesitant to allow intoxicated drivers to rely on self-driving cars to get home safely (especially since they profit from DUI penalties), considering the near-perfect safety record of autonomous vehicles presently, their benefits outweigh the risks. And for the record: the software that drives autonomous cars is functionally analogous to the autopilot software that has been used to fly and land commercial jets for years with a nigh-perfect safety record.

This is the state of current technology. If Universal Energy's advancements are applied, our capabilities increase accordingly. For instance, instead of having self-driving vehicles track road surfaces via internal sensors alone, they could also use the built-in Wi-Fi or other guidance mechanisms of solar roads to navigate, providing system redundancy and security.

Wireless capabilities of autonomous vehicles could also be scaled to greater extents within the vehicles themselves: music, navigational data, hands-free communication, etc., as well as the instant notification of emergency crews in the (unlikelier) event of an accident, including the extent of damage and the number of passengers injured. They are essentially the vehicles of science fiction past with most of the bells and whistles, with one exception: the ability to fly.

Yet it turns out we can have that as well.


The price of passenger vehicles has steadily dropped over time, and they are affordable to most Americans with steady employment. The airplane, however, remains unaffordable to the majority of people even though it has been around nearly as long.

The reasons for this make sense: airplanes are more difficult to operate than passenger vehicles and are thus more dangerous, the mechanical tolerances for aerospace are more stringent than for land-based motor vehicles, and the demand for small aircraft is substantially lower. Combined, these factors keep the price of aircraft high.

But technology has provided new approaches to light aircraft, which we have so far
seen applied to drones in the form of quadcopters.

As opposed to helicopters, which use two or four blades on a single main rotor, quadcopters have four rotors at opposing, balanced points, removing the need for a stabilizing tail rotor. This makes quadcopters extremely stable, which, in addition to the capability to hover and to ascend and descend vertically, also makes them both more maneuverable and much easier to fly than traditional aircraft.

As their designs are simpler than helicopters', fully functional models can be built with 3D printers, minus a few core components (modular motors). Most quadcopters are currently drone-sized, as they are powered via electric motors as opposed to gas engines, a requirement since all rotors need instant torque for the quadcopter to fly effectively. Yet today's batteries are too heavy to power large aircraft. That calls for an ultra-light battery with high storage capacity, which is exactly what we have with graphene.

As we saw previously, graphene is an ultra-strong, ultra-light and ultra-conductive material that Universal Energy can cost-effectively synthesize. With it, not only can we store the requisite energy to power a large quadcopter with minimal added weight, we can actually integrate the storage medium into the fuselage while ensuring uniformly high strength. This would make quadcopters light enough to increase in size to transport people and materials.
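The scale of the challenge can be sketched with classical rotor momentum theory. Every figure below is a hypothetical assumption chosen for illustration, including the energy density attributed to a graphene-based battery:

```python
import math

# Rough hover-power estimate for a passenger quadcopter via ideal rotor
# momentum theory: P = T^1.5 / sqrt(2 * rho * A). All figures are
# hypothetical assumptions for illustration only.
RHO_AIR = 1.225                   # kg/m^3, sea-level air density
G = 9.81                          # m/s^2

mass = 600.0                      # kg: airframe + battery + passenger (assumed)
thrust = mass * G                 # N of thrust needed to hover

rotor_diameter = 1.5              # m per rotor (assumed)
disk_area = 4 * math.pi * (rotor_diameter / 2) ** 2   # total area of 4 rotors

p_ideal = thrust ** 1.5 / math.sqrt(2 * RHO_AIR * disk_area)  # W, ideal hover
p_real = p_ideal / 0.7            # W, assuming ~70% overall efficiency

flight_hours = 0.5                # assumed flight time on one charge
energy_kwh = p_real * flight_hours / 1000
battery_kg = energy_kwh * 1000 / 500   # assuming a 500 Wh/kg battery

print(f"hover power ~{p_real / 1000:.0f} kW; "
      f"{energy_kwh:.0f} kWh per flight; "
      f"~{battery_kg:.0f} kg of battery at 500 Wh/kg")
```

At the roughly half-as-dense lithium-ion cells of today, the same flight would need about twice the battery mass, which is the weight problem the text describes graphene potentially solving.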

The concept of a quadcopter large enough to carry people has already been proven to work (as evidenced by the real-life pictures above of the Ehang 184 personal quadcopter). From there, the next obstacle to building a personal or industrial quadcopter, besides weight and energy storage, is an engineering design that makes the quadcopter efficient and cost-effective enough to work for transportation, ideally one that can drive on the ground as well. To that end, the following images are of a remote-controlled quadcopter mini-vehicle that is fully functional and exists today:

Of these images, only the bottom is conceptual; the rest are real-world. This vehicle has proven able both to drive over off-road terrain and to navigate sophisticated obstacles through steady flight. At approximately two feet long, it is made with polycarbonate materials (the same used for bulletproof glass) that can withstand the force of a crash or a rapid vertical descent.

The only obstacles to making a model large enough to transport humans or heavy cargo
are 1) ultra-strong, lightweight materials at an acceptable cost, 2) lightweight energy
storage mechanisms and 3) manufacturing ability all obstacles we can solve through
graphene batteries and improved 3D printing/prefabrication with advanced synthetic
materials. If we can prefabricate an airplane, LFTR, road surface, bridge or skyscraper,
making a personal quadcopter is well within our capabilities.

With the time-tested code that powers the flight systems of the vehicle, flying it is as simple as driving a car. Large quadcopters today (50+ lbs.) are already flown long distances and through precise maneuvers using a flight control system consisting of two joysticks and a first-person video screen, far less than the array of functions necessary to fly a helicopter or airplane. This would be no different at larger scales.

A variant large enough to transport humans or cargo would likely have more options and safety features, but those requirements are by no means obstructions. We are only one technological step away from having personal flying vehicles that are safe, strong and easily flown, vehicles that can also fly between destinations via autopilot programs, which, as stated before, already fly and land commercial aircraft today. While certain automobile drivers today might not inspire the requisite trust to pilot flying vehicles, that concern can be addressed through lengthier training and more stringent licensing requirements.

It's difficult to overstate how much this could improve how we live and move, especially since the distance between destinations as the crow flies is always shorter than meandering roads. This can dramatically decrease the response time of emergency crews, help deliver aid rapidly to areas without road access and, in the case of drone-sized quadcopters, even deliver consumer goods.


Maglev rail. The state of American rail is poor compared to the rest of the world, and many of the latest attempts to revive it reflect the typical competence of today's government, which, to be kind, we will call insufficient. Additionally, several members of Congress have been convinced by the petroleum lobby that Americans have no desire to see oil-free electric trains transporting people and goods in large numbers, instead preferring a transportation standard of gasoline-consuming personal vehicles (see: legal bribery).

These are among the contributing reasons why we have not embraced the revolutions in rail technology that other countries have enjoyed, but now it's time for us to catch up. The most advanced rail technology today is maglev, short for magnetic levitation, which uses electromagnets to levitate and propel a train on a track, shown as follows:

As maglev propulsion is frictionless, maglev trains can travel at high speeds, exceeding 300 mph (482 kph) in some cases. This technology exists today, but it does so with today's energy and manufacturing prices, factors that would shrink with Universal Energy. With modular, prefabricated train cars and track systems, we can build train lines next to solar roads to have an insulated mass transit system with constant connectivity to power sources. And if these lines are built on prefabricated pylons next to roads, that would also remove the need to purchase additional land for their construction, further reducing overall costs.

Assuming we are ultimately successful in showing corrupt politicians the door, large-
scale high-speed rail becomes an immediate possibility, which not only provides for the
transportation of people, but also for the transportation of consumer goods.

Presently, goods must be shipped by vehicle, rail or aircraft, and rail is by far
the most efficient of these methods. A nationwide rail network that can transport
goods on the order of 300 miles per hour would be a four-fold improvement over
most domestic rail technology. This would lower costs and delivery times, returning
immeasurable economic benefits.

Yet even though that speed is impressive for a train, maglev technology can
theoretically propel trains much faster. The biggest obstacle to doing so is air
resistance, which at high speeds is the most important consideration for safe operation.
To exceed those speeds, we'll need something that extends the megatunnel technology
described previously, which brings us to something called the Hyperloop.

The Hyperloop. Originally envisioned by PayPal, SpaceX and Tesla founder Elon
Musk, the Hyperloop is a theoretical extension of the pneumatic tube transport systems
used in banks, where a capsule rides a wave of air inside a tube from one location to
another. With the Hyperloop, instead of a capsule, the tube would transport a
train, integrating maglev technology for high-speed travel.

As with underwater megatunnels, the Hyperloop would operate partially
depressurized as opposed to fully evacuated, recognizing the inherent problems with
maintaining a sealed vacuum environment over long distances. Reduced pressure
translates to reduced resistance, which would permit the Hyperloop to travel at high
speeds. Just how high is not fully certain. When an object is moving very fast inside a
tube, the primary engineering limitation is friction, which an air cushion would greatly
reduce.

How Elon Musk and SpaceX propose to provide that cushion is also how they propose
reducing air pressure enough for the train to travel at high speeds: mount a high-
strength air compressor at the front of the train to push air around the vessel to the rear.
This would remove forward-facing air resistance and at the same time provide a
frictionless air cushion around the train body. In theory, this could allow the Hyperloop
to travel at thousands of miles per hour.

The concept is envisioned to be prefabricated and built on pylons by design, reducing
the need to purchase land for tracks and thus overall construction costs. Integrating
solar roads or solar-enabled water transportation pipelines would also provide constant
power for air compression systems and the system as a whole, which, if built with
advanced synthetics, would be stronger and lighter than most materials commercially
available today. This would further increase the potential of maglev rail and its
corresponding social benefits, adding yet another revolutionary advancement to human
travel. Indeed, the Hyperloop has already started construction and has demonstrated
initial successes in early tests.

These areas of transportation, like many of the possibilities opened by Universal
Energy, are simply the beginning. We can extend these advancements to effectively
anything we imagine, from our day-to-day lives to cutting-edge aerospace that can
revolutionize not just travel within our world, but well beyond it.

This is the nature of advanced infrastructure as this writing refers to it, and the
realization of the future it brings is now at our command. When I say at our
command, I mean that quite literally. Nothing discussed within this writing is
unfeasible, and none of it is exaggeration. We can have this, we can have all of this, if
we devote a unified effort to reaching it. The question is not one of engineering
feasibility, as we know this is feasible. The question is now one of will: the will to build
the world we have always wanted but never before could have, until we made the
commitment to reach it together, to see our destiny fulfilled, our pedigree renewed and
our promises kept.


Humanity needs something brighter for our world and our future. A world without
resource scarcity and senseless, purposeless suffering, a world spared of the petty
conflicts that we have consumed ourselves with for millennia. People deserve to wake
up in the morning and not see another horror show, to go to bed each night and feel
legitimately hopeful for the days ahead. We deserve to strive for higher aspirations, for
us to embrace our full potential and reach a harmonious plane of existence with our
planet, with our environment and most of all with each other.

These statements are not and must not be clichés, brushed off as flowery language or
political lip service. They are core perspectives. If there is any meaning to life, if life is
precious and worth nurturing, worth empowering and worth saving, then there is no
greater goal we should have for ourselves. There should be nothing more important
that we would see achieved. This is a dream that's been held for time eternal, and its
realization has been a long time coming. It exists in contrast to its polar opposite: a
future rife with destruction, misery and conflict, a future that must never come to pass.

The dichotomy of these two futures, one of our success, the other of our failure, has
haunted our dreams for many years. That is why this writing exists: to spark a
discussion on how to construct a framework that can build for us a future worthy of
who we are and who we wish to be. Made possible only by the efforts of those who
have sacrificed much, who have sacrificed everything, to get us to where we are now,
this framework and the future it can build is now within our grasp.

I believe that we are finally ready to reach it, and to be made kinder, stronger and more
brilliant for it. To finally walk a path of actual, tangible progress and illuminate our
way through a united strength of purpose. That is the vision this writing shares here
and seeks to share henceforth, yet it is only an option for something better, simply an
alternative to our current state of affairs.

It is an option that requests choice, and it now requests yours.

But choosing this option must also come with something that technology can never
provide: your focus and your effort. This future will only be realized if we choose to
make it a reality; thus this future needs advocates and it needs allies. This future needs
you. It needs you and all of us to stand for it because it cannot stand for itself, and it
will only be made real by the collective result of our efforts as individual people.

To be certain, this is an effort that comes with challenges. There are interests that would
seek to deny us this goal, and oppose any action toward building a better world out of
a lust for power and greed. But today we have the tools to simply change the rules of
the game they once dominated; today we no longer need their permission to evolve our
circumstances to a state that sees us collectively thrive. With these tools, we can build a
future that we truly deserve, one that at last brings hope to a world long since
forsaken, a future that has thus far been denied to every prior generation of our species.

No longer should it be denied to ours.

Look again at that dot. That's here. That's home. That's us. On it everyone you love, everyone
you know, everyone you ever heard of, every human being who ever was, lived out their lives.
The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic
doctrines, every hunter and forager, every hero and coward, every creator and destroyer of
civilization, every king and peasant, every young couple in love, every mother and father,
hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every
"superstar," every "supreme leader," every saint and sinner in the history of our species lived
there -- on a mote of dust suspended in a sunbeam.

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all
those generals and emperors so that, in glory and triumph, they could become the momentary
masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one
corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent
their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our
posturings, our imagined self-importance, the delusion that we have some privileged position in
the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great
enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come
from elsewhere to save us from ourselves.

The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near
future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the
moment the Earth is where we make our stand. It has been said that astronomy is a humbling
and character-building experience. There is perhaps no better demonstration of the folly of
human conceits than this distant image of our tiny world. To me, it underscores our
responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot,
the only home we've ever known.

- Carl Sagan

Chapter Eleven: A Future Worth Having

The purpose of this writing is to provoke thought and the consideration of ideas. It's
meant to be a conduit for a conversation among ourselves on how we might solve the
problems of our time and build a better world. The ideas it proposes in furtherance of
that end are numerous, and they pull few punches. But that is because they will work.
They will work because their purpose is to work. Their purpose isn't to sell a hidden
agenda; their purpose is to sell a transparent agenda: how we can make tomorrow
brighter than today.

Universal Energy is a vehicle that can actually get us there, replacing the afflictions of
our world with a new framework that removes the limitations and constraints of the
way our civilization currently works. Given its existence, I suppose you might wonder
why I took it upon myself to cultivate these ideas and make this writing a reality. As
you've honored me with your time to read it, it's not a question I can deny you in good
faith.

So, what's my kick?

I've asked myself that question a thousand times, and every answer that flashes in my
mind has some degree of truth to it. Because we can. Because we must. Because we
owe it to those who came before us. Because we owe it to those who will come after.
Because there's far too much suffering in our world and far too little hope to fix it.
Because I know we can have something better. Because I know we deserve something
better.

But above all else, the answer is the reality of our potential, our true potential,
juxtaposed with something called The Great Filter.

To help explain what that is and why it's important, it's necessary to first mention
the Fermi Paradox, a question that has stumped scientists for decades.
Succinctly defined, that question is:

How can our universe, in all its unimaginable vastness, present such an immense likelihood for
sentient life, yet at the same time we can't seem to find it?

I'll phrase this another way to help clarify:

Our planet, Earth, orbits our Sun along with seven other planets, comprising our solar
system. It is only one out of roughly 300 billion solar systems in our galaxy, the Milky
Way. Our galaxy, itself, is only one out of an estimated 2 trillion galaxies in our
observable universe by the latest estimate. Put another way: if we were to
take every person alive today and send them each to a unique galaxy, we'd only be able
to visit 0.37% of them. As each galaxy has hundreds of billions of stars and there are
trillions of galaxies, the odds of Earth being the only planet to support life in the
universe are vanishingly low.

Presently, scientists estimate that our universe contains at least 100 octillion stars (an
octillion is a 1 followed by 27 zeros). If only one out of every million stars had orbiting
planets that sustained life, that would still leave 100 sextillion planets that did. That's
100 billion individual groups of one trillion planets, with roughly 300,000 such planets
in our galaxy alone, for the record.

Think about that for a second.

It's so impossibly unlikely that we're alone, yet at the same time, we haven't heard from
another lifeform beyond our planet, not so much as a peep. That is the Fermi Paradox,
in a nutshell. But within that paradox are theories as to why this is the case, which is
where the notion of The Great Filter comes in.

The Great Filter is a theory that all life faces a threshold it must cross in order to
ascend beyond its planet and survive for the long term; to truly advance as a species, it
must overcome a series of obstacles that would otherwise stop its ascent or bring about
its destruction.

Consider it another way, if you will: if all of Earth's history were compressed into one
365-day year, humanity emerged at 11:55 PM on New Year's Eve, and we only reached
the modern era at about 10 seconds to midnight. By 7 seconds to midnight, we had
created the means to cause our own extinction. By 5 seconds to midnight, we will have
run out of the resources that sustain our rapidly expanding population. And if this
problem remains unsolved, it will destroy us before the clock strikes 12.

That is the Great Filter. It is something that we are facing right now, today.
And it is our generation, our time, that is tasked with passing it.

Presently, short of embracing a transformational initiative on the scale of the
frameworks proposed by this writing, the test that the Great Filter lays before us is a
test that we are poised to fail. If we fail, then within our lifetimes, we will see what
happens when short-sighted politicians execute short-sighted strategies in a world
where there are five times as many nuclear weapons as there are cities.

Thus, as a people, we are left alone to pass through this gauntlet of unforgiving truth: we
will either take the next step as a civilization and become resource-independent, or the
human race will run out of critical resources and fight over ever-dwindling supplies
until there is nobody left to fight. That is the stone-cold reality of our present state of
affairs.

For that reason, this project was started. That's not an acceptable end to our story. I
think all of us agree on that, and are looking for the right way forward to improve our
present and future circumstances. That's why everything within this writing describes
technologies that can be deployed on a large scale, and also why everything is open
source. It's a proposal of ideas that can be adopted, modified, improved upon or
extended by collective action should people wish to do so, a proposal that's now in
your hands.

But just as importantly, this project was started because it should not need to exist.
Because the problems it seeks to solve shouldn't exist. They belong in history books, not
newspapers. The coming resource crisis should not be coming, nor should it be a crisis;
this problem should have been seen and solved long ago. We should not be publicly
debating whether it's in our long-term best interests to actually invest in our long-term
best interests. Our society should not be facing increasing stagnation in a culture that's
despondent over the decay of the great things we once built and held dear.

Worst of all, the horrors of our time should not be in our time. We should not be seeing
things like resource conflict, mass social unrest, failed states and the inhumane
atrocities that come with them, the endless parade of violence and destruction that has
continually held us back as a species. These horrors disgrace our pedigree, and they
tarnish our collective soul. They deny us our true potential, what we are capable of
when we can work together and set aside our petty differences, and stop playing
childish, selfish games with hatred in our hearts.

It is just so ridiculous, so mind-numbingly pathetic that we still have to put up with this
kind of insanity in the modern era. That 45 years after we put a man on the moon, we're
still fighting over resources. That in the same decade we land a rover on Mars, militant
groups are recreating the dark ages on three continents. That billions of people live in
squalor and millions of people die every year from famine and water-borne illnesses.
That 1 in 5 American children are impoverished and our bridges are collapsing as our
government is sold to the highest bidder. That every generation born after WWII has
lived with the possibility of nuclear extinction as a fact of life.

This situation isn't the way things have to be. This situation is madness! All the more so
when compared with the luxuries we all take for granted in a first-world society and
how immune we foolishly feel from the world's afflictions. And that's the thing. We
aren't immune; we are far from it. With the way things are unfolding, the next 10-40
years are going to be a very interesting ride for the human race, and history has shown
a prior track record that, to put it mildly, does not encourage optimism.

Yet this problem and all that surrounds it defines our time only because it exists in such
contrast to the things we have accomplished by embracing a higher path. The charity,
benevolence, selflessness and compassion that have stood up to the negative aspects of
our nature are the things that have gotten us this far, and the things that will carry
us farther. As creatures, we have an intelligence that our natures have not yet caught up
to. We are capable of limitless brilliance, yet are still held back by the primitive aspects
of our predecessors. We are brilliant, hateful primates with giant hearts and arsenals of
nuclear weapons, and as people we uniquely have a choice on which parts of our nature
we value and thus feed.

That is what makes us special, and that is why the cynics are wrong when they suggest
we're not worth saving. Because we can choose our better natures, if that's what we
invest in.

But at the same time, that's something on us to prove. We know this in many ways, yet
we've been unable to do so because we don't really know how to even identify the
causes of the problems of our time, let alone stop them, let alone get ourselves to the
doorstep of something objectively better. And as change does not come, the problem
does not get solved, and as it does not get solved it gets worse, toward ends that cannot
be accepted by any society interested in its continuance.

The harshness of this reality makes objectivity an uncomfortable prospect. Yet viewing
it through an objective lens drives a critical epiphany, one that does not fit the
convenient political narratives of our world today. And it is this: malice, as we know
it, is largely a lie. We can certainly be its executor, as proven by the endless instances of
cruelty and violence people have committed over the ages, but the willful adoption of
malice on a large scale is extremely rare among human beings. We see its frequent
results not through mindful embrace, but rather through the foothold it carves in the
mind in the form of scarcity-driven need.

One should never attribute to malice that which can be attributed to need. In this
analysis, with that filter applied, you realize that most people on the large scale, even
those who exhibit ugliness in times of strife, are not wicked. Most people are good, but
they are only as good as their resources allow them to be. And for every horrible action,
there are incredible actions of people doing amazing and inspiring things, who give us
so much benefit even if we don't realize it in our day-to-day lives.

This realization frames the true nature of our circumstances as a dichotomy, one that
fundamentally echoes the struggle between good and evil. But good versus evil is the
wrong way to look at it, because those are not the true forces we are working within.
They never were, as that perspective is distorted, based upon reductive theological
duality. The correct lens for viewing the age-old human struggle is not good vs. evil;
it's supply vs. need. That is the true dichotomy of the human condition. And the conflict
that stems from need is the enemy we now face.

At the end of the day, the root of our hardships, our struggles and the horrors that have
comprised our history, the fault of nearly all of it rests at the feet of need. That has been
the paradigm the entire human condition has developed in, and no matter the horrors
or the struggles, we have overcome them. We have accomplished goals that were once
deemed impossible, and we are so, so close to taking the next technological leap.

These accomplishments were made possible by the sacrifices of millions, sacrifices that
words will never adequately describe, each one a story of incomprehensible personal
fortitude, and each just a drop in the bucket alongside the millions like them. For
200,000 years, people just like us gave everything they had for our future, for us to have
the opportunity that they did not have to make things permanently better, an
opportunity that we now hold in our hands.

Simply stated, I can't imagine anything worse than failing them, than not carrying the
torch they gave us toward the victory they could not reach, the victory that we uniquely
can. To me there is nothing more important, and I'm tired of being encouraged to ignore
that. I'm tired of being encouraged to ignore the reality of our world and our potential
to change it. I'm tired of being told to find meaning in society's dog and pony shows:
celebrity news, celebrated complacency and fleeting materialism, the professional
wrestling matches of today's bribed political dynamic, all washed down with diet soda
and light beer.

I have one life to live, and it will not be for that. And I believe the same is true for you.

The eternal problem of human civilization is resource scarcity and the economic
damage it causes, which is in turn the core cause of conflict. Thus if you can build a
system that solves resource scarcity, it solves resource conflict, which in turn solves the
zero-sum game and every social problem tied to resources and the economy. So the
idea is to deploy the best energy technologies in the world, based on the requirements
of what it would take to functionally end resource scarcity, and plug in the pieces.

This led to a modular energy framework that could power other modular
frameworks, frameworks that address social problems not by mitigating their
symptoms, but by curing their cause. We create a method that can end the concept of
need, that can indefinitely produce every core requirement of humanity, and that can
allow us to devote the resources we once spent on need toward our continual social
betterment.

That is how we defeat the monster. That is how we win.

In this effort, I have no doubt that cynics will say this aim is impossible, that it's too big,
too radical or too ambitious to work. And maybe they are right. Maybe we can't do this.
Maybe this problem is too big to solve, or worse, maybe we are inexorably doomed to
fail.

I am not here to judge those who would speak to that effect. But I am here to say that
they are wrong.

Need has brought us a pestilence of misery and conflict, and the concept of need must
be permanently retired. We now have the means to accomplish that, if we choose to
engage them. We, all of us, are faced with this choice. And as one of us, I made a
promise to devote my best efforts toward proposing an actually effective way to reach
that end, something that could be given away to anyone who wished to adopt these
ideas and begin discussing how we can work together to make them real.

Nobody asked me, paid me or qualified me to make that promise. I did this on my own,
and I made that promise, to myself and to you, to see this task fulfilled because I chose
to. I made this promise to you because I don't answer to the cynics and the apologists of
the status quo; I answer to you. I answer to you because we are all in this together, and
I truly, sincerely believe in our shared capabilities, in the potential that we have if we
can set aside our differences and work together. And not only is that statement not
cliché, I am miserably, miserably sick and tired of hearing that it is, or that any
movement in furtherance of human betterment is somehow grounds for ridicule. It's
not. It should be our top priority, and in a sane world, it would be.

We see the problems in our society, and we feel powerless to do anything against them.
Well, I am here to tell you that there is something we can do. That we have an actual
chance to fix these problems, and fix them well, that we can one day wake up in a world
that is spared of the afflictions that plague it.

We've been looking for this chance for a long time; we've been looking for something
better for a long time, and I believe this is the best chance we have.

So I'll ask you directly: do you want this? Do you want these ideas implemented?

If you do not, then I have failed, and I am sorry that I wasted your time.

But if you do, if you believe in the future that I believe in, the only thing that stands in
the way of making it a reality is your willingness to support it. If you do, then so will
others, and as others do, it will empower still more to do so as well, for as iron sharpens
iron, so one person sharpens another.

Why did I pen this writing? Because I believe in a better world, and I choose to fight for
it. Because you know what this fight is for, what we're fighting with and the stakes that
we all face, I believe that you will fight for it as well. And that is what I ask of you now.

I will ask you to fight for this, if you believe in it. I will ask that you make these ideas
yours and make them important, to share and discuss them as you deem fit. To take
stake in political affairs and to make informed choices when voting on them, to be
inspired to make things better because you can make things better.

It might be difficult to realize in the face of present circumstances, but although our
future is not yet written, it is still ours. It belongs to us, as we have the ability to shape it
and steer it in whatever direction we choose. This ability comes to us in the most
unique of ways, for never before has humanity faced such risks to its existence, and
never before has every individual person had such power to change tomorrow through
their actions today.

As is appropriate for the stakes that we face, we rightfully all have the ability to choose
for ourselves. And should we choose for this, we once again would be the model for the
rest of the world to follow, allowing us to supercharge the global economy and
address lasting global problems.

Eventually, over time, as technology expands and satisfies more and more needs
through indefinite resource production, conflicts will diminish and economies will
grow, as will relationships and trade agreements. International relations will become
more transparent and straightforward, and the United Nations will become a venue of
greater effectiveness. Development and modernization will begin in regions that were
once war-torn, and the echoes of human conflict will fade into memory like all the
other plights of our nature that technology has allowed us to banish into the past.

From there, as technology further connects us and brings us closer together, exploration
beyond Earth will become more and more sophisticated, and we will discover what
there is to discover in the vastness beyond our planet. We will reach not just the next
tier of civilization, but also a critical realization: that we are not just members of
individual countries, as that isn't the label that should define us. We are all human
beings, we are all people; that is the label that should define us. That is because we all
share this rock in space together, and whether we live on it together or die on it
together, one way or the other, ultimately, it will be so together.

It is my greatest hope, whether these goals succeed or not, that we come to realize that
one day. We place boundless faith in gods we cannot see. Perhaps we could strive to see
the day where we might place faith in each other.

So I will start by placing my faith in you.

I have faith that you may choose to look at things differently, to expect more from your
government, to support the areas of society that make us better as a people. I have faith
that you might choose to place less emphasis on social distractions, knowing that there
is more to life than them, and that as you have one life to live, that you would live it
honorably and fully, to be happy and to make the world a better place. I have faith that
you will stand and fight for a better world because I have faith in you to stand and fight
for a better world, because that is exactly what we need to have happen right now. And
if enough of us stand for that goal, then it will be realized.

That is where I choose to place my faith. If you would do so as well, let us work in
harmony to build something greater, a future worth having. To that end, and for that
end, may we get there together. As one people, for all people a people united.

If we long for our planet to be important, there is something we can do about it. We make our
world significant by the courage of our questions and by the depth of our answers. We embarked
on our cosmic voyage with a question first framed in the childhood of our species and in each
generation asked anew with undiminished wonder: what are the stars?

Exploration is in our nature. We began as wanderers and are wanderers still. We have lingered
long enough on the shores of the cosmic ocean. We are ready at long last to set sail for the stars.

- Carl Sagan


Universal Energy Cost Estimate

This page covers the cost estimate for Universal Energy as implemented to the criteria
specified in Chapter One, centering on its electricity generation and resource production
aspects and the estimated expenses therein. Please note that as this estimate requires a
degree of speculation and assumption, figures are approximations and may be rounded.

Additionally, please also keep in mind that Universal Energy is modular by design. It can
be implemented at any scale, smaller or larger than is suggested here. This approach is a
default implementation strategy that can make our nation energy independent and
facilitate the synthesis of electricity, fresh water and fuel at large scales. Commercial
resources, building materials and food are intended to be provided by the private sector
in this model, building off the energy and resource benefits provided by Universal
Energy's public resources.

This estimate is broken into three areas: electricity generation, resource production and
general notes explaining why this estimate is intentionally higher than actual costs
would likely be.


The United States currently consumes 3.76 trillion kilowatt-hours of electricity annually.
The initial goal of Universal Energy is to provide 300% of our electricity consumption,
which would be 11.29 trillion kilowatt-hours. If we were to leave our current capacity
intact (and gradually phase out old power systems, starting with the dirtiest first),
Universal Energy would initially need to generate 7.52 trillion kilowatt-hours of electricity.

The suggested allocation of technologies to generate this electricity heavily favors Liquid
Fluoride Thorium Reactors, as they produce the most energy per dollar by far. However,
with the exception of using waste heat to desalinate seawater and produce synthetic
hydrogen in Energy Plants, LFTRs are a one-trick pony. They don't deliver water, and they
don't help advance our road networks or underwrite a smart, resilient electric grid. For this
reason, while LFTRs make up the backbone of the 'electricity target,' substantial investment
is made into solar roads and National Aqueduct infrastructure to round out Universal
Energy's electricity generating capabilities.

In this example model of implementation, LFTRs comprise 90% of electricity generation,
with solar roads and National Aqueduct systems comprising roughly 5% each.

The model's estimated cost breakdown is as follows:

Liquid Fluoride Thorium Reactors:

According to Robert Hargraves, author of Thorium: Energy Cheaper than Coal and a foremost
expert on thorium energy, the pre-learning-ratio cost of a 100 megawatt reactor is estimated
at $200 million. To account for any efficiency losses, we'll assume LFTRs have an
uptime of 80%. That would mean a 100 megawatt LFTR would output 80 megawatt-hours
of electricity per hour, 1,920 megawatt-hours per day and 700,800 megawatt-hours per year.
Extrapolated into kilowatt-hours, that comes to 700.8 million kilowatt-hours generated
annually.

Divided by $200 million, that gives us a power generating ratio of 3.504 kilowatt-hours per
dollar. 90% of our electricity target of 7.52 trillion kilowatt-hours is 6.77 trillion kilowatt-
hours. At 3.504 kilowatt-hours per dollar, that comes to a total cost of $1.93 trillion to
generate 90% of Universal Energy's target.

Estimated total cost for Liquid Fluoride Thorium Reactors: $1.93 trillion for an annual
output of 6.77 trillion kilowatt-hours.
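As a sanity check, the arithmetic above can be reproduced in a few lines. This is a back-of-the-envelope sketch in Python, using only the figures quoted in the text:

```python
# Back-of-the-envelope check of the LFTR cost figures quoted above.
CAPACITY_MW = 100          # nameplate capacity of one reactor
UPTIME = 0.80              # assumed capacity factor
COST_PER_REACTOR = 200e6   # Hargraves' pre-learning-ratio estimate, in dollars

# Annual output of one reactor in kilowatt-hours.
kwh_per_year = CAPACITY_MW * 1000 * UPTIME * 24 * 365   # 700.8 million kWh

# Generating efficiency in kWh per dollar of capital cost.
kwh_per_dollar = kwh_per_year / COST_PER_REACTOR        # ~3.504 kWh per dollar

# 90% of the 7.52 trillion kWh electricity target.
target_kwh = 0.90 * 7.52e12                             # 6.77 trillion kWh

fleet_cost = target_kwh / kwh_per_dollar                # ~$1.93 trillion
print(f"{kwh_per_year/1e6:.1f}M kWh/yr, {kwh_per_dollar:.3f} kWh/$, ${fleet_cost/1e12:.2f}T")
```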

Solar roads:

If we recall from Chapter 3, Solar Roadways panels annually generate 22.2 kilowatt-
hours of energy per square foot, at a cost of $114 per square foot. In the case of Wattway by
Colas, we determined that the panels annually generated 19.22 kilowatt-hours per square
foot, at a cost of $54 per square foot.

If we were to split the difference between these two prototypes, that would come to 20.71
kilowatt-hours generated per square foot, at a cost of $84 per square foot (223 kilowatt-
hours generated per square meter, at a cost of $904 per square meter).

This model's price estimate for solar roads is tied directly to the estimated cost of
repairing our roads nationwide. According to the U.S. Department of Transportation, there
are roughly 2.68 million miles of paved road surface in the U.S. According to the American
Road & Transportation Builders Association, between 16% and 26% of paved road surface is in
disrepair, with higher degrees of disrepair in urban areas as they experience more vehicle
traffic. We'll split the difference and use 21% for our estimate, which comes to 562,380
miles in total. The American Road & Transportation Builders Association further assesses
that milling and resurfacing a 4-lane road costs about $1.25 million per mile. Across 562,380
miles, that comes to a total figure of $703 billion that we could use to pay for solar
road panels. This model will double that figure, coming to $1.406 trillion to devote to solar
road panels.

At an estimated cost of $84 per square foot (splitting Solar Roadways and Wattway), that
would buy 16.74 billion square feet of solar road panels that would primarily be deployed
in urban environments. At 20.71 kilowatt-hours generated per square foot, that comes to
347 billion kilowatt-hours generated annually.
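The split-the-difference math can be verified the same way. This is a sketch using the Chapter 3 prototype figures; small differences from any printed totals come from intermediate rounding:

```python
# Split-the-difference figures for the two solar road prototypes (from Chapter 3).
sr_kwh, sr_cost = 22.2, 114.0        # Solar Roadways: kWh/sqft/yr, $/sqft
ww_kwh, ww_cost = 19.22, 54.0        # Wattway: kWh/sqft/yr, $/sqft

yield_kwh = (sr_kwh + ww_kwh) / 2    # 20.71 kWh per square foot per year
cost_sqft = (sr_cost + ww_cost) / 2  # $84 per square foot

budget = 1.406e12                    # double the $703B resurfacing figure
area_sqft = budget / cost_sqft       # ~16.74 billion square feet
annual_kwh = area_sqft * yield_kwh   # ~347 billion kWh per year
print(f"{area_sqft/1e9:.2f}B sqft, {annual_kwh/1e9:.0f}B kWh/yr")
```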

Estimated total cost of solar road panels: $1.406 trillion, annually generating 347 billion kilowatt-hours.

National Aqueduct:

The National Aqueduct's electricity generation is comprised of three functions: internal
turbines within pipelines, solar panels on top of pipeline arrays, and the hot water inside
pipelines, which itself holds high potential energy that can be extracted through
thermoelectric functions. As this system as envisioned is hypothetical and Lucid Energy (the
company that makes in-pipeline turbines) does not publicly release pricing models, we'll
need to use a currently existing system as a starting point to make a cost estimate.

In doing so, we'll assume that the non-solar aspects of the pipeline would cost about the
same as the largest oil pipelines today. According to the Oil and Gas Journal, oil pipelines
cost an
average of $6.5 million per mile to construct.

This cost basis is broken down into four categories:

Material-$894,139/mile. (13.62%)

Labor-$2,781,619/mile. (42.36%)

Miscellaneous-$2,547,600/mile.* (38.79%)

ROW (Right of Way) and damages-$343,850/mile. (5.24%)

* 'Miscellaneous' is defined as "Surveying, engineering, supervision, administration and
overhead, regulatory filing fees, allowances for funds used during construction," which
we'll presume includes land purchases alongside right-of-way (ROW) expenses.

With these costs in mind, we'll be making a few assumptions mindful of the fact that
National Aqueduct pipelines would be factory prefabricated, land wouldn't need to be
purchased (as pipelines would be installed on publicly owned roads or under high voltage
power lines) and regulatory approval would be streamlined as the Public Interest Company
would be a public service and wouldn't need to pay additional costs for regulatory
approval. Cognizant of this, we'll assume:

That materials for the National Aqueduct will cost three times as much as for oil
pipelines, as pipelines would be modular and include in-pipeline turbines +
thermocouples. That translates to $2.68 million/mile for material costs. This figure
does not include the cost of solar panels.

That labor for the National Aqueduct will cost half that of oil pipelines, as they'll be
factory prefabricated, coming to $1.39 million/mile.

That miscellaneous costs would be 75% lower than oil pipelines for the reasons
listed above, coming to $636,900/mile.

That Right of Way/Damages costs would reduce 75% as well, as the government wouldn't
need to pay right-of-way costs and factory prefabrication would dramatically
reduce damaged units compared to ad-hoc construction. This would come to
$85,963/mile.
Combined, this provides an assumed hypothetical cost estimate of $4.8 million/mile to
construct National Aqueduct pipelines before solar panels are added.

To estimate the cost of adding solar panels, we'll use Astronergy's 315-watt panel as a
starting point for a cost estimate. Their 315 watt panel costs $300 and has a surface area of
20.1 square feet. That comes to roughly 15 watts per square foot at the cost of roughly $15
per square foot. (Note: it may be helpful to review the images surrounding the National
Aqueduct to gauge their conceptual implementation.)

As the National Aqueduct pipeline arrays in this example would have an estimated surface
width of 84 inches (estimating each 24" inside-diameter pipe is 27" wide with spacing, three
pipes across the top of a 3x3 array), that translates to 7 feet. Across a mile-long length,
that's 36,960 square feet (3,433 square meters).

Therefore, to cover one mile of nine-pipe National Aqueduct arrays with solar panels, at
$15 per square foot, it would cost $554,400.

All combined, this brings us to an assumed estimate of $5.34 million per mile to construct
National Aqueduct pipeline arrays.
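These per-mile adjustments can be tallied in a short sketch. All multipliers are the assumptions stated above; note that the unrounded sum lands near $5.35 million, which the text rounds to $5.34 million:

```python
# Per-mile cost of a National Aqueduct array, adjusted from oil-pipeline averages.
material = 894_139 * 3      # 3x oil-pipeline materials (turbines + thermocouples)
labor = 2_781_619 * 0.5     # half, due to factory prefabrication
misc = 2_547_600 * 0.25     # 75% lower (streamlined approvals, no land purchase)
row = 343_850 * 0.25        # 75% lower right-of-way and damages

base = material + labor + misc + row   # ~$4.8 million per mile

# Solar panels on top: 7 ft wide x 5,280 ft long, at ~$15 per square foot.
solar = 7 * 5280 * 15                  # $554,400 per mile

total = base + solar                   # ~$5.35M/mile (the text rounds to $5.34M)
print(f"base ${base/1e6:.2f}M, total ${total/1e6:.2f}M per mile")
```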

With that established, let's determine how many miles of pipeline arrays we need.

For initial deployment the National Aqueduct is proposed to be capable of providing up to
20% of our national annual water consumption.

The U.S. consumes a total of 2,842 cubic meters of water per person per year, coming
to 920.8 billion cubic meters (or 243.25 trillion gallons) total across a society of 324
million people. On a per-day basis, that comes to 667 billion gallons. 20% of that is
133 billion gallons.

For initial deployment we will estimate that the National Aqueduct will store 300
billion gallons of water at any given moment in time, 60% (180 billion gallons) of
which is stored in pipeline arrays with the rest in storage tanks.

Based on these figures we'll start first with cost, and then shift to calculating output.

Cost of Pipelines:

The volume of a 24" pipe is 23.5 gallons for every one foot of pipe, which translates to
124,080 gallons for every mile of pipeline, or 1.11 million gallons for an array of nine (2,626
cubic meters per kilometer for an array of nine). If 180 billion gallons are stored in pipelines,
that would require us to have 161,186 miles (259,404 km) of pipeline arrays.

At a cost of $5.34 million per mile, that would cost $860 billion.
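The mileage figure follows from simple pipe geometry. This is a sketch; 7.48 gallons per cubic foot is the standard conversion:

```python
import math

# Water volume of a 24-inch pipe and the mileage needed to store 180B gallons.
GAL_PER_CUFT = 7.4805
radius_ft = 1.0                                     # 24" inside diameter
gal_per_ft = math.pi * radius_ft**2 * GAL_PER_CUFT  # ~23.5 gallons per foot
gal_per_mile = gal_per_ft * 5280                    # ~124,080 gallons per mile
gal_per_array_mile = gal_per_mile * 9               # nine-pipe array: ~1.11M gallons

stored = 180e9                                      # gallons held in pipeline arrays
miles = stored / gal_per_array_mile                 # ~161,000 miles of arrays
cost = miles * 5.34e6                               # ~$860 billion
print(f"{miles:,.0f} miles, ${cost/1e9:.0f}B")
```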

Cost of Storage: although National Aqueduct storage tanks would differ from commercial
storage tanks today, current estimates for water tanks in high-stress areas (from the State of
Michigan) come to $2.015 million for a 50' tall steel water tank with a capacity of 2 million
gallons, so roughly $1 per gallon. As 60% of the 300 billion gallons within the National
Aqueduct would be within pipeline arrays, the remaining water placed in storage would be
120 billion gallons. At $1 per gallon, that comes to $120 billion.

Control: As the National Aqueduct does not conceptually exist outside of this writing,
effectively determining what it would cost to build the control component is prohibitively
difficult. As such, we'll assume the cost of the control system and infrastructure would be
$30 billion.

Subtotal cost for National Aqueduct: $1.01 trillion.

With that established, we'll shift towards potential electricity generation.

Electricity due to internal water flow: according to Lucid Energy, the inventor of in-
pipeline turbines, a 24" pipe generates 18 kilowatts of power with a flow rate of 24 million
gallons per day (MGD). At 18 kilowatt-hours per hour, over a 24-hour day that comes to
432 kilowatt-hours generated per day, or 388.8 kilowatt-hours with an assumed efficiency
loss of 10%. Over a calendar year, that's 141,912 kilowatt-hours annually generated per 24
million gallons of water flow. Extrapolated to the million-gallon-per-day level, that's 5,913
kilowatt-hours annually generated per million gallons of daily water flow.
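The per-MGD figure can be checked directly. This sketch uses Lucid Energy's quoted 18 kW rating and the 10% loss assumption above:

```python
# Lucid Energy in-pipe turbine: 18 kW at a flow of 24 million gallons/day (MGD).
power_kw = 18
daily_kwh = power_kw * 24              # 432 kWh per day at full flow
daily_kwh_net = daily_kwh * 0.9        # 388.8 kWh with a 10% efficiency loss
annual_kwh = daily_kwh_net * 365       # 141,912 kWh per year per 24 MGD
annual_per_mgd = annual_kwh / 24       # 5,913 kWh per year per 1 MGD of flow

# At the Aqueduct's assumed 133 billion gallons/day (133,000 MGD):
total_kwh = annual_per_mgd * 133_000   # ~786 million kWh per year
print(f"{annual_per_mgd:.0f} kWh/yr per MGD, {total_kwh/1e6:.1f}M kWh/yr total")
```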

Assuming the National Aqueduct transports 133 billion gallons of water per day, that's
133,000 million gallons of daily water flow, which would generate 786.4 million kilowatt-
hours annually.

Electricity due to solar panels: assuming we deploy 161,186 miles (259,404 km) of National
Aqueduct pipeline arrays, and each mile of pipeline array has 36,960 square feet (3,433
square meters) of surface area, that's 5.96 billion square feet (553.5 million square meters) of
solar-enabled surface area.

As each panel has an output of 15 watts per square foot and assuming a nationwide
average of 5 peak sun hours per day (which incorporates any efficiency losses), that's a total
daily output of 75 watt-hours per square foot, which comes to 27.38 kilowatt-hours
generated per square-foot annually. Across the entire Aqueduct, 5.96 billion square feet
generating 27.38 kilowatt hours annually comes to 163.1 billion kilowatt-hours annually
generated from pipeline-mounted solar panels.

Electricity due to hot water inside pipelines:

To assess the potential energy in the hot water inside pipeline arrays and storage tanks,
we'll base our calculations on the following assumptions: that the 300 billion gallons (1.135
billion cubic meters) stored in the National Aqueduct would be heated to 185 F (85 C),
with a national average outside temperature of 55.7 F (13.16 C). Relying on the formulas
from Engineering Toolbox, we'll assess that 300 billion gallons would have a potential
energy output of 86.2 billion kilowatt-hours. At an estimated efficiency loss of 10%, that
comes to 77.58 billion kilowatt-hours of potential energy.

Subtotal electricity generation over a calendar year:

Internal turbines: 786.4 million kilowatt-hours

Solar panels: 163.1 billion kilowatt-hours
Hot water inside pipelines: 77.58 billion kilowatt-hours.

Total annual electricity generation: 241.5 billion kilowatt-hours.

Estimated Total Cost of National Aqueduct: $1.01 trillion at an output of 241.5 billion
kilowatt-hours of energy per year and 133 billion gallons of water per day.

Electricity Totals

Liquid Fluoride Thorium Reactors are estimated to cost $1.93 trillion and generate
6.77 trillion kilowatt-hours annually.

Solar road surfaces are estimated to cost $1.406 trillion and generate 347 billion
kilowatt-hours annually.

The National Aqueduct is estimated to cost $1.01 trillion, generate 241.5 billion
kilowatt-hours annually, and provide 133 billion gallons of fresh water per day.

Combined: $4.35 trillion to generate 7.36 trillion kilowatt-hours and transport 48.5 trillion
gallons of fresh water per year.


After covering electricity generation and the cost of those systems, we'll shift gears to the
systems that synthesize water and fuel. In doing so, however, we won't be estimating their
implementation within greater Energy Plants (which would likely be the case in practice),
because cost figures for cogenerating Energy Plants do not yet exist. We'll instead estimate
the cost of building these systems on a standalone basis (save for excluding internal power
plants from water desalination facilities), even though doing so translates to far higher
costs in this estimate:

Seawater Desalination:

Most modern desalination facilities today are in the Middle East. Although they are capable
of desalinating immense volumes of seawater, they generally are self-powered with internal
power plants. This makes their construction significantly more expensive than desalination
facilities would be within Energy Plants.

As the backbone of Universal Energy is provided by LFTRs, desalination facilities wouldn't
need their own external power infrastructure in this model, nor would they need to
consume as much additional energy, as routing desalination functions through the non-
radioactive heat exchangers of LFTRs in a cogenerating capacity would dramatically reduce
external energy requirements.

Because of this, desalination plants will cost far less in the Universal Energy framework
than they do as standalone entities today. So to come to a cost estimate, we'll need to make
a few more assumptions:

The largest desalination facility in the world is currently the Ras Al Khair
Desalination Plant in Saudi Arabia. It has the capacity to produce 270.8 million
gallons of water per day (1.025 million cubic meters) via both multistage flash and
reverse osmosis. That translates to 98.8 billion gallons of water per year (375 million
cubic meters). It cost $7.2 billion to construct, and is also a 2,400 megawatt power station.

The Jebel Ali facility in the United Arab Emirates outputs 140 million gallons of water
per day via multistage flash distillation (530,000 cubic meters). That translates to 51.1
billion gallons a year (193.4 million cubic meters). The facility cost $2.72 billion to
construct, and is also a 1,400 megawatt power station.

The Fujairah power and desalination plant in the United Arab Emirates cost $1.2
billion to construct. It generates 656 megawatts of power and outputs 100 million
gallons of water per day (378,500 cubic meters). Over a year, that comes to 36.5 billion
gallons a year (138.17 million cubic meters).

As noted above, an important component of using these facilities to create a cost estimate is
the presence of power generation. The Fujairah facility cost only $1.2 billion to construct
whereas Ras Al Khair cost $7.2 billion; but Ras Al Khair has a 2,400 megawatt plant that
powers the facility, while Fujairah's power plant outputs only 656 megawatts. Ras Al Khair's
power generating potential is nearly four times higher, but in terms of desalination (270
million gallons per day versus 100 million), its water output is only 2.7 times higher. As
Universal Energy's desalination facilities don't need external power infrastructure, our cost
estimate must separate that element out.

To do so, we'll head over to the Energy Information Administration to get a general idea of
the construction costs of a power plant.

According to the EIA, a Natural Gas-fired Combined Cycle (Adv Gas/Oil Comb Cycle CC)
power plant has an overnight cost of $1,080 per kilowatt for a 429 megawatt variant. That
means a 429 megawatt power plant would cost $463.3 million to construct, or roughly $1.08
million per megawatt.

While construction costs likely vary in the Middle East, we'll nonetheless stick to this cost
figure in the absence of more reliably specific data. Additionally, as the Ras Al Khair facility
is both multistage flash and reverse osmosis (disproportionately increasing its cost), whereas
Jebel Ali and Fujairah are strictly multistage flash, we'll only use Jebel Ali and Fujairah to
estimate what a standalone desalination facility would cost if it didn't include a power
plant:

Jebel Ali: $2.72 billion to construct with a 1,400 megawatt power station. Annual output:
51.1 billion gallons (193.4 million cubic meters).

At $1.08 million per megawatt, we'll estimate that $1.51 billion of the construction cost was
for power generation. This would bring the estimated construction cost, sans-power, to $1.2
billion.

Desalination costs for one year output: $0.023 per gallon / $6.20 per cubic meter.

Fujairah facility: $1.2 billion to construct with a 656 megawatt power station. Annual
output: 36.5 billion gallons (138.17 million cubic meters)

At $1.08 million per megawatt, we'll estimate that $709 million of the construction cost was
for power generation. This would bring the estimated construction cost, sans-power, to
$493 million.

Desalination costs for one year output: $0.013 per gallon / $3.57 per cubic meter.

Averaging these together, that comes to $0.018 (1.8 cents) to desalinate a gallon of water
and $4.89 per cubic meter.
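The power-plant separation can be sketched as a small function. The $1.08M per megawatt figure is the EIA-derived estimate above; the per-gallon results differ from the rounded text figures by fractions of a cent:

```python
# Separating power-plant capital cost out of the two desalination facilities.
COST_PER_MW = 1.08e6   # EIA combined-cycle overnight cost, ~$1.08M per megawatt

def desal_cost_per_gallon(total_cost, plant_mw, annual_gallons):
    """Capital cost per gallon of annual capacity, with the power plant removed."""
    desal_only = total_cost - plant_mw * COST_PER_MW
    return desal_only / annual_gallons

jebel_ali = desal_cost_per_gallon(2.72e9, 1400, 51.1e9)  # ~$0.023-0.024 per gallon
fujairah = desal_cost_per_gallon(1.2e9, 656, 36.5e9)     # ~$0.013 per gallon
average = (jebel_ali + fujairah) / 2                     # ~1.8-1.9 cents per gallon
print(f"Jebel Ali {jebel_ali:.4f}, Fujairah {fujairah:.4f}, avg {average:.4f} $/gal")
```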

We determined earlier that the U.S. consumes 243.25 trillion gallons per year (920.8 billion
cubic meters), which translates to 667 billion gallons per day. The National Aqueduct is
intended to provide 20% of that figure, coming to 133 billion gallons per day, or 48.5 trillion
gallons per year (180 billion cubic meters).

At a price of 1.8 cents per gallon, constructing facilities with a capacity to produce 48.5
trillion gallons per year would cost an estimated $873 billion.

Total estimated cost for water desalination: $873 billion.

Hydrogen production:

Analysts from the Department of Energy estimate that hydrogen can be produced (Factory
Gate Price) by way of water electrolysis for $3 per kilogram of contained hydrogen, at an
energy price of $0.045 (4.5 cents) per kilowatt-hour.

As hydrogen's role in the Universal Energy framework is to produce fuel, we'll look at our
domestic gasoline usage as a metric as opposed to overall petroleum consumption.
According to the Energy Information Administration, the U.S. consumed 140.43 billion
gallons of gasoline in 2015. Although this model envisions the majority of cars migrating to
electric due to Universal Energy's material advancements, we'll still assess the cost of what
it would take to have hydrogen replace gasoline in our society in terms of production.

As hydrogen production via electrolysis is measured in kilograms, we'll use specific energy
to calculate our comparison.

Gasoline has a specific energy of 46.4 megajoules per kilogram. One gallon of gasoline has a
mass of roughly 2.8 kilograms. As such, 140.43 billion gallons of gasoline would have a
mass of 393.2 billion kilograms. At 46.4 megajoules per kilogram, that comes to 8.47 billion
megajoules of energy.

Compressed hydrogen has a specific energy of 142 megajoules per kilogram. To produce
8.47 billion megajoules of energy through hydrogen, we'd need 57.6 million kilograms of
compressed hydrogen on an annual basis.

According to the Department of Energy, a hydrogen production facility today with an
output of 50,000 kilograms of compressed hydrogen per day has a cost of $900 per kilowatt
of system energy with a multiplier factor cost of 1.12 for installation, coming to $1,008 per
kilowatt of system energy. A 50,000 kilogram per day plant has a system energy of 113,125
kilowatts, which would make its estimated capital cost $114 million.
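That capital-cost chain, sketched below. The figures are the DOE estimates quoted above, and the 57.6 million kilogram figure is the text's:

```python
# Capital cost of a 50,000 kg/day electrolysis plant (DOE figures quoted above).
cost_per_kw = 900 * 1.12      # $1,008/kW with the 1.12 installation multiplier
system_kw = 113_125           # system energy of a 50,000 kg/day plant
capital = cost_per_kw * system_kw        # ~$114 million

per_kg_day = capital / 50_000            # ~$2,280 per kg of daily capacity

# Scaling to the text's 57.6 million kilogram figure:
fleet_cost = per_kg_day * 57.6e6         # ~$131 billion
print(f"${capital/1e6:.0f}M plant, ${per_kg_day:,.0f}/kg-day, ${fleet_cost/1e9:.1f}B")
```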

Dividing $114 million by 50,000 kilograms daily output, we'll assess that the capital costs of
a hydrogen production plant are $2,280 per kilogram of daily production capability. As the
United States would need 57.6 million kilograms of compressed hydrogen to replace
gasoline in our society, at $2,280 per kilogram of daily production capacity, that comes
to $131.33 billion. At a market sale of $3 per kilogram from the Public Interest Company,
that would generate $172.85 million per year.

Total estimated cost for hydrogen production: $131.33 billion for facilities that generate
$172.85 million annually in revenue at a market cost of $3 per kilogram of compressed
hydrogen.

Cost of electricity: $4.35 trillion to generate 7.36 trillion kilowatt-hours and transport 48.5
trillion gallons of fresh water per year.
Cost of water desalination: $873 billion
Cost of hydrogen production: $131.33 billion

Subtotal: $5.35 trillion.

Extra buffer for unforeseen costs: this cost estimate is compiled from the best-effort research
I was able to perform in the various sectors these technologies exist within. However,
although the framework is sound in concept, deploying such a large-scale public works
effort in a country as large and environmentally diverse as ours may present challenges
that I simply cannot effectively estimate at this time. For this reason, even though this
estimate is already high in general, we'll include an additional cost buffer of roughly 20%, or
$1.15 trillion ($115 billion per year over a 10 year implementation period). With this buffer
included, the grand total of Universal Energy's implementation comes to $6.5 trillion spread
out over a 10 year period, roughly half our present military spending over the same timeframe.

Grand total: $6.5 trillion


Beyond the $1.15 trillion cost buffer mentioned above, this estimate includes higher-than-
likely costs for the sake of intellectual honesty, as it's driven by several assumptions. At the
end of the day, it's better to estimate high than low. But in reality there are myriad price
reductions that would likely apply should this implementation strategy go forward:

Learning Ratio: As we saw from Chapter 2, learning ratio is the applied concept of
'learning by doing,' which means price reductions come through learned efficiencies and
experience by building systems. The 'ratio' aspect of it is the reduction in price every
time the number of produced units doubles. If it's a 10% ratio after the 100th produced
unit, unit number 200 would cost 10% less than unit number 100. Unit number 400
would cost 10% less than unit number 200, and about 19% less than unit number 100, and so
on. If you recall the original invention of computers, flat screen televisions,
smartphones, etc., the models we see today are vastly superior and less expensive than
the initial releases they evolved from.
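A standard learning-curve model makes the compounding explicit. This is a sketch; because the 10% reductions compound, unit 400 ends up about 19% cheaper than unit 100, not a full 20%:

```python
import math

def learning_cost(unit_n, base_unit=100, base_cost=100.0, ratio=0.10):
    """Cost of unit_n when price falls by `ratio` each time cumulative
    output doubles past base_unit (a standard learning-curve model)."""
    doublings = math.log2(unit_n / base_unit)
    return base_cost * (1 - ratio) ** doublings

print(round(learning_cost(100), 2))  # 100.0
print(round(learning_cost(200), 2))  # 90.0  (10% less than unit 100)
print(round(learning_cost(400), 2))  # 81.0  (10% less than 200, ~19% less than 100)
```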

In terms of commercial products, companies improve models to sell at the same price, so
while the price of a new iPhone, for instance, hasn't dropped, the power and
performance capabilities of an iPhone 7 are several orders of magnitude higher than
those of the original iPhone.

Learning ratio applies strongly within Universal Energy's technologies as most are in
their technical infancy and stand to enjoy substantial improvements through greater
investment and research. As such, the price of LFTRs, National Aqueduct infrastructure,
solar road systems, hydrogen production systems and even water desalination facilities
stand to see massive price reductions as prefabrication and mass production of these
technologies reaches full stride.

As it's prohibitively difficult to accurately assess what these reductions might look like,
they were not incorporated in the pricing model. However, in reality they would be
significant, almost certainly in the hundreds of billions of dollars over time.

Energy Plants: Energy Plants are the envisioned approach for large-scale
implementation of Universal Energy's power, fuel and fresh water resources because
they can operate in a cogenerating capacity. As they can use the waste/excess energy
from one facility to power the functions of another in the same physical footprint, the
energy costs to perform functions like water desalination and hydrogen production drop
drastically. Just as importantly, the capital expenses of constructing power plants
incorporates the cost of buying land. By building multiple systems within the same
facility, the cost of land is proportionally shared as are the costs of construction. This,
in effect, would make it less expensive than presently estimated to build all of the
systems discussed herein.

Energy cost reductions: Universal Energy's primary purpose is to generate an effectively
unlimited amount of energy at a low enough cost to make possible the large-scale
synthesis of critical resources. Yet while this is intended to solve the core, pressing
problems of our civilization, it also makes it a lot less expensive to do business and
manufacture things. As we'll see later in Chapter 15: The Energy Economy, energy costs are
a huge component of a company's bottom line, especially in manufacturing, where they can
run upwards of 15-30%.

If we're able to reach Universal Energy's target of 2 cents per kilowatt hour, that's many
billions of dollars that businesses don't have to pay to build products to take to market,
the systems behind Universal Energy being no exception. That's billions of dollars that
no longer need to be incorporated in the per-unit delivery cost of energy and resource
production systems, which in turn presents billions of dollars in cost savings to their
large-scale purchase and implementation.

Reduced social afflictions: Universal Energy is designed to solve resource scarcity so
that unlimited energy and resources in turn can solve the myriad social afflictions fueled
by resource scarcity, afflictions that suck immense funds, time and concentration from
our society: poverty, crime, economic depression, failing infrastructure, lost hope and
lackluster employment among them. All of these problems consume huge percentages of
public budgets; the war on drugs alone, for example, costs $50 billion per year. As
dramatically reduced energy and resource costs address these afflictions, the resources
we presently devote to their mitigation can be spared in kind, saving even more money.

As you can see, with these cost reductions in practice, it is highly likely that the present
cost estimate of $6.5 trillion for Universal Energy's implementation is on the high end.
Any cost savings we can obtain along the way, should this model be implemented, would
simply be "gravy" on top, allowing us to increase the scale of implementation in kind.

Solar Road Panel Pricing

Wattway by Colas has a pricing structure of around $6.70 per watt-peak. Watt-peak is a
term used to describe peak energy output under optimal sunlight, so if for example a solar
panel had a peak power output of 50 watts, according to Colas that panel would cost about
$335. However, as Colas itself notes, this cost figure is at the prototype stage, and automated
manufacturing of Wattway panels has yet to begin in earnest. As automated manufacturing
can reduce production costs significantly, we'll assume that once Wattway panels are built
en masse we may see a price reduction of up to 15%. That brings the panel cost, over time
and on a large scale, to $5.70 per watt-peak, a figure that will only fall over time as
investments in solar panels continue to bear fruit in terms of efficiency and power output.

According to Colas, 42 slabs (59 square meters) had a power output of 6 kilowatts-peak.
That would mean each slab has a surface area of 1.4 square meters and a peak power output
of 142 watts. As 1.4 square meters is just about 15 square feet, that comes to
approximately 9.5 watts per square foot at a cost of around $54 per square foot.
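The per-slab derivation, sketched below. This uses the $5.70 per watt-peak figure assumed above; the watts-per-square-foot result lands just under the rounded 9.5 figure:

```python
# Wattway slab figures derived from Colas' 42-slab, 59 m^2, 6 kWp test section.
slab_m2 = 59 / 42                   # ~1.40 square meters per slab
slab_watts = 6000 / 42              # ~142.9 watts-peak per slab
slab_sqft = slab_m2 * 10.764        # ~15.1 square feet per slab

watts_per_sqft = slab_watts / slab_sqft     # just under 9.5 W per square foot
cost_per_slab = slab_watts * 5.70           # ~$814 at $5.70 per watt-peak
cost_per_sqft = cost_per_slab / slab_sqft   # ~$54 per square foot
print(f"{watts_per_sqft:.2f} W/sqft, ${cost_per_sqft:.2f}/sqft")
```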

Solar Roadways has a more robust prototype with burlier features than Wattway, and their
external pricing data isn't approved for public release as of yet. As such, we'll have to make
some educated assumptions to determine their cost. Additionally, unlike Wattway where
panels are simply glued down, Solar Roadways requires road surfaces to be modified to
allow for panel installation. While this is a benefit in the context of advancing the
sophistication, longevity and strength of our road surfaces, it's still an additional cost that
needs to be incorporated in any pricing model.

The latest Solar Roadways prototype is a 4x4 hexagonal solar road panel with an
area of 4.39 square feet (0.4 square meters) and a peak output of 48 watts. The original
prototype was a 10'x10' panel that was estimated to cost $10,000, so about $100 per square
foot. If we were to apply that same price to the hexagonal panels, that's $439 per 48-watt
panel. However, the added cost of installation would be an additional overhead cost, which
we'll assume to be a 35% increase per-panel. Yet at the same time, mass production reduces
costs significantly. If we were to apply the same 15% reduction as we assumed with
Wattway, that would mean their 48-watt panel would cost about $500. If we were to scale
down to the square foot, that comes to approximately 11 watts per square foot at a cost of
$114 per square foot.

With this established, let's assess the performance of both prototypes over a calendar year.
Solar panel performance is measured in something called "peak sun hours," which is the
aggregate amount of solar energy an area receives in a given day, framed in terms of peak
output. Say a solar panel works for 10 hours a day at varied efficiencies; its peak sun
hours describe that same total energy as if it were condensed into a period where the
panel operated only at maximum efficiency. It's the metric most solar systems are rated
on. According to the National Renewable Energy Laboratory, the American southwest has
up to 6.5 peak sun hours (which can get even higher the closer one gets to the equator), so
we'll use that metric to assess maximum performance of solar road panels.

Additionally, solar panels vary in performance based on time of year and temperature
(believe it or not solar panels are more efficient when it's colder, and can still work well
even on overcast/cloudy days), so we'll assume an across-the-board hit of 15% to account
for lapses in efficiency due to climate, weather, traffic, etc. With that in mind, that brings
Wattway's peak output to be 8.1 watts per square foot, and Solar Roadways to be 9.35 watts
per square foot.

Across an operating window of 6.5 peak sun hours, that brings us to a total of 52.65 watt-
hours per square foot generated by Wattway on a given day, and 60.8 watt-hours per
square foot for Solar Roadways. With that in mind, across a calendar year that comes to
19,217 watt-hours for Wattway and 22,192 watt-hours for Solar Roadways. Extrapolated
into kilowatt-hours, that comes to 19.22 kilowatt-hours for Wattway and 22.2 kilowatt-
hours for Solar Roadways.
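Both annual yields follow from one small formula. This is a sketch; the results differ from the text's 19.22 and 22.2 figures only by intermediate rounding:

```python
# Annual energy yield per square foot from peak-watt ratings, 6.5 peak sun
# hours per day, and a 15% across-the-board efficiency derating.
PEAK_SUN_HOURS = 6.5
DERATE = 0.85

def annual_kwh_per_sqft(watts_per_sqft):
    daily_wh = watts_per_sqft * DERATE * PEAK_SUN_HOURS  # Wh per sqft per day
    return daily_wh * 365 / 1000                         # kWh per sqft per year

wattway = annual_kwh_per_sqft(9.5)        # ~19.2 kWh (text: 19.22, via rounding)
solar_roadways = annual_kwh_per_sqft(11)  # ~22.2 kWh
print(f"Wattway {wattway:.2f}, Solar Roadways {solar_roadways:.2f} kWh/sqft/yr")
```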


With these figures in mind, we'll assume that under optimal conditions Wattway by Colas
panels generate 19.22 kilowatt-hours of energy every year per square foot at a cost of $54
per square foot.

In the case of Solar Roadways, we'll estimate they generate 22.2 kilowatt-hours of energy
every year per square foot at a cost of $114 per square foot.

Rails to Trails Conservancy Road Cost

The following data was provided by the Rails to Trails Conservancy to estimate
road construction costs. A separate, more extensive report can be found here.

Several variables affect the cost of construction on highway projects. Examples of such
factors might include terrain type (mountainous or flat), development type (rural or
urban), geographic location (high-cost or low-cost State), type of highway (Interstate
freeway or two-lane local highway), material type (concrete or asphalt), and pavement
thickness (which depends largely on projected auto and truck volumes). Costs may
also be different depending on whether the project involves construction of a new
highway or adding lanes to an existing facility. In November of 2003 a study was
completed for the FHWA (Federal Highway Administration) that provides estimates of
highway construction cost per lane-mile based on information from several states based
on current design procedures and cost factors.

All costs shown in the following text have been adjusted to 2006 dollars.

Adding a Single Lane to an Existing Highway:

FHWA's Highway Economic Requirements System (HERS) includes input values for
the typical costs of a variety of highway improvements, including the cost of adding a
lane to an existing highway. The unit cost per lane-mile for adding an additional lane
includes a portion of the cost to cover bridges, interchanges, environmental issues, etc.
for a normal project. However, a project with a large number of bridges, complicated
interchanges, major environmental issues, and other extreme engineering and
environmental issues will result in a higher cost per lane-mile.

Separate cost factors are used for urban and rural areas. In urban areas, widening costs
are further disaggregated by the type of roadway (freeways, other divided highways,
and undivided roads), and vary from $2.4 million to $6.9 million per lane-mile. In rural
areas, costs depend upon highway functional class (Interstates, arterial roads, and
collectors) and terrain type, and range from $1.6 million to $3.1 million per lane-mile.

The model also assumes higher construction costs in areas where widening might be
especially difficult or costly, such as densely developed urban areas or environmentally
sensitive rural areas. These are termed "high-cost lanes" and can range from $7.3
million to $15.4 million per lane-mile for construction in urban areas, and from $5.8
million to $9.9 million per lane-mile in rural areas.

New Construction:

New construction costs can vary widely due to a number of factors, and the FHWA
does not have a standard value for them. However, the November of 2003 study
produced estimates of highway construction costs based on information from several
states based on current design procedures and cost factors. These costs have been
adjusted to 2006 dollars.

The cost to construct one lane-mile of a typical 4-lane divided highway can range from
$3.1 million to $9.1 million per lane-mile in rural areas depending on terrain type and
$4.9 million to $19.5 million in urban areas depending on population size. However, in
urban areas restrictions (high cost of additional right-of-way, major utility relocation,
high volume traffic control, evening work restrictions, etc.) may increase the cost per
lane-mile. If restrictions exist the cost to construct one lane-mile of a 4-lane divided
highway can range from $16.8 million to $74.7 million. The cost of $74.7 million per
lane-mile in areas of severe restrictions may not represent the maximum cost per
lane-mile and should be used as a general guideline only. Individual projects may include
extreme conditions warranting a much higher cost.

The costs provided are per lane-mile. To obtain the cost for a section of roadway the
cost would need to be multiplied by the number of lanes on the roadway section.
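As the passage notes, the per-lane-mile figures scale linearly with lane count and section length. A minimal sketch of that multiplication; the $3.1 million figure is simply an illustrative pick from the low end of the rural range above:

```python
def section_cost(cost_per_lane_mile, lanes, miles):
    """Total cost of a roadway section from a per-lane-mile figure."""
    return cost_per_lane_mile * lanes * miles

# A 4-lane rural divided highway, 10 miles long, at the $3.1M low end:
print(f"${section_cost(3.1e6, 4, 10):,.0f}")  # $124,000,000
```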

The Coming Resource Crisis

In the foreword of this writing, Why We Fight, we discussed why most conflicts are sparked
by resource scarcity and the economic damage caused as a result. Civilization requires
resources to operate, so when they become scarce, states attempt to secure them by
whatever means necessary; thus militaries are mobilized and war is waged.

As we know, resource conflict is not a new phenomenon; multitudes of social thinkers
have feared great bloodshed once humanity's expanding population eventually coincided
with dwindling resources on a global scale.

Historically, perhaps the most famous alarm raised was the Principle of Population, an essay
published by Thomas Malthus in 1798 that predicted dire consequences once our numbers
exceeded our ability to feed ourselves. His concerns were echoed 174 years later
through The Limits to Growth, a study commissioned in 1972 that concluded a bleak future
awaited humanity should our population growth continue unabated in a world with finite
resources. Numerous other studies before and since made the same predictions and drew
the same conclusions. Yet in doing so they all shared a singular trait: none have yet come to pass.

Indeed, the world is still here. Civilization is still standing. We have water, food, oil and
building materials. The cataclysm has not occurred. This has led many to conclude that those
worrying about resource scarcity are simply subscribing to unfounded alarmism, a modern-day
equivalent of the boy who cried wolf. But today, things are uniquely different. And the facts
as they stand now show that Malthus and those like him were not wrong; they were just a
little hasty.

To explain why this is the case, let's take a step back and look at our circumstances from a
high-level view:

As a species, human beings have been around for about 200,000 years (although the latest
estimates may put that as high as 300,000). 95% of that time involved a caveman lifestyle,
mostly hunting and gathering of natural resources and food sources. This continued until
around 12,000 BCE, when we discovered basic farming techniques and learned how to
domesticate animals, a breakthrough commonly referred to as the Neolithic Revolution.

Since that time, humanity has continued to evolve to become more sophisticated.
Civilizations rose and fell, wars were fought, discoveries were made, and we continued to
advance upward as did our population.

At the dawn of WWI in 1914, humanity had just reached 1.7 billion people, an event
200,000 years in the making. At 200 millennia and 2,000 centuries, that's 100 times the
timespan between today and the height of the Roman Empire. Yet only one century later, in
2016, we've surpassed 7.4 billion, twice the population of 1972, the year The Limits to
Growth was published. By 2040, our population is expected to reach 9 billion people, and 10
billion by 2050. Our population has already grown 430% in the last 100 years, and by 2050 it
will have grown 600% from the start of the 20th century.
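Those growth percentages can be checked directly. Here "grown 430%" is read as today's population expressed as a multiple of the 1914 figure, an assumption about the author's phrasing, since a strict "increase over baseline" reading would give roughly 335%:

```python
pop_1914 = 1.7e9   # world population at the dawn of WWI
pop_2016 = 7.4e9
pop_2050 = 10e9    # projected

# Population expressed as a percentage of the 1914 baseline
growth_2016 = pop_2016 / pop_1914 * 100   # ~435%, rounded to "430%" in the text
growth_2050 = pop_2050 / pop_1914 * 100   # ~588%, rounded to "600%" in the text
print(f"{growth_2016:.0f}%, {growth_2050:.0f}%")
```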

This exponential population growth has occurred in an era of intense social advancement
that has been powered entirely by natural resources. Thus, our rapid population expansion
has accelerated our rate of resource consumption. In turn, this has accelerated our rate of
resource depletion on a global scale: depletion that is in no way sustainable with the way
the world works today.

To gain a clearer perspective of this impact, consider the following:

By most estimates, our planet is 4.6 billion years old. If we were to scale that timeline
down to a century, it would mean Earth is 46 years old, the entirety of human
existence began 4 hours ago, and the industrial revolution started in the last 60
seconds. Since that time, we've destroyed more than half of the world's forests.

Although controversy exists on solutions to climate change, it's well understood that
more carbon dioxide in the atmosphere is making the planet warmer. As
temperature and dryness increase beyond a certain level, crop production drops.
The United Nations and several respected science journals independently conclude
that this will severely impact food production, with the World Bank estimating that
crop yield will be reduced by as much as 25% by 2050. Yet at the same time, both warn
that the world must produce far more food than we do today to feed our growing
population worldwide, respectively estimating figures of 50% and 75%.

Several other important resources are also depleting globally: coal, phosphorus,
certain hardwoods and thousands of plant and animal species. Combined, a majority
of scientists believe that humanity has caused Earth's 6th great extinction event, the
Holocene Extinction, and conclude that most of Earth's biodiversity will be extinct
within the next three centuries if present trends continue.

The reality behind these examples is unquestionably grim. To be sure, we've been
forewarned of a lot of grim realities lately. But this isn't just another grim reality; it
is the grim reality, one at the root of nearly every large-scale social problem we face as a
species. The problems spawned by this reality have begun to converge, and as our situation
is unsustainable, these problems will grow in number and severity until we reach what will
ultimately become an eventuality: as natural resources become more scarce, the
potential for global conflict over them spikes dramatically. This is not something that risks
occurring on a small scale; it risks occurring on a global scale, posing severe consequences
to our collective futures.

Among the resources that face growing global scarcity, none are more critical than
water, inexpensive oil and food. This section concentrates on the nature of their scarcity and
the potential consequences that come with it, giving us a basis to shift the discussion to
how this problem may be solved.


Fresh water is the single most important resource for human existence, for without it
nothing grows and nothing lives. As our population has expanded, so has our water usage,
and today we are consuming more fresh water than nature can replenish.

We've already seen some of the consequences brought by this problem: wildfires
of unprecedented severity destroying millions of acres; poor crop yields that have impacted
local economies and food prices. In summer 2012, more than half of the United States was
declared a federal disaster zone due to record droughts. Until a record rain and snowfall year
in California during 2016-17, the state was crippled by devastating droughts.

To explain how this translates into future consequences, we'll start by focusing on a metric
referred to as the Water Stress Threshold, which is a global standard to measure the social
and economic impact of drought conditions. This threshold establishes a minimum amount
of water a given region must have available to ensure adequate supply for its human
population. According to the United Nations, this figure is 1,700 cubic meters of water
(449,092 gallons) per person, per year.

Anything below that figure is considered "water stress," which means the region faces
some degree of water shortage. Anything below 1,000 cubic meters is considered "water
scarcity," which means there is not enough water to satisfy the needs of a region, leading to
detrimental social impact. Anything below 500 cubic meters is considered "absolute water
scarcity," which frequently translates to humanitarian crises and social unrest.
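These UN thresholds amount to a simple classification by annual per-capita availability. A minimal sketch (the function name is my own):

```python
def water_availability_status(cubic_meters_per_person_per_year):
    """Classify a region against the UN water stress thresholds above."""
    v = cubic_meters_per_person_per_year
    if v >= 1700:
        return "adequate supply"
    elif v >= 1000:
        return "water stress"
    elif v >= 500:
        return "water scarcity"
    else:
        return "absolute water scarcity"

print(water_availability_status(800))  # water scarcity
```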

Today, 1.1 billion people lack access to clean water, and 50% of the planet doesn't have
access to the quality of water the citizens of ancient Rome did. The United
Nations estimates that by 2025, two billion people will live in conditions reflecting absolute
water scarcity, and 5.3 billion people will be living in conditions reflecting water stress. For
the record: that's respectively 28% and 75% of the planet.

When applied to all of humanity, the water stress threshold comes to 3.2 quadrillion gallons
consumed annually. The average volume of the Great Lakes (20% of Earth's freshwater
supply) is roughly 6 quadrillion gallons at any given moment. That means our civilization
consumes their combined volume every two years, at a rate that is accelerating. According
to the United Nations, global water demand between now and 2050 will increase 55%, with
the bulk of demand occurring in developing countries. Goldman Sachs financial
forecasters estimate that global water consumption is doubling every 20 years, and the
United Nations expects demand to outstrip supply by more than 30 percent come 2040.
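As a sanity check on those figures: multiplying the threshold by a 7.4 billion population gives roughly 3.3 quadrillion gallons (the 3.2 quadrillion figure reflects a slightly smaller population baseline), and dividing the Great Lakes' volume by that demand lands close to the stated two years:

```python
population = 7.4e9
threshold_gallons = 449_092        # 1,700 cubic meters per person per year

annual_demand = population * threshold_gallons   # ~3.3 quadrillion gallons
great_lakes_volume = 6e15                        # ~6 quadrillion gallons

years_to_consume = great_lakes_volume / annual_demand
print(f"{annual_demand:.2e} gallons/year, consumed in {years_to_consume:.1f} years")
```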

However, lakes and rivers only make up a portion of our water sources. Most places source
water from groundwater, which can be difficult to access and often exists far away from
population centers. So once groundwater runs low, water needs to be sourced from
elsewhere, as migrating a city isn't feasible.

This leads to accelerating rates of consumption over a wider area, which leads to
proportionally accelerating rates of depletion. In areas where rainfall is fueled by soil-based
water evaporation, this leads to decreased rainfall elsewhere, powering a
downward spiral toward greater drought conditions.

To illustrate this trend further, we'll review a series of visuals from various sources:
scientific, educational and real-world snapshots of circumstances on the ground. No single
source here is intended to prove hard conclusions by itself, but will rather illustrate a
pattern of both domestic and global water scarcity that is worsening over time in scale and
severity, due to factors ranging from the environmental to overconsumption by humans. The
first item we'll consider is a comparison map generated by the University of
Nebraska's drought monitor service, which uses data compiled by public agencies to
provide statistics for domestic drought.

It uses a measure known as the Palmer Drought Severity Index to assess relative dryness
based on rainfall and temperature. The date ranges this map covers are between August
29th, 2000, and August 23, 2012:

From these charts, we see the simple reality that, overall, the United States has gotten
significantly drier in just over a decade; not only has this drought expanded, it's also
become more severe.

From this chart and the chart above, note the color saturation of "extreme" and
"exceptional" drought between 2000 and 2013: it's increased roughly 20%. A severity
increase of this degree is rapid for a timespan as short as 12 years, points of focus we'll
be touching on in more detail throughout this section.

The Palmer Drought Severity Index illustrates drought impact through a five-tiered scale,
between D0 and D4:

D0. Abnormally dry. Going into drought: short-term dryness slowing growth of crops or
pastures. Coming out of drought: some lingering water deficits; crops or pastures not fully
recovered.
D1. Moderate drought. Damage to crops or pastures; streams, reservoirs, or wells low;
some water shortages developing or imminent; voluntary water use restrictions requested.

D2. Severe drought. Crop or pasture losses likely; water shortages common. Water
restrictions mandated by government.

D3. Extreme drought. Major crop and pasture losses; widespread water shortages.
Widespread restrictions put in place.

D4. Exceptional drought. Exceptional and widespread crop/pasture losses; shortages of
water in reservoirs, streams and wells, creating water emergencies.
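The D0-D4 tiers map onto ranges of the Palmer index itself. A small lookup sketch, using the approximate PDSI cutoffs employed by the U.S. Drought Monitor; treat the exact boundary values as assumptions:

```python
def drought_category(pdsi):
    """Map a Palmer Drought Severity Index value to the D0-D4 scale.
    Cutoffs follow the approximate ranges used by the U.S. Drought Monitor."""
    if pdsi <= -5.0:
        return "D4 (exceptional drought)"
    elif pdsi <= -4.0:
        return "D3 (extreme drought)"
    elif pdsi <= -3.0:
        return "D2 (severe drought)"
    elif pdsi <= -2.0:
        return "D1 (moderate drought)"
    elif pdsi <= -1.0:
        return "D0 (abnormally dry)"
    return "no drought designation"

print(drought_category(-4.3))  # D3 (extreme drought)
```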

Charts and maps such as the ones above are helpful to emphasize drought data, but they
don't quite illustrate the tangible impact of this problem. To see further, we'll review
several comparison images of drought conditions over varied landscapes, starting first with
the state of California.

California's water issues have dominated news headlines for the past seven years, as
drought conditions worsened every year until an anticipated El Niño weather cycle
presented a badly needed respite. We'll take a minute to review these pre-2016 conditions
to establish the kind of circumstances that were leading up to record drought:

These images compare Folsom Lake, one of the most important reservoirs for Sacramento
and Northern California. The image on the left shows Folsom Lake in 2011. The image on the
right shows it in 2015:

These images show Lake Oroville in Northern California, the state's second-largest
reservoir. The image on the left shows levels in 2012, the right shows levels in 2015:

Lake Oroville from another angle, respectively from 2012 (left) to 2015 (right):

This picture shows the above bridge compared to the overall depth of the lake:

California is the most populated state in the United States, and this depletion occurred over
just three years in its second-largest reservoir. Until 2016, most experts viewed the
worsening state of California's drought as a catastrophe in the making, and the weather
cycle that spared California in 2016 only occurs once or twice a decade. But domestic
drought isn't limited to California, either, as Texas, Midwestern and Southwestern states
face similar situations. The following image shows the Drought Monitor map for the state
of Texas, year 2011:

Keep in mind that nearly every county on this map hit drought level D4, which translates
to "exceptional and widespread crop/pasture losses; shortages of water in reservoirs,
streams, and wells, creating water emergencies."

When you see the scale of these droughts and their spike in severity, you likely recognize
that climate patterns go in cycles, and where one year is dry, another is wet. California was
in rough shape from 2010-2015; 2016 was a great year, and they're now in a much stronger
position. Our time today is certainly not the first to face droughts, even severe droughts,
and we're doing a whole lot better today than people were in the early 1930s. So how do
we know we're not just in another natural drought cycle? Is this really cause to worry?

It's quite likely that natural drought conditions contribute to the water scarcity we see
today. But even if natural forces are aiding these circumstances, the causes for worry
present themselves due to human factors that make the problem worse. Our time is unique
in human history in terms of both increased population and overall water consumption,
and the severity, scale and frequency of drought today is worse than it was in the past.

The following is a compilation of annual snapshots from the National Oceanic and
Atmospheric Administration (NOAA)'s index of national drought data. They show the
month of July from 1900 to 2016.

When we compare historical data, we certainly see that there have been previous drought
periods, with 1934-1936, 1954-1956 and 1986-1989 as standouts. But those time
periods differ from ours in a couple of important ways.

First: our population back then was smaller (126 million in 1934, 163 million in 1954 and
240 million in 1986), meaning those drought cycles affected a population that's between
40% and 75% of what the U.S. is today (324 million). Just as importantly, those years
didn't have an economy as water-demanding as ours is today, as industry and agriculture
presently comprise roughly 92% of our national water consumption.

Second: the droughts we're facing now are worse in frequency, scale and severity.
With the exception of the 1930s, the worst drought periods before the year 2000 had a smaller
area of extreme or exceptional drought, and areas suffering drought of this severity usually
did so only for a season or two.

The worst drought period before year 2000 was from 1930-1940 and had 7 of 10 years in
extreme drought conditions that were nationally widespread.

The next worst was from 1950-1964, which had 11 of 15 years in extreme drought
conditions, albeit generally in smaller, more localized areas.

From the year 2000 on, 14 of the past 17 years have seen extreme drought conditions that
are largely spread out over the majority of our national landscape, affecting a population
far larger than during previous drought cycles.

Simply stated: this drought is widespread, it is more severe and it is sustained.

Historically, a drought year would be balanced by a wet year, but wet years today are
generally less frequent and less wet, whereas dry years are more frequent and more dry.
Conditions today are significantly drier than in the past, and to top it off, our water
requirements have increased proportionally with our population expansion and the greater
need for irrigation amid larger scales of agribusiness.

None of this occurs in a vacuum. As our water needs have increased alongside greater
drought conditions, we've needed to source water from elsewhere, and in doing so we've
increasingly turned to aquifers. Aquifers are large sources of groundwater that don't
replenish quickly with natural water cycles, and instead source their water from soil
absorption and water tables over time. During dry years, we've tapped deeper into
groundwater aquifers, and today many are being depleted at a rate faster than they can
replenish.

Data from NASA satellites show that 21 of the world's 37 largest aquifers have passed their
sustainability tipping points, meaning they will run dry if current trends continue. Aquifers
supply 35% of global water use, and they are among the last reliable sources of water we
have left. California is already tapping aquifers for up to 60% of its water use, and scientists
expect aquifers will be relied upon to even greater extents in the future. The following links
provide additional data for those inclined: [Source 1] | [Source 2] | [Source 3]

Yet even with the seriousness of domestic drought put in perspective, worldwide the
problem worsens. For a good example, take a look at the Aral Sea, which was once the 4th
largest lake on the planet. In 1960, the Aral Sea had a surface area of 26,300 square miles
(68,000 km2) and a total volume of 254 cubic miles (1,063 km3).

For comparison, that is 4,000 square miles larger than Lake Michigan by surface area and
2.2 times larger than Lake Erie by volume. Of the two images below, the image on the left
shows the Aral Sea in 2000, and the image on the right shows it in 2007. Take care to notice
the approximate shoreline in the bottom-left image, reflecting water levels in 1960 at the
height of the lake's volume.

Source: NASA

The following images show the Aral Sea in 2010 (left) and 2014 (right).

Source: NASA

Today, the Aral Sea is more commonly known as the Aralkum Desert, which comprises its
eastern basin, and its transformation to desert happened so quickly that ships are littered
throughout the basin, as they couldn't escape before the water depleted.

Again: as once the 4th largest lake on Earth, this area used to contain a body of water 4,000
square miles larger than Lake Michigan by surface area and 2.2 times the volume of Lake
Erie. And in a period of just 14 years, it went from half-full to Earth's most recent desert.
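Those comparisons check out against commonly cited reference figures for the Great Lakes; the Lake Michigan and Lake Erie values below are approximate and assumed here:

```python
aral_1960_area_sqmi = 26_300    # Aral Sea surface area, 1960
aral_1960_volume_cumi = 254     # Aral Sea volume, 1960

# Approximate reference values, assumed for this comparison:
lake_michigan_area_sqmi = 22_400
lake_erie_volume_cumi = 116

area_gap = aral_1960_area_sqmi - lake_michigan_area_sqmi      # ~3,900 sq mi
volume_ratio = aral_1960_volume_cumi / lake_erie_volume_cumi  # ~2.2x
print(area_gap, round(volume_ratio, 1))
```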

These examples illustrate a state of affairs that is being experienced on every inhabited
landmass on the planet. Africa, Australia, South America, the Middle East, and Central and
East Asia each carry their own multitude of examples that reinforce in data what we can
see with our eyes: our planet is getting drier as our population expands and our water
resource needs accordingly increase. That is the state of water scarcity today. But as this
problem is getting worse, it raises a more important question: what will be the state of
water scarcity tomorrow?

It's not a question that comes with a precise answer. After all, it's difficult to accurately
predict next month's weather; accurately predicting water scarcity and relative drought
decades in advance and on a global scale is effectively infeasible. However, as we saw with
the maps above, we have a rich base of historical data we can use for comparison. As recent
years have been progressively drier, by comparing this trend to past years we can model a
rate of desertification if our situation continues to worsen at its present pace.

NCAR (the National Center for Atmospheric Research) produced a series of climate maps
that display the results of these models. They aren't meant to be hard predictions, as far too
many factors influence climate and drought, nor do they incorporate the possibility of our
situation declining faster. They only model desertification trends if things remain as they
are today, and we'll look at four of these maps for 10-year periods between the year 2000
and the end of this century to see what the continuation of current trends would look like:


With these maps, recall that anything down to -4 on the Palmer index is considered
extreme drought relative to conditions today. Most of the world will be at least that, if not
in exceptional drought, which is shorthand for water emergencies. Noting again that these
maps are future illustrations of current trends as opposed to hard predictions, if our
circumstances accelerate in severity, the likely result will be worse conditions than appear
here. Should that come to pass, it risks the survival of billions of people, to say nothing of
supporting an ecosystem, and even first-world nations will face major complications to life
as they know it.

While scientific predictions and models are not always 100% accurate, it's important to
remember what kind of scientists are raising alarm over present trends and what methods
they're using to justify their concerns. This isn't Channel 7's weather forecast; this is
NASA, NOAA, NCAR, UCAR and their counterparts across the planet, more or less the A-
team of climate science. The smartest people in the world have devoted years to studying
this problem, they collectively use billions of dollars in state-of-the-art technology to
perform their research, and their concerns are reviewed and affirmed by their global peers.
We'd be wise to trust they know what they're talking about.

But regardless of how troubling this is by itself, it's not the only angle to this problem, as
there are also the issues of water quality and water privatization to consider.

It might seem strange to us as Americans, but worldwide, fresh water isn't necessarily
potable water, and in many cases it's not. For all the fresh water we are consuming, much
of what remains is too toxic for human consumption. Whether from improper sewage
and sanitation, industrial pollution or runoff from agriculture, many of the water sources
people have depended on for millennia have now become undrinkable.

Nearly a billion people worldwide lack access to clean water and 2.5 billion do not have
access to adequate sanitation, and 6-8 million people die every year from water-borne
illnesses. For comparison, that's roughly twice the population of Los Angeles.

Beyond the obvious public health implications, this problem also harms local economies.
Scarcity combined with unreliable cleanliness requires people to devote extensive time and
energy to securing clean water, which presents large opportunity costs. Common examples
include children spending less time in school, family members forced to choose between
working or retrieving water (often from long distances), and an unhealthy populace, all of
which translate to a weakened workforce.

Collectively, 40 billion hours are spent each year in Africa alone to collect and haul water.
Not only does this present a massive drain on the resources available for public health, it
also makes the problem self-perpetuating as diminished economic output and unhealthy
living conditions hinder the ability of people to improve their conditions. This forces them
to remain bound to the consequences of ever-increasing water scarcity and the afflictions
that come with it.

This problem isn't limited to remote, rural regions far away from global commerce, either.
It's also deeply affecting countries with major influence over global affairs. Take China, for
example:

Due to industrial contamination, upwards of 60 percent of water sourced from Chinese
rivers is unusable for drinking, bathing or agriculture, according to statistics published by
the Chinese Ministry of Water Resources. That number rises to 70 percent when it comes to
Chinese lakes, 80 percent when it comes to underground wells that source groundwater,
and 90 percent when that groundwater is sourced near cities. A photo journal from Business
Insider helps frame this problem in its appropriate starkness.

Then there's India. Upwards of 50% of India faces "High" to "Extremely High" water stress,
according to the World Resources Institute as published in an article by the New York
Times. It further states that drought conditions have become so severe that 330 million
people, greater than the population of the entire United States, are living in a dust bowl.
On top of that, the nation's coal-fired power plants are shutting down as there's not enough
water to generate steam, with armed guards being posted at dams to prevent water theft
by desperate farmers.

With circumstances already critical, rampant pollution intensifies this problem. A recent
report by WaterAid, an international organization promoting greater water sanitation and
hygiene, estimates that 80% of India's surface water is severely polluted and rife with
disease. A separate report from India's Centre for Science and Environment estimates that
roughly 80% of Indian sewage flows untreated into its rivers, and that out of 8,000 towns
surveyed by a pollution control board, only 160 had both functioning sewage systems and a
sewage treatment plant [Link to Full Report]. If you're curious as to what this impact looks
like, the following images clarify graphically.

For the record: China's population is 1.36 billion people. India's is 1.25 billion. That's 35% of
humanity, and 85% of the world population lives in the driest half of the planet.

Beyond pollution, times of scarcity create opportunities to capitalize on desperation, a fact
well known to industry. The result? There has been an unprecedented global push to buy
water rights from governments through water privatization schemes. In these
arrangements, a corporation buys water rights from a government authority, generally
under the roundly disputed argument that the free market is best suited to manage water
resources. In turn, corporations increase the price of water and force people to pay inflated
costs to access their most critical resource.

The chart below is from United Nations Human Development Report data, which
documents the price of water in New York City (which has public supplies), and cities in
the United Kingdom, Philippines, Ghana and Colombia, which have privatized water
supplies.

As you can see, privatized water supplies translate into exorbitantly higher costs for
consumers, especially in poorer regions. Yet as this UNHDR report was released in 2006
and this situation has gotten worse over time, it's likely these figures understate the
problem. Nor is this the only area where examples of water privatization are common:

An American company by the name of True Alaska Bottling purchased rights
to annually extract up to 3 billion gallons of water from Blue Lake in Alaska. It will
be shipped to a bottling facility in India, where it will be sold to various countries in
the Middle East at a significant markup.

Due to lackluster investment in domestic infrastructure, public plumbing in
American cities is aging, something the EPA estimates will cost upwards of $300
billion to fix. The result? More than 6 billion gallons of water are lost annually to
leaky pipes. Thanks to the inefficiency typical of today's government, public money
isn't devoted to fixing this problem, so cities such as Chicago, Pittsburgh and Santa
Fe are looking to privatize water supplies.

The Nestlé corporation owns 1/3rd of the American water market and 75 springs in
the United States. If you're curious how Nestlé does business, consider the town of
McCloud in Northern California. Without public input, McCloud granted Nestlé a
100-year contract to pump 1,600 acre-feet of spring water a year (521.4 million
gallons) in addition to an indefinite amount of groundwater. This contract will pay
only 8.7 cents per 100,000 gallons it collects. The average person in the United States
pays $20 per month for water, and Nestlé will sell this water to market at $10 per
Several states in the U.S. have sold their water rights to third parties, and in turn
have outlawed sourcing natural water from the environment. In Colorado, for
example, it was illegal until 2016 to collect rainwater from the sky, even on your own
property. While this law was changed in response to public outcry, a household is
still restricted from capturing rainwater beyond a volume of two 40-gallon barrels.

According to a 2009 report by the World Bank, investment into privatizing water
supplies is set to double in the next five years, predicting that the water supply
industry will grow by 20%.

This problem is especially severe within developing nations, as their governments have
higher corruption rates and are more enticed by short-term financial benefits. So by waving
a little cash in the right faces, would-be water barons can purchase water rights for large
regions, if not entire nations. This gives private industries free rein to profit from
provisioning water, and today some of the poorest people on Earth are forced to pay nearly
seven times the price of water paid by people in developed nations with public supplies. And
if they take it without paying? A crime of theft no different than any other under the law.

All of these factors combine to form a more perilous situation than water scarcity alone
would present, for not only do people in water-scarce regions have to fend off
accelerating desertification, they have to contend with predatory private interests as well.
Framing it as David vs. Goliath doesn't quite capture it. Only 8% of total human freshwater
consumption is residential, whereas 70% goes to agriculture and 22% goes to
industry. It has rarely been the case that the desires of wealthy interests weren't met first in
times of scarcity, so when water scarcity hits in earnest it remains highly unlikely that the
needs of ordinary people will be the top priority for governing authorities (see: Flint,
Michigan).

As billions of people live in population centers without the means to migrate to areas with
more plentiful water supplies, this puts them between a rock and a hard place that only
gets worse over time. To say that this presents risks to social order, the global economy and
global security as a whole is a significant understatement; barring a massive
transformation of how we obtain water, it is not a matter of if but of when.

Yet as dangerous as that reality is, water scarcity is only one piece of a larger issue, for
while water might be our most crucial resource, it's not the only resource our civilization
requires to operate. That second resource is oil, and we are facing circumstances where its
price will rise to a level too expensive for many economies to afford, making it
economically scarce. These problems do not exist apart. They exist here and now, and
together they are poised to impact our way of life within the same time period. Should this
happen, whatever damage we sustain from the scarcity of one will be doubled by the
other, as will the consequences that come with it.


Our world as we know it is born from oil. It powers nearly every personal vehicle on the
planet as well as most forms of transportation. Oil is an essential ingredient of synthetic
materials like plastic and rubber, as well as cosmetics, paint, fabric, soaps, detergents and
lubricants; in effect, nearly every luxury we have. Simply stated: oil makes our modern
society possible. Without oil, it isn't.

Our extraordinary reliance on oil makes it indispensable to developed economies. As the
global economy develops, this reliance leads to an increase in global oil demand and also a
higher risk of economic instability if it becomes too scarce and/or expensive to purchase.
Because of this, oil has been one of the most fought-over resources in history. The results of
oil conflicts have shaped much of the world as we know it today, and future concerns of its
scarcity have set the basis for our alliances and defense posture in oil-rich regions (see: the
Middle East).

Today, these concerns run deeper than many realize. To be sure, I recognize this
claim might seem hard to believe at first, as recent discoveries of large shale oil and natural
gas reserves have given us an abundance of oil, even to the extent that its price has been
cut in half. But this discovery only delayed the problem of future global oil scarcity, and it
won't delay it forever.

Before shale and tar sands oil flooded the market and lowered prices, I'm sure you remember
paying $4 for a gallon of gas, more than double what we paid 5-10 years prior.
Yet when shale came in, the price dove, and the shale oil industry now says we
have enough oil to power our country for the long-term future.

How is it, then, that we could be facing any sort of oil scarcity? To explain, it will help
to review some backstory illustrating our relationship with oil in the past and how that
will shape our relationship with it in the future.

It's first important to understand the difference between oil from wells in the ground, as
we've traditionally extracted it, and oil from shale and tar sands, which has fueled the new
petroleum boom. For this discussion we'll separate sources of oil into two groups,
conventional and unconventional, described as follows:

Conventional resources/reserves. Oil extracted from wells dug into the ground that pump oil
to the surface. This is the traditional way we've extracted oil, and most of our extraction,
processing and refining equipment is geared for this method. [More Information on
Conventional Oil]

Unconventional resources/reserves. Oil that comes from new technology allowing us to
extract oil from areas we couldn't previously: shale rock, tar sands, hydraulically fracked
wells, methane hydrate, etc. This oil must be extracted with expensive equipment, is
usually not of the quality of oil sourced from conventional reserves and often needs
significant processing for general use. [More Information on Unconventional Oil]

With this distinction in mind, we'll proceed from here by going over three concepts:

1. How humanity has used oil in the past, and how this has impacted its price.

2. The price trends oil has taken before the discovery of unconventional oil resources,
and how that was impacted by oil production levels.

3. The costs involved with extracting oil from shale and other unconventional
resources, and why that matters today.

To illustrate these concepts, we'll need to review a bit of data and statistics. If you're
someone who looks at charts all day, this will be a breeze. But if you aren't, don't worry!
This isn't a science journal (I'm a systems analyst, not a scientist, and I don't expect you
to be either), so each chart comes with explanations of what the data shows and why it's
important. So if you find your eyes glazing over, feel free to skip down to the Chart
Explanation to get the general idea.

Moving onward, here are some facts that frame the issue of oil scarcity:

Today, the world consumes an average of 96 million barrels of oil a day (abbreviated bbl.;
one barrel is 42 gallons), which translates to roughly 35 billion barrels of oil per year.
Worldwide, that's an 11% increase from 2010, and the United States claims approximately
20% of the world's annual oil consumption.
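For readers who like to check the math, the daily-to-annual conversion is simple unit arithmetic; this sketch uses only the figures stated above:

```python
# Converting the consumption figures above from daily to annual.
# Figures are from the text; this is just unit arithmetic.
daily_bbl = 96e6                 # barrels of oil consumed per day, worldwide
annual_bbl = daily_bbl * 365     # barrels per year

print(round(annual_bbl / 1e9))   # ~35 (billion barrels per year)
```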

While our population has grown 430% since the start of the 20th century, global oil
consumption has grown by 2,000% over that same period, at a rate that generally
increases annually.

Global oil consumption accelerates at this rate and scale because as our
population expands and advances, so does the global economy, and with it the
requirement for more resources. As increased consumption of oil has made nations
wealthier, they now have the money to buy more oil to satisfy their ever-growing needs,
continuing the cycle of increased demand and production.

Chart Explanation: this chart shows that as population expands and nations generate wealth, they can
purchase and consume more resources. Note how oil production (purple line) tapers off as 2010
approaches, whereas consumption (green dots) continues to accelerate away from production. That's
important, and we'll talk about why in a minute.

We see this trend accelerate over time because the world is simply making more money:
beyond the established powers of the United States, Russia and Europe, other nations are
developing their economies and increasing their economic clout on the world stage. With
this clout comes the ability to purchase greater amounts of resources to further advance,
which in turn creates demand for still more resources. This ratchet effect creates levels of
demand that are only going to increase, presenting a situation where global oil competition
is not limited to a few major powers, but includes the world as a whole.

Presently, the Energy Information Administration estimates that global oil consumption
will rise to 119 million barrels per day by 2040, a 37% increase from the 87 million
barrels per day the world consumed in 2010. In its International Energy Outlook report, the
EIA assesses that China, India and other developing countries in Asia will account for 72%
of the net world increase in oil consumption, with consumers in the Middle East accounting
for another 13%.

That might not seem like much to us Americans, but remember that at 324 million people,
we're barely 4% of the global population. China, India and East Asia have a combined
population exceeding 3 billion people. As these countries and others like them become
wealthier, people improve their economic standing and can start to buy things like cars,
which run on oil.

According to Goldman Sachs, car ownership in most economies gains momentum when
per-capita incomes rise to the $10,000-$20,000 range, and the firm predicts that India will
become the world's third-largest car market by 2025 while the Chinese market continues
growing. The majority of Indian and Chinese citizens are impoverished today by
western standards, and most don't own cars. Figures from the Department of Energy show
the United States has 810 cars per 1,000 people (as of 2012). At a population of 1.25 billion,
India has 24 cars per 1,000 people. At 1.36 billion, China has 81 cars per 1,000 people.

With these numbers in mind, let's assume a scenario where just 30% of China and India
rose to the middle class and purchased vehicles. Should that happen, it would add roughly
780 million cars to global roads. That's two cars for every person in the United
States, plus more than 130 million on top. Navigant Research, as reported by the Christian
Science Monitor, forecasts there will be more than 2 billion cars on the road by 2025, and
just 2.5 percent will be electric, hybrid or fuel-cell vehicles, with the remaining 97.5 percent
running on gasoline or diesel fuel.
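The car-ownership scenario above is back-of-the-envelope arithmetic, and it can be reproduced in a few lines. The populations are as given in the text; the 30% figure and the one-car-per-new-owner simplification are the scenario's assumptions, not a forecast:

```python
# Hypothetical scenario: 30% of China's and India's populations buy cars.
china_pop, india_pop = 1.36e9, 1.25e9       # populations from the text
new_cars = 0.30 * (china_pop + india_pop)   # one car per new owner, simplified
us_pop = 324e6

print(round(new_cars / 1e6))       # ~783 million additional cars
print(round(new_cars / us_pop, 1)) # ~2.4 cars per person in the U.S.
```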

These circumstances involve the fundamental economic concept of supply and demand,
which in a nutshell states that the price of a product is set by demand relative to supply:

The higher the supply and the lower the demand, the lower the price.
The lower the supply and the higher the demand, the higher the price.

Increasing demand has caused the price of oil to rise significantly over time, a rise that's
been halted only when the supply of oil increased to meet or exceed demand, which is what's
happened with unconventional resources temporarily lowering oil prices today.

More data from the Energy Information Administration and the Bureau of Labor
Statistics break the numbers down from the 1960s onward in a bit more detail:

U.S. Crude Oil Prices by Decade: 1960-2010 (per barrel)

Chart Explanations: crude oil has experienced price spikes and falls since it was discovered. As a new
product it was initially quite expensive. As technology made extraction easier, the price dropped,
leveled off and stayed stable (the green line is inflation-adjusted dollars) for about 90 years, before
spiking after 1970.

Adjusting for inflation, the 2013 price of oil was nearly five times its 1960 level and three
times its 2000 level, increases that corresponded to two price spikes. The first was due to
the 1973 Oil Crisis, caused by oil-producing nations banding together in a cartel to limit
output and force a higher price. After this spike subsided, however, the price shot upward
again around the year 2000 and kept climbing until unconventional oil resources flooded
the market and lowered prices starting in late 2014.

There is a difference between these two spikes, though, and it's an important one.

The spike in the 1970s was caused by a price-fixing scheme among oil-producing countries. This
is analogous to the owners of local gas stations banding together and agreeing to sell gas at $10
per gallon. As they're the only gas stations, their collusion means one has to buy from them
whether one likes it or not. Price fixing is usually illegal, but when it's committed by nations
who provide a critical resource, the world can only do so much to stop it.

In the early 2000s, however, the price of oil spiked not due to price fixing, but due to
increasing demand coinciding with a plateau in production rooted in remaining supply.
Here's how and why this is the case:

Before unconventional oil resources came to something of a rescue in late 2014, oil
production and prices were closely following a predictive trend known as peak oil: a
theorized point in time at which global oil production peaks, after which we produce less
and less each year at proportionally increased cost due to an ever-shrinking global supply.

The scientific reasoning behind this theory was originally conceived by a geoscientist
named M. King Hubbert, who created models centered on the idea that production
levels directly relate to supply. The theory holds that for any set amount of finite
resources within a given region, once discovery and extraction deplete 50% of the total
supply, production declines until supplies are depleted outright, raising costs each step
of the way down the bell curve.

Chart Explanations: U.S. conventional oil production peaked in the 1970s as we reached 50% depletion
of supply, forcing us to buy oil from foreign sources as our output declined.

As you can see from the above charts, his model accurately predicted the rise, peak and
decline of conventional oil production within the United States. In response, the
United States simply opened up its checkbook and bought oil from foreign sources, which
as we know shaped much of the global power dynamic we see today.

However, as we discussed previously, global oil demand and consumption have accelerated
on an ever-growing scale, which means that reliance on foreign oil in a multipolar world
has become more expensive and trickier in general (see: the list of less-than-savory nations
we count as allies). That raises the question: what did Hubbert's models have to say
when applied to peak oil production worldwide?

Hubbert's models predicted that global conventional peak oil production would be reached
around 2010, which coincides with the oil price increases we saw between 2000 and 2013.
Although the petroleum industry often disputes Hubbert's reasoning (more on that in a
minute), it's worth noting that up until the shale oil boom, most independent global experts
agreed that peak oil would arrive around 2010 as well (plus or minus a few years). The
following link has an exhaustive list, but for our purposes we'll just look to the United States
Joint Forces Command. In their 2010 Joint Operating Environment report, they concluded
peak oil would be reached as early as 2012, creating a permanent shortage of oil of as much
as 10 million barrels a day by 2015. The following paragraph is excerpted from the report:

A severe energy crunch is inevitable without a massive expansion of production and refining
capacity. While it is difficult to predict precisely what economic, political, and strategic effects such a
shortfall might produce, it surely would reduce the prospects for growth in both the developing and
developed worlds. Such an economic slowdown would exacerbate other unresolved tensions, push
fragile and failing states further down the path toward collapse, and perhaps have serious economic
impact on both China and India. At best, it would lead to periods of harsh economic adjustment. To
what extent conservation measures, investments in alternative energy production, and efforts to
expand petroleum production from tar sands and shale would mitigate such a period of adjustment is
difficult to predict. One should not forget that the Great Depression spawned a number of
totalitarian regimes that sought economic prosperity for their nations by ruthless conquest.

As we know, the shale oil boom came to the rescue in 2014, and Hubbert's theory, while
applicable to conventional oil, was temporarily offset by the massive supply discovery of
unconventional oil. However, as oil consumption is now multipolar and demand is
expected to increase on a global scale with billions of consumers, this will require us to
place greater reliance on unconventional oil supplies to meet demand in the future.

The Joint Operating Environment report continues:

"To generate the energy required worldwide by the 2030s would require us to find an additional 1.4
Million Barrels per Day every year until then. The discovery rate for new petroleum and gas fields
over the past two decades (with the possible exception of Brazil) provides little reason for optimism
that future efforts will find major new fields. At present, investment in oil production is only
beginning to pick up, with the result that production could reach a prolonged plateau. By 2030, the
world will require production of 118 Million Barrels per Day, but energy producers may only be
producing 100 Million Barrels Per Day unless there are major changes in current investment and
drilling capacity. By 2012, surplus oil production capacity could entirely disappear, and as early as
2015, the shortfall in output could reach nearly 10 Million Barrels Per Day."

To get to 118 million barrels per day in 2030 from the 96 million per day humanity
consumes presently, we would need to extract an additional 8 billion barrels of oil
per year, for a total of 43 billion barrels annually. Beyond the 35 billion
we presently extract and consume each year, that's not oil we're going to be able to reliably
source from dwindling conventional oil reserves. Instead, increasing demand will
eventually make unconventional oil our primary oil source as time progresses.
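The arithmetic behind those totals is straightforward; a quick sketch using the daily figures above:

```python
# From daily demand projections to annual extraction requirements.
current = 96e6 * 365 / 1e9    # ~35 billion barrels/year today
by_2030 = 118e6 * 365 / 1e9   # ~43 billion barrels/year projected for 2030

print(round(by_2030))           # ~43 billion barrels extracted annually
print(round(by_2030 - current)) # ~8 billion additional barrels per year
```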

Source: Energy Information Administration.
Chart Explanations: As oil demand increases worldwide and conventional crude oil depletes, we'll need
to place ever-greater reliance on unconventional oil resources to source oil.

But what does this look like? We've been relying on conventional oil supplies since we
discovered oil, so how might unconventional oil impact our economy and our way of life
should it take conventional oil's place?

To answer that, we'll first need to look at our remaining conventional oil reserves.

According to the Energy Information Administration and the CIA (yes, that one), we have
around 1.66 trillion barrels of oil left in conventional proved reserves as of 2015. At our
current consumption of 35 billion barrels per year, that's 47 years remaining. At 43 billion
barrels per year, the rate the Joint Operating Environment report assessed we'd be consuming
by 2030, that's 38 years remaining. However, when we include unconventional resources,
which are estimated to hold as much as 2.7 trillion barrels, we might have as much as 4
trillion barrels of oil remaining: roughly 40 percent conventional and 60 percent
unconventional. If that number is accurate, it would comfortably supply oil for the next
100 years. 100 years is a long time, so what's the worry?
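These reserve lifetimes follow from simple division (and assume flat consumption, which if anything flatters the numbers, since demand is projected to grow):

```python
# Reserve lifetimes implied by the figures above.
conventional = 1.66e12    # barrels, proved conventional reserves (2015)
unconventional = 2.7e12   # barrels, estimated unconventional resources

print(int(conventional / 35e9))  # 47 years at current consumption
print(int(conventional / 43e9))  # 38 years at the projected 2030 rate

share = conventional / (conventional + unconventional)
print(round(share * 100))        # ~38%: roughly 40 percent conventional
```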

The worry is that the question of how much oil we have left has obscured the greater
question of how much oil we are able to extract and produce at acceptable cost.

That is the critical consideration. We might have plenty of oil, but if we can't extract it
inexpensively enough and in large enough quantities, the price of oil will have to rise to
meet the costs of doing so. This becomes more pressing as oil demand increases globally
and our conventional reserves deplete.

It's noteworthy, then, that this question of cost is roundly sidestepped by the petroleum
industry, which takes strident measures to reassure the world that there's plenty of oil left
and no shortages are imminent, encouraging us to rely on oil for our long-term energy needs.

Saudi Aramco, the largest oil company on Earth, had this to say in 2007:

"We are looking at more than four and a half trillion barrels of potentially recoverable oil. That
number translates into 140 years of oil at current rates of consumption, or to put it another way, the
world has only consumed about 18 percent of its conventional oil potential. That fact alone should
discredit the argument that peak oil is imminent and put our minds at ease concerning future petrol

BP, the British petroleum giant whose oil extraction adventures culminated in
the Deepwater Horizon burning and sinking to the ocean floor, has published
reports assessing an oil future that's even rosier than Saudi Aramco's. They conclude we
have 4.8 trillion barrels of oil remaining, which would be nearly 140 years at current

Then there's the king of the American oil industry, ExxonMobil, which assesses that we've
only used about 30% of Earth's oil reserves. As they continually scale up estimates, Exxon
claims we'll have oil for the long-term future. The following is excerpted from
their corporate blog:

Is the world running out of oil? No. Not even close. However, the world's remaining petroleum
reserves do require more complex technologies, and higher levels of investment, than they did a
generation ago. For example, the world is increasingly looking to oil sands, ultra-deepwater, and
arctic resources. ExxonMobil is utilizing advanced technologies to unlock these resources:
technologies such as extended reach drilling, which allows us to drill wells targeting reservoirs that
are miles away from the surface location; 3D seismic and electromagnetic mapping methods that
improve imaging of oil and gas reservoirs; and enhanced oil recovery techniques that significantly
increase oil recovery in producing fields.

The oil industry players maintain that the world isn't even close to an oil shortage. And if
we were to take all their words at face value (something history might discourage), it'd be
safe to conclude that, especially with unconventional oil reserves, we aren't going to be
facing oil shortages within our lifetimes, right?

Not so fast.

It's indeed likely that these sources of oil exist at the levels stated. But as we touched
on earlier, supply is not as relevant a metric as whether we can access that supply at an
acceptable cost.

Conventional oil, the source that's powered modern civilization as we've known it, is
extracted from the ground by digging a well. And digging wells through 90 feet of Saudi
sand is a whole lot easier than extracting oil under miles of rock, ocean or polar ice.

Indeed, even though ExxonMobil claims that we have plenty of oil, they somehow require
"more complex technologies" with "higher levels of investment" to get it than the kind that
worked just fine for the past 150 years.

Why is this? Because the majority of Earth's easily accessible oil in conventional reserves
is diminishing, so they need to get oil from unconventional sources, namely "oil sands,
ultra-deepwater, and arctic resources."

Let's think about that for a second. They want to drill in ultra-deep water, where pressures
are so great that steel structures crush like paper bags, which turned out great in the Gulf
of Mexico. They want to go to polar regions, drill through ice that is sometimes miles
thick to extract oil, and then transport it thousands of miles across the deadliest land
environment known to man. They want to extract oil from shale and tar sands, which is
inefficient and environmentally destructive. And accomplishing any of these goals will
require an extensive array of highly expensive equipment and processes.

That latter issue of expensive equipment and processes is key, because most of the
infrastructure we use to extract, refine and transport oil is geared for conventional oil. With
that in mind, let's ask ourselves a question: when a multinational conglomerate with a
market cap of $367 billion uses phrases like "more complex technologies" and "higher levels
of investment," how much money do you think they're talking about? And what kind of
external costs might we face once we increasingly shift our focus toward extracting oil
from unconventional resources?

The answers to these questions are significant and stand to impact several important areas
of our economy, society and way of life. But before we go into the financial realities behind
unconventional oil extraction, the first price we pay is through environmental impact.

Although recent memory is littered with examples of the petroleum industry's sketchy
environmental track record when extracting and transporting even conventional oil, its
record darkens substantially with unconventional oil. For example, take note of the
following images, which show areas of Alberta, Canada, ruined by oil extraction
from tar sands:

Images: a strip of destroyed landscape cut from the wilderness; a before-and-after picture
of an Alberta river valley; normal landscape and countryside in Alberta; the end result of
widespread oil extraction from tar sands.

Keeping in mind that we're likely in the midst of Earth's sixth great extinction event, this one
caused by us, this kind of environmental destruction certainly doesn't help.

But even if we were to embrace our inner cynic and pretend the environment doesn't
matter, the financial impact of unconventional oil extraction is just as damaging. That's
because unconventional oil requires expensive equipment, processing systems and
extraction methods that conventional oil does not. Consequently, relying on
unconventional oil comes with tremendous cost externalities.

Chart Explanation: In the year 2000, there were only 7 mega oil projects that cost $5 billion or more to
build. In 2012, there were 37, an increase of more than fivefold.

This brings up a concept known as breakeven price: the price a barrel of oil must
be sold at for a drilling operation to be profitable. If the price falls too low (as in our
recent crash of oil prices due to high supplies of unconventional oil), drilling rigs without
deep pockets must fold their tents, and petro-states face growing budget deficits as their
primary revenue source diminishes. Yet as conventional reserves deplete and we
increasingly depend on unconventional oil resources to supply growing global demand, the
breakeven price of oil will instead underpin rapid and sustained future price increases.

Chart Explanation: This chart shows how much certain types of oil cost to extract. The breakeven price
for Middle Eastern oil is, on average, less than $20. But for unconventional oil, the breakeven price is
4-5 times that of conventional oil sourced from the Middle East.

Presently, the breakeven price of a barrel of crude oil from Saudi Arabia is about $10.
However, the breakeven price for a barrel of crude from unconventional resources is far
higher: breaking $80 per barrel in most cases, and upwards of $100 for arctic resources and
oil shales once government royalties, permits, taxes and other expenses are
incorporated. That's the price oil must be sold at just for these extraction methods to break even.
In a world that has seen oil prices largely set arbitrarily and artificially by manipulating the
supply of easily extractable crude oil on a global market, the possibility of a day arriving
when we have to depend primarily on unconventional resources should give us pause.

Indeed, recalling ExxonMobil's reliance on "oil sands, ultra-deepwater, and arctic
resources," we see that the breakeven cost of extracting oil from these resources is eight to
ten times Saudi Arabia's, and four to five times that of the Middle East as a whole.
In the context of rapidly growing global demand for oil and depleting conventional
reserves, this will put oil prices in a pressure cooker.
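Expressed as multiples, the breakeven comparison works out as follows (the $10 and $80-$100 figures are the estimates cited above, not independent data):

```python
# Breakeven multiples relative to Saudi crude, using the text's figures.
saudi_breakeven = 10              # dollars per barrel
unconventional_range = (80, 100)  # dollars per barrel, typical to high end

for price in unconventional_range:
    print(price / saudi_breakeven)  # 8.0 and 10.0 times Saudi's breakeven
```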

Once other externalities are considered, the situation worsens. To explain, we'll look at
something called EROEI, which stands for Energy Returned on Energy Invested. It's the
ratio of how much energy is delivered compared to how much energy is required to extract
that energy. According to an article published in the Royal Society of Chemistry,
conventional oil production has an EROEI around 10-20:1 (we'll assume 15:1 for our
purposes). That means conventional oil production delivers fifteen barrels of oil for every
one barrel of oil invested as energy for extraction.

With unconventional oil, tar sands has an EROEI of 3-6:1 and oil shale is around 1.5-4:1.
(For those inclined, this data is independently supported by another article in Forbes
magazine that's worth reading). This means that where we once had to invest one barrel of
oil in energy to extract 15, with tar sands it may be as low as one for every three, and one
for every two with oil shale.

If we were to assume an EROEI ratio of 3:1 across the board for shale and tar sands, that's
a fivefold efficiency loss compared to conventional oil. The processes necessary to extract
unconventional oil also consume lots of water. According to the RAND Corporation, about
three barrels of water are required to extract one barrel of shale oil, and the same is true
of extracting oil from tar sands.

At this ratio, shale and tar sands operations producing 3 million barrels of oil a day would
require 378 million gallons of water per day, water that remains tainted with
toxic chemicals, making it unsuitable for consumption afterwards. In the context of global
water scarcity this is problematic to say the least, and in the context of global oil supplies
this degree of inefficiency will do little to moderate price increases.
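The efficiency and water figures above combine as follows (EROEI and water ratios as given in the text; real-world values vary widely by site):

```python
# EROEI gap and water requirements for a 3-million-barrel/day operation.
eroei_conventional = 15   # barrels returned per barrel of energy invested
eroei_shale = 3
print(eroei_conventional / eroei_shale)   # 5.0: a fivefold efficiency gap

GALLONS_PER_BARREL = 42
daily_oil = 3e6                # barrels of shale/tar sands oil per day
water_barrels = 3 * daily_oil  # 3 barrels of water per barrel of oil
print(water_barrels * GALLONS_PER_BARREL / 1e6)  # 378.0 million gallons/day
```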

Furthermore, once extracted, oil from these sources is usually not of ideal quality, meaning
it must be heavily processed for use as a conventional oil stand-in. The same is true
of methane hydrate, large frozen methane deposits on the ocean floor that can be processed
into fuel. But processing in this context is no easy feat; it's an expensive obstacle that stands
in addition to yet another complicating factor with oil extraction from unconventional
resources: location.

Even if unconventional resources hold enough oil to meet global demand, many if not most
exist in places that are geographically remote and environmentally inhospitable to
mechanized equipment. Accordingly, any infrastructure necessary to extract and refine oil
from these resources must either exist at those locations or at a place extracted oil can be
shipped to and from. This presents daunting logistical challenges and infrastructure
requirements, all of which are highly expensive. Cost externalities continue further in the
form of increased and unexpected maintenance of machinery, cost of materials,
transportation of personnel and equipment, housing, labor and so on, each with its own
built-in multiplier effect as the distances from extraction to processing to sale increase.

Oil pipelines cost millions of dollars per mile to build: the Keystone XL pipeline
was estimated to cost $10 billion as of 2015, and would have added an annual oil
transmission capacity of just 303 million barrels. That's $10 billion to transport 0.87% of
humanity's annual oil consumption. Oil tanker ships cost billions of dollars to build, and
only certain types of ships can access hostile environments without sharing the fate of
the Exxon Valdez. Fleets of oil tanker trucks cost billions of dollars. Fleets of excavation
equipment cost billions of dollars. The maintenance, operations and fuel for all of the above
are added on top. And so on.
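To put the Keystone XL figure in perspective, the share works out as follows (capacity and consumption figures from the text):

```python
# Keystone XL's annual capacity as a share of global oil consumption.
pipeline_capacity = 303e6   # barrels per year
global_consumption = 35e9   # barrels per year

print(round(pipeline_capacity / global_consumption * 100, 2))  # ~0.87 percent
```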

Once the costs of logistics, operations, equipment, extraction, processing, labor and
transportation are combined with the fact that unconventional oil extraction is far less
efficient than conventional oil, it becomes clear that it's tremendously more expensive,
with deep-seated cost externalities nearly every step of the way.

Even today, Russia needs to invest $100 billion annually in its oil industry to extract
arctic oil as its conventional reserves deplete, and at 143 million people, Russia's
population is only 2% of the planet's. The Organization of Petroleum Exporting
Countries (OPEC, the main player in the global oil business) assesses that the world will
need $10 trillion in oil-related investments to satisfy global energy demand by 2040.
They state further that oil and gas will need to supply 53 percent of global energy at that
time.
Let's look at that one more time: According to OPEC, the world needs to invest $10 trillion
to produce 110 million barrels per day by 2040. The Energy Information Administration
says we'll need 119 million barrels produced per day by 2040, and the United States
Joint Forces Command estimates the world will need 118 million produced per day by 2030.
So it's quite likely that OPEC's demand projections are on the low end, making any
investment cost proportionally higher.

Even as-is, if geared towards unconventional resources (which it nigh certainly must be),
that's a $10 trillion investment in an extraction process so inefficient that it uses one barrel
of oil in energy for every three produced, and requires another three barrels of water on top.
In a world where we're facing unprecedented water scarcity. In a world where oil prices
dominate economies, our conventional reserves are depleting and the breakeven price of
unconventional oil extraction ranges between $80 and $120 per barrel and sometimes even
higher. And we're going to rely on that model to satisfy growing future energy needs? A
whopping 53%, as OPEC predicts? Simply. Delusional.

So of course the petroleum industry would be in the business of touting unconventional oil
supplies as a solution while sidestepping extraction costs. It's in their best interest: if oil
becomes more expensive, they make more money. So as they promise us that oil will be
plentiful for the long-term future, they simply omit the unavoidable reality that once we
depend on unconventional oil to supply our growing energy needs, the price of oil will
strap on rocket boosters and shoot to the moon. I mean, why would they care? After all, it's
we as a people who ultimately pay the bill. And therein lies the rub.

Petroleum interests don't care if oil becomes more expensive, nor do they need to, because
they're acutely aware of the fact that oil is essential to our society and we have to buy it
from them whether we want to or not in order for our economy to function.

This introduces a concept known as demand destruction, which is an important
consideration in the context of oil demand in relation to its price. With most commodities, if
something gets too expensive, people will simply stop buying it or look for cheaper
alternatives. In essence, demand for that specific commodity is destroyed. Yet oil in our
society is different from other products because it's indispensable to vital social functions:
food production, manufacturing, transportation, chemical engineering, defense, etc.

People can't choose to stop buying gasoline because they need to get to work. People can't
choose to stop eating if oil prices make food more expensive. Our economy can't just choose
to stop working if oil makes manufacturing and transportation too expensive. Because it's
so critical and its use is so widespread, demand destruction can't earnestly apply to oil over
a short or even medium-term time period. While sustainable alternatives to oil exist,
switching to them will require a large jump that can't be taken overnight, nor swiftly
in an economy dominated by skyrocketing oil prices.

Sure, there's natural gas and electric vehicles, but we can't wave a magic wand and
affordably retool hundreds of millions of vehicles to run on these energy sources. Nor can
we expect hundreds of millions of people to spend tens of thousands of dollars to buy
electric cars. And none of this says anything about overhauling millions of fuel stations
nationwide, nor replacing oil's role in the myriad industries it is presently indispensable to.

Conservation efforts are great ideas, but they just delay the inevitable until a sustainable
alternative can be rapidly implemented on a large scale. Barring that, we have little choice
but to continue relying on oil to power our civilization, effectively chaining us to oil and
the consequences brought by its ever-increasing price and scarcity.

So even if, in the best-case scenario, we were able to rely on unconventional oil resources to
satisfy global demand, the costs associated with doing so will still cause the price of oil to
rise beyond the ability of many societies to afford it, making it little different than if we ran
out of oil altogether.

While America might be immune in the short term from the consequences of that
eventuality, as we're producing lots of oil and perhaps could increase subsidies, the price of
oil is still set by the global market. Once oil prices get high enough, the resulting economic
damage will still take place. Which brings us back to water.

By themselves, the impacts of water or oil scarcity are disastrous. However, the danger
brought by either problem increases dramatically when they are combined, and should the
full impact of that convergence be felt in today's world, it would threaten the existence of
civilization as we know it.

That's because the resources our advanced societies require to operate are running low;
should this problem remain unsolved, societies will eventually be unable to continue
functioning, with cascading consequences for modern life. Beyond the humanitarian crises
that would arise as a result, this would also bring about unprecedented levels of unrest and
conflict that would eventually reach global scales. And in the nuclear age, global conflict is
an extinction-level event.


The primary cause of conflict between states is resource scarcity and the economic damage
caused as a result, and never before has humanity faced a resource crisis as severe as the
one we will soon be facing. Accordingly, once water and oil become scarce and/or
expensive enough to damage economies, nations will be placed under increasing pressure
to secure or defend their supplies of both through military action.

In March 2015, Russia conducted a massive military exercise in their arctic territory
involving some 80,000 troops, 220 planes, 41 ships and 15 submarines. The exercise
centered on the protection of vital arctic resources, of which Russia has significant holdings.
It was conducted by Russia's new Arctic Joint Strategic Command, created in
December 2014 specifically to deploy to northern latitudes.

It maintains multiple bases in the Arctic Circle that, so far away from any other national
border, clearly serve a defensive and/or expeditionary role.

Russia is able to fortify itself in this position because it's a powerful country with a highly
capable military, and thus can secure resources through hard power. Other nations are not
so lucky. In the face of growing scarcity, those lacking resources and the ability to
affordably acquire them will face internal unrest. Whether directly or indirectly, much of
the Middle East today already reflects that result.

Resource-rich nations that can avoid unrest yet can't defend their resources via hard
power will eventually face greater risk of invasion or pillaging through hostile
economic takeovers (as told in John Perkins' Confessions of an Economic Hitman). Those
that wisely fear this result and still have the means to avoid it will seek greater alliances
with more powerful nations, which will come to influence their domestic affairs.

But these allegiances will not deter hostile military action forever, as increasingly severe
resource scarcity will inevitably force nations to invade others to secure water and oil for
the continued operation of their society. Indeed, as upwards of 65%-90% of China's natural
water supplies are polluted, how might you predict they're going to acquire water for 1.36
billion people? Hint: it won't be by way of holding a bake sale. Rather, as history shows
clearly and nigh uniformly, it will be by way of military conquest. Yet in the paradigm of
modern warfare and intercontinental military alliances, this degree of conflict sparks
regional wars that eventually transform into worldwide war, in this case the third
manifestation of its kind.

It's difficult to assess exactly when resource scarcity will grow dire enough for this to
happen. Lots of factors are at play in our world, some aggravating and some mitigating,
and how they dance on the timeline of our future can change its course significantly. But
regardless of what date proves accurate, the fact remains that unless we shift the basis of
how humanity interacts with our planet and acquires resources, our rapidly expanding
population will eventually bring us to the day when there are not enough resources to
satisfy our needs, with armed conflict being the end result.

Yet even though this reality is both probable and well-supported by our history, for the sake
of argument we'll consider another possibility: that global leaders refrain from waging war,
realizing that military victory will only gain a temporary supply of resources at best.
Instead, we'll assume they choose to maintain peace while seeking to also maintain the
status quo with incremental progress towards mitigating resource scarcity. We'll consider
that possibility, however unlikely it may be, because it's important to emphasize that if
large-scale action isn't taken soon, the fatal consequences of resource scarcity will eventually
arrive regardless, as will global conflict.

Here's why. When conventional oil reserves diminish amid growing worldwide water
scarcity and rapidly increasing global demand for both, their prices will spike
exponentially. As the price of water and oil spikes, so will the price of every product made
with or delivered by them. While most items, processes and services would see a huge cost
increase, the greatest impact to the average person worldwide will be on food.

Most of us do not grow crops, raise and slaughter animals or hunt for our food. With few
exceptions, we buy food from the grocery store. And to most people, that's all we've ever
known. We have become entirely dependent on this system of societal convenience to
sustain ourselves, and we are largely helpless without it. Yet as with any commodity sold in
a market, we purchase our food with money.

Consequently, the availability of food is proportional to its cost, and the price of food
depends on the price of water and oil, as they're essential to food production.

Food is grown with water and cultivated by machines that run on oil. Food is processed
and packaged with materials made from water and oil, by machines that run on oil. Food is
delivered to store shelves by trucks that run on oil, and is purchased by people who get
there in cars that run on oil. The cost of food production incorporates the price of water and
oil every step of the way, so when they reach a certain level of scarcity, food will become
unaffordable for large segments of global society.

This is not a far-fetched prediction. Many of us are unaware of the current hunger statistics,
and they are staggering. Globally, nearly half the world lives on less than $2.50 a day.
Roughly 1 in 9 people worldwide do not have enough food to live healthy lives,
and according to UNICEF, roughly a billion people entered the 21st century unable to read a
book or sign their names, equivalent to the entire human population just 100 years prior.

Yet we're not really that much better off domestically in this area, either. The United States
has among the worst rates of poverty among all industrialized nations (OECD), as 15.1%
of all Americans (roughly 1 in 6) and 1 out of 3 American children are impoverished as of
2015. The childhood poverty rate today is 73% higher than it was in 2006, roughly 40% of
American adults can expect to live below the poverty line sometime in their lives and half
of American children will receive food assistance from the government before they turn 20.

Source: UNICEF

While it's true the term "poverty" is relative, and people living under the poverty line may
still have a working television, electricity and running water, many simply are financially
unable to afford food at a sufficient level to feed their families throughout the year and
require government assistance to do so. Although the causes of their poverty may be varied
and debatable, the underlying fact remains that once resources become significantly more
expensive, the impact and scale of poverty nationwide (to say nothing of global society)
will get much worse.

To explain how, let's look at American household income versus the cost of living for
essential expenses.

The median income for an American household before taxes is $53,657 as of 2014. The
nonpartisan Tax Foundation estimates the average total income + payroll tax burden is
31.5% nationwide. Although comparing average tax rates to median incomes can get tricky,
most income statistics in the U.S. are given as median figures, as average incomes are
skewed higher by income disparity (1% vs. 99%), and tax rates vary wildly by location and
type of employment (self-employed earners generally pay significantly higher taxes).

If we were to compare this 31.5% figure to median income, that would bring the after-tax
net income to roughly $36,750 a year. It's unclear to what extent local, municipal and state
income taxes are included in this figure; the same is true of other local taxes, such as
property taxes. The Social Security Administration estimates a significantly lower after-tax
median income, coming to $28,851 for fiscal year 2014.

As wages, taxes and costs of living vary by year, occupation and location, we'll split the
difference between the Tax Foundation and Social Security Administration's assessments
and come to $32,800 for the estimated median U.S. household income after taxes. Over a 52-
week year, that's roughly $630 per week.
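The split-the-difference estimate above can be reproduced in a few lines, with all inputs taken from the figures already given:

```python
# Reproducing the after-tax income estimate above (all inputs from the text).
MEDIAN_PRETAX = 53_657       # 2014 median household income, before taxes
TAX_BURDEN = 0.315           # Tax Foundation's combined income + payroll rate
tax_foundation_net = MEDIAN_PRETAX * (1 - TAX_BURDEN)  # ~$36,750
ssa_net = 28_851             # Social Security Administration, FY2014

after_tax = (tax_foundation_net + ssa_net) / 2
print(f"Estimated after-tax income: ${after_tax:,.0f}, or ${after_tax / 52:,.0f}/week")
```

Rounded to the text's precision, that lands at roughly $32,800 a year and $630 a week.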

With that in mind, consider the following:

According to the USDA, the average weekly cost to feed a family of four ranges from
$146 to $289, depending on how frugal or liberal the family's budget is. For our
purposes, we'll split the difference and come to $218 per week. That's 35% of the
median weekly income.

According to the U.S. Census Bureau, the median rent payment is $847 a month
($195/week). According to Realtor Magazine and LendingTree, the national average
mortgage payment is $1,061/month ($245/week). These figures respectively represent
31% and 39% of the median weekly income.

According to the Milliman Medical Index, the average total healthcare expense for
a family of four is $25,826 a year, which includes both employer and employee
contributions.

Source: Milliman Medical Index (MMI)

In their 2016 report, they assess that the average employee payroll deduction for
healthcare contributions is $6,717 (which is generally not taxed as income), with an average
employee out-of-pocket expense of $4,316 (which would come from taxable income). At
$83/week, that's 13% of median weekly income, but that only covers full-time
employees with employer-sponsored healthcare.

Should that family not receive healthcare from their employer, or should they be self-
employed, their annual healthcare premiums can exceed $17,500 according to the National
Conference of State Legislatures. At $337/week, that's 53% of the median weekly income.

While fuel and energy costs vary by region and climate, Americans still devote
significant portions of their income to this expense. According to the Energy Information
Administration, the average household of four pays $1,962 a year for gasoline (which is
lower than normal due to the shale oil boom). Depending on the energy source, the EIA
estimates that Americans pay between $578-$1,392 in annual heating costs (we'll assume
$800 for our purposes). Add in an average of roughly $1,369 per year for electricity,
and we come to an approximate total of $4,135 annually for a household of four's energy
costs. That translates to roughly $80 per week, or 13% of the median weekly income.

Now let's add the average costs of food, rent/mortgage, healthcare and energy together
and compare them to median income:

With employer-sponsored healthcare: these expenses come to 92% of median household
income if they rent and 100% if they own their home.

Without employer-sponsored healthcare: these expenses come to 132% of the median
household income if they rent and 140% if they own their home.
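The four scenarios above can be tallied directly from the weekly figures already established. This is a sketch using the text's rounded inputs, so individual percentages may differ from those above by a point of rounding:

```python
# Weekly essential expenses vs. median weekly income (inputs from the text).
WEEKLY_INCOME = 630                  # estimated after-tax median income / week

FOOD = 218                           # USDA mid-range food budget, family of four
RENT, MORTGAGE = 195, 245            # median rent / average mortgage
EMPLOYER_HC, PRIVATE_HC = 83, 337    # healthcare with / without an employer plan
ENERGY = 80                          # gasoline + heating + electricity

for hc_label, hc in (("employer healthcare", EMPLOYER_HC),
                     ("private healthcare", PRIVATE_HC)):
    for home_label, housing in (("renting", RENT), ("owning", MORTGAGE)):
        total = FOOD + housing + hc + ENERGY
        print(f"{hc_label}, {home_label}: {total / WEEKLY_INCOME:.0%} of income")
```

Either way the math is run, essential expenses consume roughly all of the median household's after-tax income once housing, food, healthcare and energy are combined.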

And that's before any other costs of life are applied: clothes, car payments, telephone,
internet, student loans, luxuries, etc.

In short: life already costs more than the median wage earner can afford, causing them to cut some
form of these essential services to get by (which is a contributing reason why so many
Americans lack healthcare and why the childhood poverty rate has increased so sharply).
And keep in mind: these are median figures; half the country makes less than that,
and one in four households makes less than $25,000 a year before taxes.

From the charts below, we see that the costs of these essential services are rising faster
than the rate of inflation (using the Consumer Price Index as a benchmark), thus this
situation is getting worse every year.

Chart explanation: these two charts show that the prices of essential goods and services have risen faster
than the rate of inflation (Consumer Price Index). The median family income has stayed flat as inflation
has increased, but college tuition and fees, housing, healthcare and prescription drugs have all risen in
cost beyond that. That leaves less money in the hands of consumers, who in turn have less money for
discretionary spending.

In addition to greater debt and higher costs of living in general, this has stressed our
economy to dangerous levels. As a result, the median wage earner (or anyone making less
than them) does not have a buffer to sustain a significant price increase in any essential
resource, let alone multiple resources.

Already, analysts are predicting future food prices will at best stagnate (Bloomberg),
whereas others estimate food prices will continue to rise due to environmental problems
and growing demand (British Government, World Bank, National Geographic). Add in
increased resource costs, and food prices naturally rise higher.

Applying this to the figures above, you can see how even a 10% increase in the price of food
and resources would severely impact our society. When the numbers rise to 20%, 30% and
upwards beyond even that, it becomes clear how a sharp price increase in essential
resources can place our society in financial peril.

And that's in the United States; the picture for the rest of the world will be far less rosy. A
billion people are already malnourished today, and 80% of the world lives on less than $10
a day. So when resources become scarce and their prices spike skyward alongside
ever-increasing drought, poor crop yields due to climate change and the mass overfishing
of the oceans, it's difficult to conclude this leads to any other result besides widespread
famine on a global scale.

Then it gets worse.

As the consequences of accelerating resource scarcity take their toll, their collateral damage
will grind economies to a halt. That's because when the prices of non-essential goods rise to
a certain level, people stop buying them if they can (demand destruction), and if people
stop buying things, companies won't order products to stock shelves. When companies stop
ordering products, manufacturers stop making them, and if manufacturers stop making
things, then they stop employing people.

This will amplify economic woes and perpetuate an economic tailspin that will increase
poverty and further prevent people from being able to purchase essential items.
Consequently, nations will be faced with increasing levels of economic depression and
social unrest, leading to circumstances that risk collapsing social order and ultimately
society itself.

The majority of us live our lives largely unaware of the resource requirements of our society
and the factors that impact them. We've also grown up expecting similar (or better)
economic conditions than our parents had. Few of us have prepared for a major increase in
the price of food or resources, and fewer still have prepared for the economic damage that
will be inflicted by their growing scarcity.

This contributes to a situation where, as social actors, human beings function like sheep (if
you would forgive a crude analogy that can apply to large groups of people). And to quote
former Army Lieutenant Colonel Dave Grossman, an authority on the psychology of
violence (even if overzealous), sheep have two speeds: graze and stampede. And
stampede is exactly what we do if we find ourselves starving and desperate.

There are few lengths people won't go to in order to feed themselves or their families, so
when they start to feel true desperation, it leads to desperate measures and ultimately social
revolt. Indeed, the Arab Spring was fueled largely by food prices, and the same is true of
the Syrian civil war. Applying this to the world as a whole, in addition to causing long-
simmering tensions such as income inequality, nationalistic resentments and ethnic
differences to boil over, it risks creating a critical mass of global unrest that may prove
impossible to stop.

In an interconnected world fueled by social media, once the spark is lit that ignites revolt,
modern communications give people the ability to outmaneuver state security forces and
severely limit their response capabilities, even in authoritarian states. If we remember back
to Egypt during the Arab Spring, Hosni Mubarak's security forces maintained sophisticated
secret police and surveillance infrastructure with far fewer legal restrictions than western
security services. The same was true of the security forces of Tunisia and Syria. When the
revolution began, many governments and intelligence services were caught off guard at the
speed with which people were able to coordinate efforts to outflank state security forces,
and some predicted the revolutions would fail. Decentralized mobile internet makes this
possible, and as it's difficult to shut down in totality (especially if your economy requires it),
it's a tool that will only become more potent over time.

Then there's the issue of logistics. As we saw in Japan after the 2011 tsunami, the northeast
after Hurricane Sandy and the Gulf states after Hurricane Katrina, a ravaged countryside
can render a government powerless to deliver even basic necessities to people, and these
cases had the luxury of a cooperative populace in a developed nation. Providing resources
and vital services to millions of people daily in a well-functioning society is challenging,
and doing so for 7.4 billion people across the planet is that much more so, even in ideal
cases. So if much of the world is in uprising, maintaining social order and regional stability
will become effectively unachievable.

There are multiple potential flashpoints for this to happen, as is already evident in much of
the world today. The massive unrest in the Middle East/North Africa is one example (take
your pick where). The brinkmanship between India and Pakistan is another, and both
nations stand to lose badly with climate change and resource scarcity. But perhaps the most
consequential flashpoint to the United States is Europe. While the American economy is
improving in certain regards, Europe has been less fortunate. Much of Europe is already
experiencing mass social unrest due to high unemployment rates among youth, and several
European countries are seeing progressively more violent clashes against security forces.

Due to the European Union's policy of open borders and work visas for citizens of member
states, as global conditions deteriorate, the migration of individuals from poorer nations
towards those with healthier economies has accelerated, increasing social and ethnic
tensions. The exit of Great Britain from the European Union is exhibit A of this, as are the
most recent riots in France that have seen hundreds of thousands of people take to the
streets over labor law changes and austerity measures.

This situation is especially delicate as Europe faces an unprecedented influx of migrants
fleeing conflict from North Africa and the Middle East. Turkey is already at the breaking
point from Syrian refugees, of which there are around 2.5 million in-country, and
European countries are struggling to admit refugees numbering in the hundreds of
thousands to low millions.

The humanitarian crisis presented by these refugees should in no way be discounted, but
their numbers are a drop in the bucket compared to those who will be displaced by future
resource scarcity. Today, 65 million people are displaced worldwide, nearly 15 million more
than in 2013. According to the UNHCR, 34,000 people are forcibly displaced every day as a
result of conflict and persecution. Across a calendar year, that adds up to 12.4 million.
As 85% of humanity lives in the driest half of the planet, it's hard to imagine how the future
scale of water, oil and eventually food scarcity, and the conflicts that come with it, won't
displace hundreds of millions of people.
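The annualized figure above is a straight multiplication of the UNHCR daily number:

```python
# UNHCR's daily displacement figure, scaled to a calendar year.
DISPLACED_PER_DAY = 34_000
per_year = DISPLACED_PER_DAY * 365
print(f"{per_year / 1e6:.1f} million people displaced per year")
```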

With that said, what do we suppose happens when a region of 200, 300, 500 million people
runs desperately low on water or food? Do they just roll over and perish neatly to the
convenience of western opulence? Or do they go the route of the Syrians before them and
migrate to survive by whatever means necessary?

This video documents a second-per-day snapshot of the life of a Syrian child at the start of
their civil war, except filmed in a westernized setting. Although its producer, Save the
Children, is no doubt seeking to strum a heartstring or two for greater donations, keep in
mind that not only is this video accurate, it's the G-rated version of what thousands of
Syrian children experience on a daily basis.

Put yourself in their shoes for a second and imagine what you'd do if that were your family,
your loved ones, your children. More specifically, what would you not do to save them from
that? Now multiply your answer by half a billion people.

So what happens once they show up on Europe's doorstep? It's functionally impossible for
Europe to support refugees numbering in the hundreds of millions. Even if European
countries wanted to (and they definitely do not want to), they simply lack the means as a
matter of math. Yet the only way to force that many people to leave is by force, morality be
damned.

As they have nowhere to go, standoffs ensue until a shot is fired by some nervous
teenager in a uniform. And once live ammunition is used on refugees, they turn into
combatants, powering the downward spiral towards greater conflict and collapse.
Established immigrant communities will face greater degrees of social oppression, and
those subject to it will become more radicalized, further increasing tensions across global
society. Amid the conflict, unrest, collapse and terrorism that spawns from these
circumstances (far beyond what we're already seeing today), it's hard to see this not
leading to a worldwide social and economic meltdown.

Should regional destabilization and economic collapse claim greater sectors of the global
economy, especially in the case of Europe, the ensuing shockwave will eventually reach our
shores. As Americans, we do have a breadbasket that's protected by two large oceans and
an unquestionably dominant military, but there's no way the global economy survives when
much of the world is facing this scale of affliction. So when the global economy crashes,
ours does as well.

Considering the state of American hunger and the average American paycheck, when
resource scarcity causes the price of food, fuel, clothes and most other items to skyrocket as
our economy crumbles, the causes behind the collapse of poorer nations will inevitably do
the same to us, save for some dramatic last-ditch effort.

But it's difficult to see that effort coming. We're $19.5 trillion in debt, and our government is
both broken by partisanship and bought by special interests. Those who hold faith that it
will come to its senses in the 11th hour to solve this problem predicate their belief on our
government uniting to do essentially the exact opposite of what it's been doing for the past
50 years. If one would like to bet our future on this hope, it is of course their right to do so,
but I'd steer them clear of offers to buy bridges in New York City.

It would be wishful thinking to believe we'd have the leadership and resources to buy our
way out if the dominoes really start falling. So if they do, there might be no way to
extinguish the flames. Our military might may hold out for a little while, but again,
we're only 4.3% of the human population. Our time will eventually come.

There are of course the optimists (see: tragedy of the commons) who would cast this
argument off as "end is nigh" banter, and point to millennia of warfare, the dark ages,
the black death, two World Wars, the Cold War and so on as examples of monumental
human triumph over seemingly insurmountable challenges.

Indeed, as humanity got along just fine without oil and with conflict for thousands
of years, many conclude that once resources become too scarce or expensive, after a few
years of social unrest, mass die-offs and military campaigns for resource acquisition, we'll
just regress to some sort of degraded state that runs on coal or another fossilized resource
like natural gas.

But their points, while arguably valid, are trumped by the height of the stakes. The bigger
we are, the harder we fall. If large sectors of global civilization were to collapse in the
future, we're not going to just pick up the pieces and rebuild, holding hands while
singing Kumbaya along the way. Rather, due to the advent of the thermonuclear device, it
may well be our extinction event.

Two atom bombs ended World War II, the largest conflict in history. The bombs dropped
on Hiroshima and Nagasaki were respectively gun-type and implosion-type fission devices,
and their explosive yields were 16 and 21 kilotons (thousands of tons of TNT). They were
roughly the size of a minivan and weighed several tons, which required their delivery by
large bomber aircraft.

Today's modern nuclear weapons are thermonuclear fusion devices (hydrogen bombs),
which have explosive yields in the hundreds and even thousands of kilotons. They weigh
only a few hundred pounds and are the size of a suitcase, yet are still easily capable of
wiping any city on Earth off the map.

These bombs are fashioned into warheads and placed within missiles like bullets in a
revolver, so that one single missile can carry up to 12 warheads (depending on treaty
limitations). When fired on an enemy nation, these missiles escape Earth's atmosphere and
break up mid-flight so that each warhead can guide itself down to a different target, and
upon detonation burn its surface to glass.

A common misperception is that only the United States and Russia have the capability to
launch weapons of this magnitude. Sadly, that is untrue. The UK, France and China have
them as well in smaller forms, and India and Pakistan are actively looking to upgrade their
nuclear missiles to have this capability as we speak. At the very least, each of these nations
(alongside Israel, it is widely assumed) maintains the capability to launch nuclear-armed
missiles within a strike range of 2,000 miles or greater, with yields exceeding the 100 kiloton
threshold. At 500% the power of the bomb dropped on Nagasaki, that's enough to level any
city on Earth. Every nuclear power outside of North Korea and Pakistan also has nuclear-
armed submarines that can launch missiles from any location they can travel to by sea, a
limitation that Pakistan is also actively seeking to remove.

There are approximately 4,000 cities with over 100,000 people worldwide. There are more
than 15,000 nuclear weapons in the hands of these nine countries with thousands of them
in missiles that can hit anywhere on the planet within 30 minutes or less. Do the math.

Even if we could avoid a global conflict among near-peer powers, or avoid the use of
nuclear weapons if one were sparked, the risk of nuclear destruction remains high. That's
because the regionally destabilizing effects of social collapse create circumstances in which
nuclear arsenals can become compromised. This is all the more true in nuclear-armed
countries like Pakistan, which has an unstable government infiltrated by extremists, and
not all nuclear weapons worldwide use Permissive Action Links.

Should a warhead be stolen and provided to a rogue organization with an agenda to
weaken a larger nation, all that would be required to smuggle it into a city would be to hide
it in any one of the millions of shipping containers that enter the world's harbors daily. If
even one nuclear weapon were detonated in a major city, even beyond our shores, it would
crash the global economy and put the free world on lockdown (civil rights after that? Good
luck). And if that city were within a nuclear-armed nation, that nation's restraint against
retaliating with nuclear arms would, at the very least, be sorely tested.

It's true that this result is not a certainty, and it remains possible that in the face of loose
nuclear warheads, or a nuclear exchange between warring nations, nobody will ever deploy
them. This is a belief shared by many, who hold that regardless of the circumstances,
nobody would ever be crazy enough to use a nuclear weapon. And perhaps they are right.
Maybe the theory of Mutually Assured Destruction is always correct, 100% of the time, and
there are no exceptions to it whatsoever.

Yet nations have nukes for a reason. So maybe they are wrong, and it only takes one
instance to prove that theory false. Are you willing to bet the future of our civilization on
the hope that they're right? I can't say in good faith that I am. But even so, that question is
merely one of severity, as the destructive results of global resource conflict will still
manifest regardless of whether nukes, or even world war, enter the equation.

We still have billions more people than we can realistically support in a world with rapidly
depleting resources, and there's no running away from that fact. So even if we avoided
taking such measures, the destruction wrought by social revolt and resource conflict,
alongside the famine, disease and death that come with them, would still be irreparable.

We won't have the resources to rebuild society, nor the resources to retool our economy
and infrastructure to run on what remaining fossil fuels we have left. We will have ruined
whatever chance we had at finding a sustainable energy technology to provide our
resource requirements worldwide, and even if we did eventually find it, we will have
devoted so much of our effort to fighting each other over it that we won't have a chance to
see it implemented.

Humanity has limitless brilliance, and we have the capability to solve nearly any problem.
But this is a problem that our long-term survival depends on us solving. The
uncompromising reality of math and human nature shows that the ultimate result of an
exponentially expanding population on a planet with finite resources is resource scarcity,
and resource scarcity leads to resource conflict. Thus unless this problem, our problem, is
solved on a global scale, one day the critical resources we depend on will become too scarce
to power our way of life, and we will begin fighting over them with ever-increasing intensity.

Should that come to pass, in the best case scenario global strife and war will consume our
future and we will see misery, suffering and death on scales unrivaled. And in the worst?
This problem will set fire to our world, and it will burn our civilization to the ground.

This is the reality of our present circumstances. But it's also a reality that can change. Since
the beginning of our time we've been dominated by resource scarcity. It has fueled nearly
every horror that's blighted our past, and it has continually held us back as a species. Now
that it comes once again to destroy what we have collectively built, our primary focus
should be its defeat and removal from the human condition. And for the first time in our
history, we have the means to accomplish that very goal. Today, we can end resource
scarcity with finality, so that we can be unchained from its restrictions and empowered to
devote our full attention to advancing our world and the lives of those within it.

This ability is made possible through recent technological breakthroughs that enable us to
achieve goals that were once technically unfeasible. Occurring within advanced energy
technologies, computer processing and modeling, material sciences and the precision and
automation of manufacturing, these advances expand our capability to generate energy to
new heights. By selecting the most capable energy technologies we have available and
deploying them together in a framework, as a team, they can become greater than the sum
of their parts, interweaving the strengths of one technology to support the weaknesses of
another.

This proposed framework of cogenerating energy technologies is called Universal Energy,
and its purpose is to inexpensively generate enough energy, on a large enough scale, to
sustainably synthesize the five most critical resources our civilization requires: water, food,
electricity, fuel and building materials.

By design, this framework is intended to make resource acquisition a positive-sum game:
no matter how much of the pie is consumed, it will always produce more, faster than the
rate of consumption. And as the technologies behind Universal Energy already exist and
have been proven to work, they can be open-sourced for sustainable deployment anywhere
on the planet.

This will allow the entirety of global society, regardless of economy, location or climate, to
produce as many resources as it could possibly require by its own hand. Solving resource
scarcity is the core solution to the core problem of our way of life, the solution on which all
others may grow. For solving this problem at its root solves nearly all problems. We are all
in this together, and I believe that if we concentrate on what really, actually matters, we can
show ourselves that we have more stake in our future than we realized, and that we can
build it brighter than we had ever thought possible.

Source and Citation Policy
Universal Energy is a writing based on facts as they are, not as one ideology would prefer
they be told. Establishing these facts requires an extensive degree of research and citation
of the work of other researchers, scholars, think tanks, news reports and government
services, much of which is virtual in nature. As such, facts are cited inline in the form of
direct hyperlinks that can be viewed in real time, rather than having to go to a library, find
the book, find the page, and then verify that the citation matches up with the data.

Inline citations are provided not only for ease of rapid verification but also for readability,
as highlighted text indicating an external link (all of which open in new browser windows)
is much easier to read than AP/Chicago citation styles, which put author information (last
name, first name, title of book, page number) directly within sentences or within footnotes
and endnotes.

As this writing covers a large scope of material, maximizing readability is of paramount
importance, and thus I've opted to adopt a citation style reflective of that. The reason
laypeople don't read academic journals is that they're nigh impossible to read unless one is
a professional academic; my audience is the collective, and thus the citation policy of
Universal Energy is geared for the convenience of the collective above all else.

In practice, the following citation conventions are applied:

1. Short-form articles are directly hyperlinked and open in new browser windows.

2. Whitepapers, data PDFs, etc., are cited with page numbers directly embedded in the
URL. For example:, where p53 equals "page 53." As the "#" modifier in a URL points to a
named anchor, the PDF file will load fine with that URL. But by seeing that #p53, you
know to scroll to page 53 to access the data in question quickly and easily (same as #p20
points to page 20, and so on).

3. For whitepapers and data PDFs of a short or self-explanatory nature, or documents
designed to provide general conceptual information as opposed to a specific factual
citation, no page numbers are included in the URL structure, as they are not necessary.
4. For internal citations (previous chapters of Universal Energy), named anchor links are
strategically embedded in the source code to allow for easy retrieval of information.
Clicking these links will automatically take you to the information in question.

If you find a citation that you believe to be 1) broken (a dead link), 2) unclear, or 3)
misinterpreted, please get in touch and let me know so it can be fixed.


Universal Energy cites facts from a wide spectrum of sources under a transparent
methodology, described as follows:

1. Government Sources: these sources include both domestic and international
government agencies (Bureau of Labor Statistics, Environmental Protection Agency,
United Nations, World Health Organization, etc.). This writing considers these sources
reliable and factual unless cause is presented to believe otherwise.

2. Scientific / Technical Media: these sources include media outlets dedicated to
scientific and technical research (Scientific American, National Geographic, etc.). This
writing considers these sources reliable and factual unless cause is presented to believe
otherwise.

3. Academic Sources and Journals: these sources include university publications
(, etc.), academic journals (Nature, etc.) and press releases by university staff. This
writing considers these sources reliable and factual unless cause is presented to believe
otherwise.

4. Flagship Journalism: these sources include media outlets with an established
pedigree (Associated Press/Reuters, Washington Post, New York Times, Wall Street
Journal, USA Today, Newsweek, TIME, CNN). Even if various editorial boards have a
known ideological slant (the New York Times leans left, whereas the Wall Street Journal
leans right), they maintain a high degree of integrity in terms of factual reporting and a
commitment to issuing public retractions in the event they misstate facts. As such, this
writing considers these sources reliable and factual unless cause is presented to believe
otherwise.

5. Smaller-Circulation High-Brow Journalism: these sources include media outlets
with a high degree of ethical integrity in journalism but smaller circulation. They
include Forbes, Fortune, Slate, Wired, Ars Technica, CNET, local newspapers, etc.
Although they may have an ideological slant (Forbes and Fortune lean right, whereas
Slate and Wired lean left), their pedigree in ethical and factual reporting remains high.
This writing considers these sources reliable and factual unless cause is presented to
believe otherwise.

6. Ideological Mouthpieces: these sources include broadcast platforms for a known
political ideology (Fox News, National Review, Mother Jones, The Nation, The Blaze,
AlterNet, Huffington Post, MSNBC, Reason Magazine, etc.). Although these sources
often have solid reporting and analysis, their ideological slant is severe enough to
warrant their exclusion as sources in this writing, barring a few exceptions:

o In cases where the mouthpiece publishes material of uncanny excellence, such as
Lee Fang's breakdown of why special interests conspire to keep marijuana illegal,
an exception may be made and the material may be included in a citation.

o In cases where the mouthpiece publishes material contrary to its ideological slant
(Fox News supporting a traditionally liberal position, or Huffington Post
supporting a traditionally conservative position), the material may be included if
it is itself reasoned and/or well-cited.

o In cases where the mouthpiece is reporting in areas in which it has demonstrated
significant expertise (such as Reason Magazine tracking police militarization), the
material may be included in a citation.

Barring these exceptions, ideological mouthpieces are generally excluded.

7. Wikipedia: is cited only for general background information on concepts, locations,
overviews of systems or historical events. Although its editorial team has proven quite
adept at ensuring adherence to facts, its open-edit policy makes it unsuitable for direct
citations of specific information. However, for general topics (like how a jet engine
works, the technical details of a nuclear reactor, scientific concepts, etc.), it is extremely
useful for providing background information to people unfamiliar with the subject
material in question. For this reason, relevant Wikipedia articles may be included in
inline citations.

8. Think Tanks: are generally cited as reliable even if they have an ideological slant
(Heritage, CATO, Violence Policy Center, Rails-to-Trails, ACLU, EFF, Jane's), as they
have demonstrated a high degree of ethics in adhering to facts as well as a strong
degree of expertise in the areas they report on. This writing generally considers these
sources reliable and factual unless cause is presented to believe otherwise.

9. Industry Publications and Whitepapers: are material presented by organizations
with a vested commercial interest in the material they're reporting on. This
information is generally considered reliable and factual if it is itself cited and provides
calculations that can be independently verified. Corporate publications (such as
Wattway by Colas' data sheets) are generally considered reliable and factual, all the
more so if the company is established, as false advertising and misrepresentation of
performance in advertising are often crimes.

10. Third-Party Blogs and Statistics Services: these sources include blogs like Nate
Silver's FiveThirtyEight, the military blog War is Boring, and statistics services like,
Officer Down Memorial Page and (for global population in urban environments). In
the limited areas where they are used, they are considered reliable and factual, barring
any reason to believe otherwise.

If you, as the reader, come across information that you believe to be factually false,
please get in touch with information as to why you believe this to be the case.


As Universal Energy is the work of one individual, I have limited resources to ensure that
every aspect of the writing, at nearly 200,000 words, maintains 100% uniformity in
formatting. Especially as this writing exists in both document and webified form, a typo,
broken link or mis-formatted paragraph may occasionally arise. Additionally, although all
research was thorough and extensive, facts may change in the future, and it's possible
certain elements of a fact were misinterpreted. In the event these are brought to light,
changes will be made, but it is not feasible to issue public retractions for every change. For
larger and more consequential issues, a change log page will be created and released for
public review.