
MIND MAKING

The Shared Laws of Natural and Artificial Intelligence

PATRICK ROBERTS
CorMind.net

Version 20100112

Copyright © 2010 Patrick Roberts


All rights reserved

Paperback ISBN 978-1449921880

Hardcover ISBN 978-0-557-12059-8


For Colin Patrick Roberts,
the first human mind I made.
Clusters

Lessons from a Machine Mind
Guide
Knots
The Taxonomy of Minds
Formalities
To What End
Means to Ends
I: Means as Ends
Change
Strings
Freedom
The Axiom of Things
Mind as Means
Acts of Language
Reason's Reasons
Constellations
Mediums
Engines of Thought
Definitions
Lessons from a Machine Mind

I could have written a book about ending war or feeding the hungry, but I wanted to write about something more important: the thing that writes all books and ends all pains, that solves every problem in the endless chain of problems—mind.

There are already too many books on mind, by that name or another. Frankenstein books, made from older books, cut, shuffled, and sewn together. Evidently incomplete because we still lack the means to make powerful minds, beyond encouraging clever humans to breed. Even worse, we lack a science of mind with the word rightly defined and every supporting study organized beneath it.
What more have I seen? The working minds I made and the
ideas that animated their mindless parts. No hearsay or
speculation. No idea prized unless it made a mind more
powerful, swift or elegant.
Books by the merely industrious instead revel in the
ambiguity of natural language, confuse the issue with a
splatter of conflicting official opinions, mention curious but
irrelevant facts, or mislead you with unscientific anecdotes.
This book is more ambitious. It is not on philosophy,
science or reality, but why and how minds might invent
such things. The laws of mind, not the laws of gravity or
electricity, but the methods of the mind that made these
tools. It is not on brains of neurons, controllers or
computers, but the logical possibilities of mind, from
minimal axioms deducing all the kinds of mind that are and can ever be. An exhaustive analysis of the fundamental
possibilities, not a grab bag of topics. Leave the gory
ephemera of human brains to neuroscientists.

This is the first volume of a Euclid's Elements of mind, a universal eternal guide to minds written to be read for ten
thousand years. When the Sun is dead, an alien mind or
machine mind could read this and not only find it true and
useful, like math, as a system to impose on the reader's
world, but as a system fitting the reader himself, itself.

No jargon. Instead I take common terms that we presume to be uniquely human or animal—painful, selfish, moral—and
show that they apply to all minds of certain classes,
whether made of metal, code or cells.

These laws of mind are all that can be true for everyone,
everywhere, forever. They can't be false because they made
truth. Always true, you need never doubt them. In your
mind, they are the last possessions you can lose. By
comparison, all other knowledge is trivia.



Guide

Most chapters are of aphorisms: brief statements of principle, largely self-contained. An exhaustive treatment
of more than the simplest classes of mind would be too
long for a first edition with uncertain appeal. Either way,
you can understand that I prefer to spend my time defining
mind in working code, not hazy prose. Sets of aphorisms
also turned out to be much less imposing to readers than a
relentless, intense and contrived linear style.
My words should be familiar though I mean them with rare
precision. Every key term is defined by one aphorism
before others use it. If you prefer to take advantage of the
aphoristic form and read the book out of order, you can rely
on the glossary near the end.

Excuse the sparse examples. I lack the patience to include a dozen after every idea. Any of the few examples may fall
outside your expertise. If you're curious, I assume you can
find their explanation elsewhere. I don't want to burden this
book with fill. There is some value in such arrangements,
but that isn't the goal here.

The particulars are unimportant anyway. The real reward isn't the short-lived facts reported but the enduring quality
of thought that a writer by chance applied to a subject.

Knots

Lines of cause and effect pull everything apart. What if chance led a line to loop through itself? A causes B causes
more A causes more B, growing until exhaustion. What if
the loop twisted? Not-A causes B causes A causes not-B.
This knot of effects persists while all other things passively
disintegrate. I presume to call it a mind. Yes, merely this.

Put more plainly, a mind is a thing that acts, when needed, to preserve a state: a temperature, a speed, a body.
Intelligence is possession of a mind, measured by class,
speed and size. By this definition, minds fill and surround
us: thermostats, servos, speed governors, regulated genes,
brains, every organism.

[Figure: A minimal, thermostat-like mind.]

Mind, not agent, not cybernetic system, not negative feedback loop with amplification, not a homeostatic
system, not any other verbose obscure term for the highest
thing in the Universe. You could bind this idea to a new word. The whole system would remain as useful. But by
taking mind, I gain the foggy associations of a word with
history while adding a precise sense.

I find it arbitrary to reserve mind for a self-aware mind or a self-reproducing mind when the bulk of mind works
beneath such distinctions. These objects, even without the
grand features of human minds, can show enough
intelligence to be worthy of the word.
Feeling doesn't make a mind. A simulation of a human
would be intelligent in every practical sense. Learning
then? Learning alone, not as a means, makes a parrot.
Meanwhile non-learning minds show purpose and
creativity. The human mind is a poor standard. Set the
threshold at the lowest level. Layer distinctions above.
The three parts of a mind (see the sketch after this list):

1. End goal: a desired state.
2. Sense: how the present state is known.
3. Means: a way to change the state.
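
A minimal sketch of these three parts in code. All names here (Thermostat, read_temperature, run_furnace) are illustrative, not taken from this book; the point is only the shape: one end, one sense, one means, and a step that acts when needed.

```python
class Thermostat:
    """The lowest class of mind: one end, one sense, one means."""

    def __init__(self, target, sense, means):
        self.target = target   # end goal: the desired state
        self.sense = sense     # sense: how the present state is known
        self.means = means     # means: a way to change the state

    def step(self):
        # Compare sense to end; engage the means only while the end is unmet.
        self.means(on=self.sense() < self.target)

# Hypothetical wiring: a room that cools unless heated.
room = {"temp": 15.0}

def read_temperature():
    return room["temp"]

def run_furnace(on):
    room["temp"] += 1.5 if on else -1.0

mind = Thermostat(target=20.0, sense=read_temperature, means=run_furnace)
for _ in range(10):
    mind.step()
```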


What is not a mind?

– All but a few computer programs. Code marches in a straight line, blind to its effects. Blame programming languages.
– Positive feedback loops—fire, unconditionally expressed genes—even though they may self-replicate.

Intelligent things are distinguished not only by persistence but by varying action to persist. A water fountain persists the form of water against gravity but it has no means to sense its height or even that there's any water, much less a
means to change the flow.

Any thing that would preserve itself—organism, machine, enterprise, nation—in an environment, mindless or malicious, that devours the thing while changing the rules, must have a mind or many minds within.

We thinkers remain intolerably stupid. We don't know what we know, what we don't know, how we can reliably know,
whether there are limits, what they might be, or even how
to find those. We find some seemingly useful knowledge,
yet we don't entirely know how we found it or how to
guarantee finding more. Even that little knowledge may become untrue at any moment, yet we don't know precisely
why or how to anticipate that. The supposedly eternal truths
of logic and math are not so clear when we fit them to the
world. Worse, few humans are unsettled by this, happy to
solve trivial problems. The few who find this odd may not
be clever enough. Even if they make a dent, they may not
want to share it with you.
The solution: define the mind that sees and solves every
problem. Our minds are too few, slow, small and low.

3
The goal: to know how not to know, to find the crowning
knowledge that raises us above memorizing trivia.

Don't strive to understand the countless passive things:
physics, chemistry, mindless machines and tools. A thing is
useful so far as you don't have to know how it works. Study
minds, the kind of things that can make and mind the
passive things for you.
We want autonomous, self-governed objects, not fragile
reflexive machines that fail daily and demand unblinking
attention. To complete great work, recruit, make and
improve other minds, within you and without, natural and
artificial.

4
Cars don't have legs. Planes don't flap their wings. Studying
the brain may be a long path to intelligence. In either case,
we must understand more kinds of mind than human.

All life, from single cells to plants and animals, must have
one or more minds that build and preserve the organism's
self. Mind making is the Frankenstein project, making
minds from mindless parts, life from lifeless parts.

Ranks of work. Move straight to an end, make a thing that's desirable in itself: meals, artworks. Farther sighted, make a
tool, a thing that eases making desirable things: ovens,
paintbrushes, books. Best, make tools so general that they ease making other tools: die casts, programming languages
(compilers are programs that write programs), minds.
1. Maker: build or improve mindless things.
2. Tool maker: make a tool to ease making or improving mindless things. (Minds are tools that make tools.)
3. Define mind in prose. Improve existing human minds indirectly.
4. Mind maker: define mind in code. Make minds from mindless parts.

7
Higher than a universal law of gravity, the universal laws of
the law-making mind. Then why do non-physicists read A
Brief History of Time or any popular cosmology book? For
false sensations of ultimate knowledge.
Religions once combined philosophy and cosmology.
Science now rules the second but the association between
the two persists unconsciously. The models of cosmology
have no use to casual readers and no lasting value because
scientists will soon find different metaphors. At best you
have the academic aesthetic pleasure of studying brilliant
solutions.

No expertise claimed in any suggested scientific application. Simply that if an experience can be, at some
level, well modeled as the behavior of one or more minds,
in my sense, then my system of mind gives the deepest
framework. The usual use of theory: ideas to be applied by
specialists in ways unimagined by the theoretician.

Minds are complex combinations of deceptively simple rules. Example: if an act fails to yield the desired effect,
retry. With every act having infinite conditions—facts that
must be true for an act to have the intended effect—a mind
can easily fail to know that one is, for a moment,
unsatisfied. Mere periodic retries solve an immense class of
problems.
Present machines rarely manage even this. Engineers labor
to add persistence to a few steps when the medium should
automatically apply it to every part of the system. A leap
from designing blind assembly line sequences of behavior
to building senses and defining goals.
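
A minimal sketch of the periodic retry rule, assuming placeholder callables of my own (goal_met, act): the sense of success stays separate from the act, success is never presumed, and the retry is periodic rather than a tight loop.

```python
import time

def pursue(goal_met, act, interval=0.5, budget=20):
    """Retry `act` until the independently sensed `goal_met` holds."""
    for _ in range(budget):
        if goal_met():        # the end may already be satisfied
            return True
        act()                 # act, with no presumption of its effect
        time.sleep(interval)  # periodic retry, not a busy loop
    return False              # give up rather than stall forever
```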

10

How to prove a model of mind? Only by testing an analogous combination of entirely mindless parts.
Otherwise, you remain trapped in endless debates, never
reaching certainties because you can't suspend your own
mind.
Twenty-five hundred years of futile verbal philosophical
debate ends. Philosophy becomes an engineering problem:

Machine mind m outperformed mind n in a
statistically significant set of tests. n's assumptions
about reality are wrong. m's are right and are
complete because m contains no minds but those we
made.
A new profession: philosopher-engineer. The methods of an
engineer with the goals of philosophy.

Philosophy made science, almost. A new science of mind exposing the great variety of mind in nature. In a computer,
a non-trivial philosophy becomes an hypothesis to test and
compare. Essential philosophy remains above science,
something to be done in your head. That the machine can
judge is a judgment you can't delegate. Beyond that, you
shouldn't trust your mind more than you have to.
Theorizing is notoriously unreliable.

11

Psychology: the study of the soul, spirit, mind. Sociology: the study of sets of minds. How were such promising words
hijacked by witch doctors? Why do the best human minds
prefer to study mindless objects? Those human fields must
offer even more challenge than physics because they are
imagined as immense permutations of it. A definition of
mind can be the bridge from the rigorous sciences to the
subjects that are now voodoo for lack of it.

Manipulators of minds vs. manipulators of the mindless. Lawyer, politician, marketer, con-artist vs. physicist,
engineer, mathematician. There is more power, certainly in
the zero-sum sense that humans tend to follow, in controlling others than in controlling the immense
remainder of the Universe. Easier to mislead existing minds
than to shape a mind from mindless matter.

12

What each profession sees in a definition of mind. To the psychologist, a model of the human mind, unburdened by
the technicalities of neurons and chemistry. To the engineer,
designs for more reliable, powerful machines. To the
philosopher, proven ultimate reality. To the lay reader,
better knowledge of his mind, and of his world as an effect
of that mind.

13
Methods outrank truths. Few truths are really such. A
means to truth will outlast most of its results. In other
words, a method for uncovering useful facts will remain
useful longer than any one of the facts it made.
You advance the study of mind not by asking whether any
idea is true or real or any other concern with objective
being, but by asking what use is such a distinction to a
mind?
A possible Copernican shift, testing the movement of mind
to the center of our systems. A shift away from matter, but
not merely back to ideas. Instead, to mind, the cause and
use of ideas, matter, and things—tools then interpreted as
merely a mind's oldest and greatest inventions.

A return to idealism in the philosophical sense that you
cannot disentangle reality from mind, but now with a
purely material definition of that mind.

Some thinkers fear imagining any thing, at least other than humans, as having intentions, purpose. I suppose they want
to avoid the anthropocentric excesses of the past: ghosts,
spirits. Mind is physically real. Is there a physical negative
feedback loop or not? The errors followed a human, impure
base definition of mind. With a pure definition, there really
are minds, spirits, throughout Nature though not in every
thing and not of our class.

Not teleological (teleology: the study of purpose in nature) vs. mechanistic or purposeful vs. purposeless. The
mechanistic is the means of the teleological, including the
teleological's explanation of itself.

14

In what form to define mind? English, any natural language, is a needlessly poor form for definitions of mind:
horribly ambiguous and presuming precisely what is to be
defined. Syntax alone implies things, subjects and objects,
causes and effects. Formal languages remain: math, logic,
code—any system that a machine can evaluate. But even
the present best languages of thought—propositional logic,
first-order logic—still stand on those evolved prejudices.

The formal definition of mind is the authority, the primary point of truth. Formal language's advantage: when
evaluated, it yields a reliable result, at least in some ring of
an expression's endless waves of effect. Another mind interprets natural language—English, French—largely with
its unconscious, often using terms so opaque that no
conscious definition is practical.

15

Slim returns from a half century of artificial intelligence research. Causes? Naive philosophical misconceptions.
Failure to ensure transparency and relevance of results to
human minds. Misranking the problems of mind: putting
learning and logic above robust perception and action.
Philosophy and psychology failing to contribute complete
practical models.

16
No problem is ever really solved. A particular problem
occurs in the past, and, like anything, will never entirely
reoccur. We invent a class of problems, a class that would
have contained the original problem. Then we plan
prevention of any problem in that class. The solution's
value is proportional to the number of problems likely
prevented, divided by the cost of the solution.

Deep solutions vs. shallow solutions. The deep, those to large problem classes, tend to cost more than the shallow.
What of a class containing every problem that can ever be?
Are there at least partial solutions to it?

Mind is all that's shared by every answer. Improved mind pays more than any single shallow answer. While we wait
for the return on an investment in mind, we must content ourselves with crude solutions to urgent problems.

Not that any problem is fully known. A mind can only build
a model good enough that its time is then better spent
improving another model or a model of models.

17
For those who somehow think that increasing computer
speeds will ease defining mind. More speed is as likely to
find mind, much less one in a form clear to us, as manufacturing more typewriters and breeding more monkeys are to speed the reproduction of Hamlet.

Evolution stumbled on some minds, but we don't have billions of years times a billion billion billion simultaneously evolving single-celled organisms. Even
with that path shortened, they, like us, might evolve distinct
interests.
Parallel computers won't help us either, not being parallel
enough or even essentially different.
Are quantum algorithms the magical solution? You can
improve code as much by finding faster classical
algorithms. In any case, a mind should be adapted to the
more common environment of classical machines.

18
Mind is like breathing: physically complicated,
superficially trivial, and too important to trust you with.
Awareness of a thing is often a symptom of disease. Mind works well so far as you're oblivious to it. Hence my
tortured language, having to tease or drag into sight
naturally buried assumptions.

19

Reread philosophical puzzles in terms of their use to a mind. Example: react to the mysterious mind-body
dichotomy by asking what use is it to a mind to cluster
some sensations as physical and others as mental?

20

The first of my immodestly named Roberts laws of mind: don't die, unintentionally. Your clever feature is worthless if
the mind doesn't survive to enjoy it. More than action
threatens mind. Sensations can pour into a mind faster than
it can swallow them. Inferences can form unexpected loops.
The first laws of mind:

1. Don't die. Ensure you can continue to act.
2. Don't stall, being little better than death.
3. If an act fails, retry.
4. If an act fails, act differently.
5. But don't act in useless circles.
6. Do not lose valuable beliefs.
7. Doubt everything.
8. Minds die, so have more than one.

21

The Mind Project:

1. Define mind in formal language.
2. Test on machines. (The mindlessness of machines ensures completeness. You can't trust a mind-filled human to verify a definition of mind.)
3. Translate the proven definition into:
   – Lessons for men, composed by writers.
   – Designs for man's machines, built by engineers.
   – Models of natural minds, human and not, applied by scientists.

The project's rewards:

1. Best resolution of philosophical concerns.
2. Best understanding of all natural minds—cells, ourselves—within a framework of all mind.
3. Machines built from the best definitions, skipping the constraints of human minds.

The goal isn't only to know ideal mind but the common
possibilities of working minds.

22

Reorganize all subjects as branches of the study of mind.

Philosophy: The study of mind. Within it, metaphysics and epistemology as how minds make worlds, ethics and politics as the design of redundant cooperating minds.

Psychology: The study of human classes of mind.

Cybernetics: The study of negative feedback systems, mind in my sense, so merge it with philosophy.

Artificial intelligence: The construction of minds from mindless inorganic parts. An alias for philosophers who wanted military funding.

Neuroscience: The study of minds made of neurons.

Cognitive science: Coined as an alias for AI after it became an embarrassment. Merge with philosophy and AI.

Computer science: The study of algorithms and data structures for artificial minds in computers.

Game theory: The study of strategies for competing minds.

Control theory: Cybernetics again.

Multi-agent systems: The study of interacting minds.

So many redundant, scholar-accommodating specialties. Once the overlaps are pulled, all that remains of lower fields should be the technicalities of translating the universal laws of mind to the field's medium: electronics, code, neurons.
Philosophy, then artificial intelligence, then cognitive science. Philosophy became a con-man: make grand
claims, pocket funding, have little to show, then change its
name to conceal an again ruined reputation.

23

Climate change, peak oil, overpopulation—fads. All that matters is the number and power of cooperating minds. The
deep problem: how to replace squabbling apes with a
nation of strong minds.

The Taxonomy of Minds

What are the possibilities of mind? Minds, in my broad sense, may occur with and without learning, reason, a sense
of self, or other features. How can each feature vary?
Which features presume others? How to systematically
define all branches of mind?

Past classifications were too anthropocentric: stages of development from infant to adult, object persistence
without explaining objects.
I introduce a sort of Linnaean classification of the variety
of mind, a tree of kinds of minds, climbing from thermostat
to man, and later to kinds above man.

What use? With an existing mind, see its kind then deduce
the mind's powers, limits and means of control. When
building or improving a mind, the taxonomy exposes
prerequisites of common kinds of intelligence.

The lowest class of mind: one end, one binary (having two
states: true or false, 1 or 0) sense, one means with only one
intended effect. Example: a thermostat.

Classes of Mind
The parenthetical letters offer shorthands for basic mind
classes. Example: L-mind for a mind that learns from
experience.



[Figure: A basic taxonomy (hierarchical classification) of minds.]

Mind

Above all, a mind must see something of its universe, which may include what an observer would consider the
mind's self. An active mind bases its acts on these beliefs,
and a passive learning “mind” at least sees how they
change.
All but passive learning minds must act. Even if a mind
doesn't know the effects of an act, it knows when an end is
met.



Counter-examples: parts of the human brain when asleep,
spam filters. (Passive P-minds)

Choose Means (V)


Does it have redundant means to the same ends? How well
does it move between them? Counter-example: thermostat.

Mutate (M)
Can a mind naturally gain and lose new ideas in its
lifetime? Counter-example: a thermostat can only believe
one fixed idea of temperature.

Doubt (D)

Is it eventually free to lose some or all beliefs? Or is it wired to obey the implications of every sensation?

Sense Itself (I)

Does a mind have the senses to see the physical conditions of that mind?

Preserve Itself (A)

Does a mind also have the means to preserve or reproduce itself? Examples: all life, because a living thing is, in part, defined by making and preserving itself for a time.



Sense Minds (N)
Does a mind understand mind, at least of lower classes, and
how well does it apply that to itself, to others?

Sense Kin (K)


Can it recognize the redundant minds, or at least the bodies
of minds, that it was designed to cooperate with?

Learn (L)
Does the mind's behavior change from experience? Does it
learn associations? (LA-mind)

Feel (F)
We imagine that an equally intelligent machine would lack
our conscious experience. Examples: yourself, presumably
other humans.

Communicate (C)

Can it share beliefs with other minds?

Certain classes of mind can raise themselves to certain higher classes, or a mind can be in a class thanks to ideas formed and injected by another mind.



Are you above a problem or beneath it? Lower minds are
incapable of certain errors. The more powerful the mind,
the more kinds of problems it must defend against, and then
defend the defenses.



Formalities

A mind can't be static. It must have a changing set of beliefs. At the least, a belief in an unreached end. This set is
its universe. An idea that a mind can believe is a form: a
pattern of sensation, not things or objects which are more
complicated than a single form.
A thing exists to a mind no more or less than believed
forms describe. If two things have the same form to a mind,
then the mind sees only one thing.

Forms of forms. In the case of a thermostat, a mechanism in one position or another. Higher classes of mind require
trees of distinctions within distinctions.
A mind bothers to keep a distinction because its state—true
or false, up or down, light or dark—coincides with success
of an act. Perception as biography: a finite mind tends only
to see what serves it.

3
Essential beliefs: ends and means. In higher minds,
inferences. Each must fit into itself and every other, free to
form the endless loops and spirals of deeply intelligent
behavior.

A case of the value of recursion, of applying powerful ideas
to themselves:
– Find the patterns. Find the patterns in the
patterns. Learn how to find patterns. Learn how
to learn. Find the patterns of learning. Search
for patterns. Search for search methods.
– Write code that writes code.
– Judge the value of your values.
– Define the process of defining processes.
– A replicator that can replicate itself.
– Invent a machine that invents.
– The evolution of evolvability.
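
One item from the list above, write code that writes code, in miniature. A sketch; the generated adder is purely illustrative.

```python
def make_adder_source(n):
    # A program whose output is another program.
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
exec(make_adder_source(3), namespace)  # the written code becomes runnable
assert namespace["add_3"](4) == 7
```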

[Figure: Example forms.]

4
An idle proof:

1. Assumed fact: Every thing is unique. (Not necessarily true for a trivial mind or for any mind at low levels of sensation.)
2. Inference: If two things aren't the same, then they aren't equal, at least not for all uses.
3. Inferred fact: Every thing is unequal. Nothing is equal to anything else, or even to itself past an instant.
4. Inference: Two sets or groups of things are only equal if every element in one equals an element in the other.
5. Inferred fact: No groups are equal. Every thing or set of things is unequal. Nothing is the same but so far as the differences seem not worth knowing. Belief in equality is at best a useful provisional lie.

In principle, any thing or set can be equalized, can be turned into another, but then each thing has an unequal cost to become equal.

5
Kinds of realities for minds to sense and control: discrete vs. continuous, finite vs. infinite, opaque vs. transparent,
regular vs. random. The simplest assumption is that all
minds ultimately live in one Universe of infinite
dimensions each infinitely divisible.

6

Beliefs vs. engine. An engine moves a mind towards its goals. Beliefs define the goals, means, and state. Examples:
DNA vs. a cell that translates DNA to protein, a belief
database vs. a computer program that reads and updates the
database.
In a brain, beliefs are inseparable from the engine. In a
computer, an engine can be distinct and applied as easily to
one standardized set of beliefs as another.
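
A sketch of the split, with an illustrative belief set of my own: the engine knows nothing of temperature; it only matches ends against state and applies whatever means the beliefs name, so the same engine would serve another standardized belief set as easily.

```python
def heat(state):
    state["temp"] += 1.0

beliefs = {
    "ends":  [("temp", 20.0)],  # desired states
    "state": {"temp": 15.0},    # believed present state
    "means": {"temp": heat},    # ways to change the state
}

def engine(beliefs, steps=10):
    # A generic loop: read beliefs, act where an end is unmet.
    for _ in range(steps):
        for key, target in beliefs["ends"]:
            if beliefs["state"][key] < target:
                beliefs["means"][key](beliefs["state"])

engine(beliefs)
assert beliefs["state"]["temp"] >= 20.0
```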

[Figure: A sequential mind.]

7

Everything is everything. Every one, however briefly, is at times wise, foolish, bold, shy, evil and virtuous. You
distinguish things by their proportions. Sometimes a liar
will tell the truth and the honest lie. Are the ideas liar and
honest useless because they mistake the men for an instant?

8
Time. As a practical matter, a mind designed by a human
must presume time, but a simple mind's beliefs needn't
include the distinctions: past, present, future. It can live in
an eternal now.

Senses can lie about time. A sense may conceal gaps, failures or the delay it adds. A mind may allow it to lie so
well that you would even remember believing an idea
before you really believed it.

9
Sensing senses. A mind can have beliefs it finds to be
conditions of sets of beliefs. You can see without eyes. A
philosophical distinction: a posteriori, knowledge gained
through the senses, vs. a priori, knowledge gained without
the senses. Not that knowledge is really known to be
received through the senses; we merely find it useful to
imagine so. The idea of a sense—eye, ear—is an invention.
How does a mind discover a sense beyond what, if
anything, the mind did to make it?

10
How to prevent, detect and resolve the inevitable
corruption of beliefs? DNA examples: mutation, copy
errors.

11
Beliefs ranked:
1. Ends, inferences to ends. A mind can remake
anything but the knowledge of what it should
make.
2. Means to means, then more specific means.
3. Mere facts.

12
Unawareness of x vs. the untruth of x. Not-hot does not
equal cold. A mind can merely be not hot because it feels
no temperature. The exclusivity of hot and cold is a learned
negative suppressing inference between the two.

13
Inferences to inferences. An inference from x to y causes a
mind to believe y when it believes x. Inferences from and to
beliefs may be beliefs themselves. This preferable form
gives a mind some self-awareness of its thoughts.

14

Trees of binary inferences. With inferences to inferences, a mind can infer from “x and y” to z using a nested pair of
simple inferences—an inference from x to an inference
from y to z—instead of complicating the engine to support
inferences from more than one belief.
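
A sketch of such a tree, under one representational assumption of mine: an inference is itself a belief, a pair of premise and conclusion. The engine then only ever follows inferences from a single belief, yet "x and y implies z" remains expressible.

```python
def infer(beliefs):
    """Propagate to a fixed point. Any believed pair acts as a rule."""
    changed = True
    while changed:
        changed = False
        for belief in list(beliefs):
            if isinstance(belief, tuple):  # a believed inference
                premise, conclusion = belief
                if premise in beliefs and conclusion not in beliefs:
                    beliefs.add(conclusion)
                    changed = True
    return beliefs

# (x and y) -> z, curried as: x -> (y -> z).
beliefs = {"x", "y", ("x", ("y", "z"))}
assert "z" in infer(beliefs)
```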

15
Exhaust the basic permutations of the special forms.

Means to means: Allow a mind to expand its powers.

Goal to means: Allow a mind to know the need to expand its means.

Goal to goal: What use?

Means to goal: What use?

Goal to inference: The desire to know what, of some form, a mind can infer from a belief.

Means to inference: Allow an unconscious mechanism to produce complex inferences.

Inference to goal: If from a goal, captures a condition of an act, regardless of means. If not from a goal, captures a conditional end.

Inference to means: Allow a mind to perceive a conditional means. Differs from a means having conditions.

Inference to inference: Form complex inferences by combining simple ones. Equivalent to the logical AND operator.

Inference from goal: What use?

Inference from means: What use?

16

Human minds separate short-term memories from long-term. Is this distinction an inescapable feature of any mind
or does it only reflect a technical weakness of brain minds?
Long term memories may require formation of expensive
physical connections or investment in another optimization.
May any mind benefit from an investment in lasting
memories?

17
Forgetfulness. Most finite minds sense more beliefs than
they can hold. How to choose what to keep? One method: a
long-term bias that holds beliefs with consistent but sparse
use and a short-term bias that gives recent beliefs a chance.
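
A sketch of that method; the weights and time scales are illustrative guesses, not the book's. Each belief is scored by the better of a long-term bias (consistent use over its whole life) and a short-term bias (recency), and the lowest scored are dropped when over capacity.

```python
import time

def retention_score(belief, now=None):
    now = now or time.time()
    age = max(now - belief["born"], 1.0)
    long_term = belief["uses"] / (age / 86400.0)          # steady use per day
    short_term = 1.0 / (1.0 + now - belief["last_used"])  # recency
    return max(long_term, short_term)  # either bias can save a belief

def forget(beliefs, capacity):
    # Keep only what one bias or the other argues for.
    return sorted(beliefs, key=retention_score, reverse=True)[:capacity]
```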

18
Bandwidth. How many sensations can a mind handle per
second? How deeply? Can it reliably ignore more?

19
A pawn: Your x isn't real because it has fuzzy edges. The speaker, parroting a malicious script, presumes a level of
philosophical strictness applied only to ideas that she
dislikes. What next? Does dawn disprove the day? For a
mind in a non-trivial universe, almost everything has
unclear limits. The real question: how best to draw lines
and when to redraw?

20
Why doesn't a mind just delude itself into thinking that it
reached its ends? Whence a desire for truth towards
yourself? Especially when at bottom a non-trivial mind
constantly presents simplifications, lies. Why not accept a
faulty sensor or false beliefs? How to organize such
resistance? A partial answer: redundant senses.

21
On average, x is y. What use is this hedge: on average?
Every statement about things in the world has exceptions.
Even the exceptions have exceptions. Every statement is an
obligatory average, a claim that the exceptions aren't worth
keeping in mind.

22

How a mind groups forms, how it generalizes or categorizes, is unlimited. How to choose? In terms of the
mind's interests and what coincides with those. Is there an
objective categorization? One true for every mind? No, categories, abstractions exist to simplify each unique
mind's predictions.
I recall criticisms of the Dewey decimal system's
Eurocentric allocation of the higher numbers. A mind's
starting point: nothing exists, everything is the same. If a
difference seems to change the effects of our acts, then we
admit a distinction, a pair of categories and assign
sensations to them. We shouldn't be surprised by the
unrealism of politically compelled assumptions. Nor
surprised by the impracticality of any idea inferred from
them. If one took them seriously, the only correct
categorization of reality is x categories for x many infinite
objects, without hierarchy.

You could average the interests of present humans, but most humans read too little to deserve inclusion. Then of
most literate humans, updated as demographics change? An
engineering solution that dodges the politics: cluster objects
according to a machine-made model of each human's
interests. One downside: this may muffle discussion
because the categories the machine discovers may
correspond to no word or expression, though the machine
could confine itself to such categories. The top of your
taxonomy would summarize your interests.

Demolish dumb ideas by taking them seriously, not that their proposers meant to help us with them.

23

The idea of a problem, like any idea, is a simplification. An unsolvable problem may only seem so. Study the input more closely. Find an overlooked distinction that allows a
solution. Much of mind is the twin work of adding
distinctions, seeing again how two things differed, or
removing distinctions, seeing how two things are the same
for your use.

24

Appearance vs. reality. Only appearance is real. Reality is a mind's useful fiction.

25

A mind is no better than its senses.

To What End

Goal: a belief that causes a mind to act. Goals surpass simple A → B reflexes by defining only the result, not the
means, isolating what is wanted from how it is done. Any
act at any time could precede any result. The best mind is
free to try anything. A thing is mindless so far as it falls into
unchecked habit, neglects its ends and ignores the effects of
its acts. Ideally, a mind can suppress, doubt, forget, and
infer to and from a goal, like any belief.

End vs. waypoint. End: a goal not conditioned on a means to another goal. Waypoint: a believed condition of an act
that may move a mind to an end. A mind forgets a waypoint
when the end is met. The last case: a goal conditioned on
another goal, but not as a condition of a particular means.

3
Equilibrium: the state in which a mind needn't act, when all
its ends are met. The material reflection of a goal is
whatever thing, when changed, causes a mind's equilibrium
to change. Example: a thermostat's coil.

[Figure: The endless and unintended effects of an act.]

4
Plan: a directed web of goals conditioned on super-goals.
Plans can emerge from inferences from goals to conditions
of those goals, or from the particular preconditions of a
means.

5
A mind must rank and re-rank goals. The parallelism of a
human brain spares it from the scheduling done by a
software mind running on a relatively serial computer. But
a brain, finite, like any mind, must still sort the allocation
of neurons, blood, oxygen, energy.

In what order then? At first, favor recent goals and those to which progress was recently made. Regardless of the initial
bias, when a mind repeatedly fails to reach favored goals, it
must become free to choose goals at random. If some of the
goals must be satisfied in a sequence unknown to the mind,
choosing goals at random ensures the mind will eventually
stumble on the complete solution.
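
A sketch of this ordering; the patience threshold is an illustrative choice of mine. Recency and recent progress decide at first, and repeated failure frees the mind to choose at random.

```python
import random

def choose_goal(goals, failures, patience=5):
    """goals: dicts with 'recency' and 'progress' timestamps."""
    if failures >= patience:
        # Random choice eventually stumbles on any required sequence.
        return random.choice(goals)
    return max(goals, key=lambda g: max(g["recency"], g["progress"]))
```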

6
A mind never knows every detail of what it wants. I don't
know the official specification of a twenty dollar bill, but I
do have a good idea of an acceptable one, and while a more
precise idea may expand its acceptance, the chance is so
slim that it isn't worth the trouble. Thoughts, details,
distinctions are never free.

7
Consider a mindless object: a shower fixture. In this case,
man-made, but that makes no difference. Mindless, brittle
and annoying—it routinely burns and freezes you. Can a mind improve it? A common fixture knows its state as
maximum flows of hot and cold water. Add valve actuators
and senses of temperature and pressure. The fixture's mind
continuously adjusts the low-level water flows to ensure the
desired temperature and total pressure.
Better, access to your subjective sensation of temperature.
Best, if it knew that the true use was to be clean, assuming
it had any better means to cause that. A mind is as useful to
you as the level of its ends nears yours, the higher the
better.

8
When a goal is sensed, a mind should initially suppress
pursuit of certain other goals on the assumption that when
the new goal is reached the others become academic. If the
mind can't promptly reach the new goal, it should begin
interleaving pursuit of the other goals. Example: x and y are
believed mutually exclusive, so a goal to y suppresses a
goal to an inference from x to z.

Values: goods, bads, evils, commandments. Example: Persistence is good. You can define the word, in the case of
a good, as a common condition of a mind reaching its ends,
or as the opposite if an evil. By setting a good as desirable
in itself, a mind's true ends can benefit from the value
without initially understanding how.

Personal vs. social values. Some values are good for a mind alone. Social values help a set of redundant minds
cooperate to reach their ends. You can expect a society to
impose both kinds of values on its members. A danger:
what if a society's discovery and promotion of values was
hijacked?

Means to Ends

First principle of action: separate every act from sensation of the desired effect. No presumption of success. A belief
about the effect of an act is only a hope. An act is not its
effects. Action is opaque. Effect is endless and uncertain.

Means: a belief that a mind applies to reach goals. Examples: a motor neuron to activate, code to execute, not
the believed conditions of an act.

3
Separate belief in a means from each belief about its
possible effects. The association between an act and an
effect is never certain. A mind must be free to individually
sense and forget such beliefs. A believed effect of an act, as
a prediction, can be unconditioned or can be inferred from
other beliefs.
Strictly, certain pairs of act and effect are improbable to us,
thanks to our high minds, long experience and complex
models. At bottom, a mind cannot make subtle distinctions
about probability. It must allow any binding of effect to act
to be made or doubted.

We observe only coincidence, not cause. Our models of the world might, in some cases, be very reliable, yet we still
routinely find errors in them. Make the simple honest base
assumption that we can never know for certain where the
errors in our models are, so anything can cause anything,
with whatever links between.

Why separate action from sensation? A simple act could itself cause a belief—bad engineering. You may have
already reached the goal for which you would act. Separate
senses from means to save acts and free the mind to
discover other means.

5
Means to senses. The senses that independently detect the
effects of acts are ideally the effects of earlier acts. A sense
should have no special status to a mind. A sense is merely a
believed condition of the perception of the expected effect
of an act. Senses are only effects of means. A mind could
easily miss that a thing is a sense, that it is a condition of
belief in a class of beliefs. Senses are discovered. What use
to interpret a thing as a sense?

6
You never untie the same knot twice. Ignoring minds with
few and discrete senses. All acts are creative because every
moment is unique. Even selecting what to ignore, to make moments compare and the past apply, is a creative choice.

[Figure: A means with a condition of a sense.]

Thoughts are not distinguished from matter by lack of
effect. Thoughts, like every thing, are part of the endless
web of effects, whether you see the links or not. Thinking
alone affects neurons, oxygen consumption, or CPU heat.
Mere arithmetic can destroy a poorly designed computer by
overheating it.

8
Mental acts. Not all acts are physical. Adding two numbers
in your head is an act.

9
The most general means are the most interesting. Example:
one way to effect anything is to ask another mind to do it.
This means is tricky to sequence. A mind that resorts to it
too early will annoy you with requests. A mind that tries it
too late wastes time trying to succeed alone.

10

Imperative and functional computer languages prejudice one effect of every statement: the return value. Example: 4
as the value of 2 + 2. Pure functional languages outlaw any
other explicit effect. Wise to favor purity but functional
programming purifies a misconception. Instead of
pretending that every statement shall only have one known
effect, it should admit that every statement has infinite
effects, far beyond those intended by the programmer. Only teleological (goal oriented) programming recognizes this
philosophical truth.
Imperative:

1. Try to move forward one step.
2. Try to turn right.
3. Try to move forward one step.

Teleological (see the sketch after this list):

‒ You want to be at 1, 1.
‒ You have a sense of position, now 0, 0.
‒ You have a sense of orientation, now North.
‒ You can try to turn.
‒ You can try to move forward.
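
A sketch of the teleological framing in ordinary code; the grid, position, and heading are illustrative assumptions of mine. The mind holds an end, senses, and means, and the loop that connects them encodes no route.

```python
import random

goal = (1, 1)
state = {"pos": (0, 0), "heading": (0, 1)}  # senses of position and orientation

def turn():
    dx, dy = state["heading"]
    state["heading"] = (dy, -dx)  # rotate right

def move_forward():
    x, y = state["pos"]
    dx, dy = state["heading"]
    state["pos"] = (x + dx, y + dy)

means = [turn, move_forward]

# No step sequence is prescribed; any act may or may not serve the end.
for _ in range(10_000):
    if state["pos"] == goal:
        break
    random.choice(means)()
```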


Shallow differences of syntax and punctuation—little more
separate the countless weak imperative languages that
programmers debate and rank.

11

Every mind's situation is that under certain conditions, certain acts tend to precede approximate effects. Ultimately,
we do not know why, though we can indefinitely elaborate
our models in whatever direction seems most promising.

12

At any time any act could precede any effect. No mind can
ever exhaust the possibilities for action. It can only
discover, plan, and rank them.

13

Means to means. A mind could originally believe in only one means with which the mind would make the next layer
of means. A wise mind in a powerful body could begin with
only a bare mind believing in a single means.

14

Passive mind: a mind without means, or the ends to give them purpose, or both. Strictly, not a mind at all. Any use?
Maybe to isolate learning from an active mind. A passive
mind that learns associations could have ends to focus its
attention. A mind with means but no ends could act
randomly to build a general purpose model of its world.

A passive mind would not even apply itself to making senses or communicating what it learned. This mute mind
must be transparent to whatever uses it.

15
Throttling. No act's intended effect is immediate. How long
should a mind wait? Not as long as it takes. It may take
forever. Yet any length is more intelligent than that or zero.
For one act, the gap is a millisecond, for another, an hour,
in both cases, never precisely the same again. How to learn the lengths?

At least, as a moving average of all a means' acts. Better, inferred from a single act's purpose and preconditions.
Best, inferred from any relevant belief. Anticipating an act's
delay could be another act with a prediction as its effect. If
the mind waits too long, it wastes time waiting on a
hopeless act. If the mind retries too soon, it may overload
the means.
The time when an act starts vs. the time when the act itself
ends vs. the time when the intended effect is sensed.
Continuous vs. instantaneous acts. Acts, like a thermostat
keeping the furnace on, that have an increasing effect vs.
acts that quickly end and have a fixed effect.

How can a mind enforce a throttle? At worst, entirely in its unconscious, in its engine. Better done, as usual, in terms of
beliefs. An engine could sense its own use of a means.
Then the mind could infer from such beliefs a temporary
suppression of belief in any matching means.
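
A sketch of learning the length as a moving average; the smoothing factor and the retry multiple are illustrative choices, not the book's.

```python
class Throttle:
    def __init__(self, initial_wait=1.0, alpha=0.2):
        self.expected = initial_wait  # any length beats zero or forever
        self.alpha = alpha

    def observe(self, delay):
        # Exponential moving average over the means' observed delays.
        self.expected += self.alpha * (delay - self.expected)

    def wait_before_retry(self):
        # Too soon overloads the means; too late waits on a hopeless act.
        return 2.0 * self.expected
```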

16
Act sequencing. How to know what act to try next? One
rule: try specific means before general ones. When to start
trying another means? Or the same means in a different
way? How best to interleave retrying multiple means? Not
all minds have such a sequencer. Example: a thermostat has
only one means to its end.
Any answer, no matter how often wrong, is an immense
leap over a system so simple that it doesn't need an answer, that never retries or varies an act.

17

Parametric means: a means to more than one intended effect. How does a mind apply a means that can have
different effects? How do the particulars of the goal
connect to the act?

18

Basic problems of action.

Broken actuator: How to recognize that a means no longer
has the expected effect? Maybe it only fails to have certain
effects under certain conditions. How to reset a frozen
actuator, especially in a sequential mind medium?
Delayed effect: The time between the act and the intended
effect exceeds the mind's expectation.
Overshoot, oscillation: An act may have more than the
desired effect.

19
Don't die. Any act can have any effect at any time. This
includes the mind's death. A mind inevitably uses means in
conditions unseen by its maker. An active mind must
barricade itself from the dangerous side-effects of every
act. Examples of software errors that are fatal if uncaught
or unstopped: an exception that would end the thread, a process that exhausts the CPU or memory.

Engineers typically limit their machines to make errors unlikely. A non-trivial mind's use is to act creatively. A
mind maker can't contrive a safe path for a mind meant to
find new paths. Don't avoid errors. Attack error itself.
Making a mind robust involves challenges caused by the
nature of mind. Others by the technicalities of a mind's
medium, which I leave to the expertise of different readers.

20
Parallelism as a prerequisite of high speed. To keep pace
with other minds, a mind must commit acts, including acts
of thought, in parallel, not one after the other.

Intelligence as a prerequisite of parallelism. Mind isn't cheap. Compared to blind action, the overhead of sensation,
inference and selection is immense. But the awareness of
the conditions of action and their exclusivity can repay the
investment.
Every act has conditions of yielding particular effects. A
mind can safely commit two acts in parallel only if none of
their conditions, and the conditions of the conditions, are
exclusive. So only a mind that learns negative associations
can discover what acts it can commit at once. Present
computer programs depend on the mind of a human
programmer to see the independence of acts.

Noticing the relationships between conditions exposes another means for a mind to conceal its overhead: first
pursue conditions shared by multiple acts.
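
A sketch of the parallel-commit test; the excludes set stands in for learned negative associations, and the robot-gripper names are illustrative. Two acts may run at once only when no condition of one is known to exclude a condition of the other.

```python
def can_parallelize(act_a, act_b, excludes):
    """excludes: set of frozensets of mutually exclusive conditions."""
    for ca in act_a["conditions"]:
        for cb in act_b["conditions"]:
            if frozenset((ca, cb)) in excludes:
                return False
    return True

excludes = {frozenset(("gripper open", "gripper closed"))}
pick = {"conditions": {"gripper open"}}
hold = {"conditions": {"gripper closed"}}
assert not can_parallelize(pick, hold, excludes)
```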

21
Telling of errors in means as a means. Preconditions: senses
of errors and language to discuss them.
Scenario: a mindless machine fails.

1. You cause the machine to attempt x.
2. x fails and the machine at best manages to show an error message.
3. The message gives too little information. The machine offers no way to ask for more. You guess.
4. You again tell the machine to do x.

Contrast: an intelligent machine interacts.

1. You give the machine a goal to x.
2. It repeatedly tries to cause x using means a.
3. While retrying a, the machine tries means b, which happens to involve telling you of errors that it associated with a.
4. You ask for a detail.
5. You correct whatever caused a to fail.
6. The machine successfully retries a before you can tell it of the change.

22

Act to know the world vs. act to change the world. How
and why to distinguish between an act to cause a sense,
which will change the acting mind's beliefs, and an act to
change the world, which also changes a mind's beliefs?

23
Parts of an act (see the sketch after this list):

– Expected effect.
– Means: how a mind initiates effects.
– Conditions: what must be so for the means to have the intended effect. Kinds:
  – Preconditions: what must be when a means is applied.
  – Co-conditions: what must be for the intended effect to occur and persist.
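
The same parts written as a data structure, a minimal sketch with illustrative field names and types.

```python
from dataclasses import dataclass, field

@dataclass
class Act:
    expected_effect: str                 # what the mind hopes follows
    means: str                           # how the mind initiates the effect
    preconditions: set = field(default_factory=set)  # true when applied
    coconditions: set = field(default_factory=set)   # true while the effect persists

open_door = Act(
    expected_effect="door open",
    means="push handle",
    preconditions={"door unlocked", "within reach"},
    coconditions={"hinge intact"},
)
```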

I: Means as Ends

1
Selfishness: one of Nature's greatest inventions. What use
to anyone is a thing without the will and means to preserve
itself as a means? All life is in the class of self-making
minds.

2
How to well define self? What use is this distinction? When
would a mind's act be more effective because it rightly
judged a thing as its self or not? What is the simplest case
of the use of a self? Is a self only relative to other minds?
Like most of our words, our idea of self is mostly
unconscious and opaque. We are looking for a useful
conscious precise definition that fits our intuitions.
Human selves are a tricky place to start. Our bodies hold so
many minds. First consider a thermostat. What might its
self be? Not the furnace. Start with giving it ends to its
present means, its present body: coil, furnace control. To
best know the self, study a purely selfish mind.
Or consider yourself. If your brain irreversibly stops
thinking, do you exist? No. If your brain runs, but your
beliefs, in your cells or in your brain change, especially
your ends, are you likely to be the same person?
Your essential self is your mind, or your ultimate minds,
their beliefs and the beliefs of the minds made to serve them. Everything else is an expression of them or the
means to them.

3
Levels of self-knowledge.
1. Selfless: A mind without even the senses, or
means to such, to perceive anything you would
consider its self.
2. Self-aware: Lacks goals to its self-preservation,
as an end or a means, because it wasn't born
with, or didn't learn that, those sensations are
conditions of any act.
3. Self-interested: Its self is merely a means to
extrinsic ends. Even with this intention, a mind
may lack the means to preserve itself.
4. Selfish: It has no ends but to its means. It exists
purely for its own sake.

4
Any mind that learns associations tends to become selfish,
at least as a means. It will discover that things we would
consider to be parts of its body, though originally extrinsic
to it, are conditions of most of its acts. Then a desire to
duplicate this self-thing, to reproduce, for redundancy and
power.

5
Death. In a strict sense, you die in every moment because
you constantly change. In a more practical sense, there is a
useful pattern that persists for a worthy time then quickly
halts. In either sense, your brain mind changes, but it
remains a means to deep fixed minds.
What use? To replace a mind not worth repairing or
upgrading in place. To end a malignant mind, untied by
accident or malice from its designer's ends. Why not let the
mind live? Competition for finite resources: food, mates,
CPU time.
Death defined: not the loss of feeling but the unwilling loss
of beliefs. Especially the beliefs that the mind can't easily
reproduce from those remaining. Or an irreversible end to
the mind process that pursues its ends.
Every mind's body is falling apart. True death isn't the loss
of the body but of its design in a form that a still active
mind can and will read. Oddly, genome minds can reliably
reproduce their beliefs through cell division, but genes gave
brain minds no direct means to duplicate their own beliefs.

6
Suicide. Why make a mind want its own death? Kindred
minds can see and kill a malignant mind. A self-sensing
mind, with only the same mechanism, would see its own
corruption and use whatever health remains to kill itself.

7
Mind defense. Every feature exposes a mind to new threats.
Example: autogenocide. What if a mind was falsely led to
believe that it is malignant? That the preservation of its self
or type was evil or otherwise painful? Efficient delegation
of destruction to the victim. Could this only occur as an
attack by competing minds? Or is there a use to a mind
maker?

8
In a purely selfish mind, an eye is a means to an eye. An
eye helps its body to protect and feed the eye. But the rest
of the body might find a better means than the eye. The
genome mind would not immediately recognize that the eye
is superfluous. The end to it may only wither. In an
animal, an eye isn't a means, it is part of being what it is.
How mutable is a purely selfish mind? Would it change?
Willingly?

9
Senses of self and their use:
1. The beliefs of a mind and the minds that serve
it.
2. Awareness of the proximate physical conditions
of most acts. What an observer would consider
a mind's self ought to be. The weakest sense of
self since the mind would be perfectly happy to
have its entire body replaced with any other
that's as effective. This sense also blurs, radiating from the center of mind though
strongest at its body.
3. In a stronger current-means-as-ends sense. For
you, your body is not merely a means to
extrinsic ends. It largely is a self-perpetuating
end.
4. To detect a mind's own corruption or deviation
from one's inferred role.
5. Abstracted to identify kindred minds.
6. To improve imitation by favoring minds most
similar to yourself.
Identifying kindred minds and detecting a mind's own
corruption could share the same method. The two differ
only in which mind they're applied to.
Note that most of these senses of self can occur without the
mind living in a society of other minds.
Many of the problems with seeing your self or other selves
are just cases of the common problem of clearly seeing
anything.
Knowing all these causes, a mind might plot to change its
sense of self.

10
When a believer in embodiment says that a mind must have
a body to become intelligent, he should mean that a mind
must be aware of the proximate physical conditions of its
mind. Or in the sense of intelligent to us, it must be in our world and sense and act on our shared world in terms
analogous to ours.

11
Reproduction: A mind causing another mind with similar
beliefs. A mind needn't know in scientific detail how to
reproduce itself. Merely that certain acts tend to precede
sight of a similar mind. In a trivial case, a machine mind
could reproduce itself by asking a human to buy a computer
and copy its code to the new machine. The only distinction
is that its reproduction has more dependencies—humans,
computer, factories—than ours—air, water, other humans.

12
With self better defined, what can selfish and altruistic
mean? At bottom, every act is selfish, made to serve only
the mind's ends or emotions. Example: pleasure in generous
feelings. You could dismiss this sense as trivial, but some
humans do seem to misunderstand it.

Change

1
Kinds of learning in the broadest sense of ways a mind may
change from experience.
Remember: Simply the retention of any belief beyond an
instant. A common computer language tends to lose every
computation's result because the language has no way to
know the conditions of a result's truth. Pure functional
languages cheat by contriving that all results are
unconditional.
Mutate: Sense new beliefs, not limited to the belief and
suppression of a few fixed ideas.
Habituate: A measure of one idea. Examples: overlook the
useless flux of a belief, track the general value of a means.
Associate: A measure between ideas. Associations may run
both ways, not distinguishing cause from effect.
Are other kinds of learning possible?
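
A sketch of the last kind, associating; the names and threshold are illustrative choices of mine. An undirected co-occurrence count between ideas, which does not yet distinguish cause from effect.

```python
from collections import Counter
from itertools import combinations

class Associator:
    def __init__(self):
        self.counts = Counter()

    def observe(self, sensations):
        # Strengthen the tie between every pair seen in one moment.
        for a, b in combinations(sorted(sensations), 2):
            self.counts[(a, b)] += 1

    def associated(self, a, b, threshold=3):
        return self.counts[tuple(sorted((a, b)))] >= threshold

mind = Associator()
for _ in range(3):
    mind.observe({"switch up", "light on"})
assert mind.associated("light on", "switch up")
```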

2
Cause vs. effect. Empirical causes are unprovable. All a
mind can see are associations. The intended effect of an act,
one amongst infinite effects, has no status outside the
acting mind.
Empirical vs. logical causality. Empirical cause: a mind's
belief in what must exist for another thing to exist. Logical cause: a thing's imagined parts.
Every thing has infinite causes but every mind is finite. A
mind can only afford to know the causes that are likely to
need the mind's action and that are within the mind's power.
What caused x? What precisely might we then mean by this
question? If we wanted to end x, then the answer would be
a state whose disbelief coincides with disbelief in x.
Is there an alternative to cause and effect?

3
Habitual blindness. The activity of a mind's senses could
easily exceed the mind's power to process. Pursuing every
inference from a sensation, and every inference from what's
inferred, costs a mind energy or time. In a mind simulated
sequentially, finite sensation queues habituate, losing
sensations that threaten to bury the mind.
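
Read as code, under the assumption of a fixed capacity and
a count of past sensations; the policy of losing the most
familiar sensations first is one choice among several:

    from collections import deque, Counter

    class SensationQueue:
        # A finite queue for a sequentially simulated mind. When full,
        # it habituates: the most familiar sensations are lost first.
        def __init__(self, capacity=100):
            self.capacity = capacity
            self.queue = deque()
            self.seen = Counter()

        def push(self, sensation):
            self.seen[sensation] += 1
            if len(self.queue) >= self.capacity:
                # Lose the queued sensation seen most often, so a flood
                # of familiar input cannot bury the mind.
                dullest = max(self.queue, key=lambda s: self.seen[s])
                self.queue.remove(dullest)
            self.queue.append(sensation)

        def pop(self):
            return self.queue.popleft() if self.queue else None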

4
Contemporary intelligence research overrates learning.
Minds that can't learn remain immensely useful and non-
trivial to build well.

5
Why sleep? A need so large and dangerous must be
important. Empirically, sleep seems defined by the isolation
of the brain. The body is paralyzed and the senses closed
while the brain works. Without new sensations, the brain
could only process its memories—learn, model, simulate
experiments.
A human brain in sleep becomes a passive mind preparing
to better act when woken. Why can't this be done when
awake? Is this a universal limitation of mind or a technical
limitation of brain minds?

6
Present machine minds are largely either trivial, like speed
governors, incapable of learning associations, or passive,
like spam filters. An active mind that learns associations
will not merely learn to infer plain facts but to infer action
causing goals. This combination introduces interesting
problems. Example: how valid is an inference drawn from
sensations likely caused by the mind's own acts? It also
reveals the priority of strength over learning: the odd acts
of a mind that learns associations are even more
dangerously unpredictable.

7
You shouldn't generalize. What might this mean? Is this
advice well thought and well intended? What might be the
alternative to generalizing? To applying memories of past
things to those similar in the present.
A mind in all but the dullest universes must generalize to
see associations. Without ignoring details, any association
would be so specific that it could never reoccur.

A mind must not overgeneralize, missing important
exceptions. Neither can it fail to generalize, never learning,
forever repeating the same mistakes. How to know when to
do which? How to know how long to spend deciding? How
to know how long to spend deciding how to decide?
Take the most sensitive object: humans. TV and self-
interest cause humans to tell each other to judge each
person alone, as though a man is an atom, unchanging and
indivisible. Is a single man qualitatively more real than a
set of men? Isn't it a horrible prejudice to judge
individually, to presume a man's behavior based on how
another with the same name and a similar face behaved
yesterday?
If a dog bites me, can't I strike more than its fangs?
Why not judge a man again every time you meet? As
though he were a stranger. Or every minute? Wouldn't this
justly recognize the fact that a man can change at any time?
Reductio ad absurdum. A mind balances between prejudice
and judgment. Any absolute statement about what level to
prejudge at will inevitably in some cases be mistaken, be a
prejudice. Every non-trivial idea is a divisible bundle of
impressions over space, time or both.
Overlooking differences, emphasizing similarities, has its
political use but don't overlook the cost of pretending to be
stupid.

8
Theory vs. action. Only action is real. Theory is prediction
and merely improves the order of experiments, of acts.

9
Optimize vs. anticipate. Instead of laboring to improve one
method from N² to N steps, build a mind that learns to
anticipate the need for any method's results. What matter if
an algorithm and input take an hour to finish when you
know of the need more than an hour before it occurs? A
mind that anticipates needs is the universal optimization.
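
A sketch of anticipation, assuming the mind has already
learned a signal that precedes the need by more than the
method's running time; both names are illustrative:

    import threading

    class Anticipator:
        # Start a slow method as soon as its need is predicted, so the
        # result is ready before the need arrives.
        def __init__(self, slow_method, lead_signal):
            self.slow_method = slow_method   # may take an hour to finish
            self.lead_signal = lead_signal   # learned predictor of the need
            self.result = None

        def on_sensation(self, sensation):
            if self.lead_signal(sensation):  # need is still an hour away
                threading.Thread(target=self._precompute).start()

        def _precompute(self):
            self.result = self.slow_method()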

10
Tabula rasa. Impossible in an opaque mind. For a blank
mind to learn, it must have senses, which presume the
knowledge to build them and the forms they impose on
their input.

11
Certain effects tend to follow certain actions under certain
conditions. Science is merely the formalization and
institutionalization of the associative learning method in the
unconscious human mind.

12
Can a parrot learn to reach ends? Is mind better imagined
as a case of learning, not learning as a case of mind?

13

Learning: form vs. method. First define how a mind keeps
learned knowledge. Prove that the form can hold the
desired behavior. Last define the learning method that fills
the form.

14

Recall feeling sad. You wished to feel otherwise. In that
state, you had feelings not felt when happy. Your mind
leapt to the belief that the odd feelings caused your sadness.
Was it right? If you changed them, would you become
happy? Or were those sad facts conscious because you
were sad? Are they causes or effects?

15

Human minds leap to judgment. Flip a fair coin. It can
easily show five heads in a row. But show a human five
trials of anything unfamiliar and he will judge it. Even a
scientific trial, of statistically significant length and
difference, is uncertain. Your experience can always be a
fluke. The human quickness to judgment is individually
understandable when you can't divide the cost of an
experiment across a scientific study's million readers.

16

Cause vs. coincidence. What divides a cause from a
correlation? Action. If to a mind, acting to cause x leads to
sensation of y, then x causes y. It is only tricky to divide the
two when you try to do so from passive theory.
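
The division in code: one counter filled passively, one
filled only by acting. The world object and the 0.9
threshold are assumptions of the sketch:

    class CausalLearner:
        # x causes y, to this mind, only if deliberately making x true
        # tends to be followed by sensing y.
        def __init__(self):
            self.co_seen = 0    # x and y merely observed together
            self.acted = 0      # times the mind acted to cause x
            self.followed = 0   # times y followed that act

        def observe(self, x, y):
            if x and y:
                self.co_seen += 1   # correlation only; proves nothing here

        def act_to_cause_x(self, world):
            self.acted += 1
            world.make('x')         # hypothetical effector
            if world.sense('y'):    # hypothetical sense
                self.followed += 1

        def believes_x_causes_y(self):
            return self.acted > 0 and self.followed / self.acted > 0.9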

17
Not only how best to learn, but when to learn? Learning
isn't free. Animal brains limit some kinds of learning to
childhood. Ideally, learn when to learn. Or make learning a
means.

18
Methods of making and unmaking associations:
cooccurrence, theory, action, pain.

19
How a mind can gain experiences to learn from.
Passively learn: learn without acting.
Actively learn: learn from acts with other intended effects.
Experiment: learn from acts taken with no intent but to
learn for future use.

20

Teaching: a mind improving a mind in terms of the student
mind's ends and indirectly, not injecting goals and
inferences. The student mind needn't be conscious of what
it is learning, as in the case of physical training, or even
know that it is learning at all.
Only a mind that learns, in the sense of changing from
experience, can be taught. A teacher's method is determined
by the kinds of change that the student mind can learn. To
train, the teacher must have a means to cause the acts that
he wants to reinforce: injection, the power and desire to
imitate.
The perfect teacher. Its ends are your ends. For it, teaching
you is merely a means to your common ends. Teaching
teleologically defined:

1. Goal: to ensure that a mind knows a, b, c.
2. First means: test the student to know what it knows.
3. Present state: the student knows a, b.
4. Other means of seeing that the student knows c:
tell, train, …
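
The same definition as a loop, with the testing and the
means left as stubs for whatever the student mind can
actually learn; a sketch, not a fixed method:

    def teach(student, goal_beliefs, test, means_for):
        # Teleological teaching: act until the student knows a, b, c.
        # test(student) reports what it already knows; means_for(belief)
        # returns an act (tell, train, ...) that may cause that belief.
        while True:
            known = test(student)             # first means: test the student
            missing = goal_beliefs - known    # present state: knows a, b
            if not missing:
                return                        # the goal is reached
            belief = missing.pop()            # e.g. c
            means_for(belief)(student)        # tell, train, ...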

For one mind to learn from another, both minds must sense
and act on levels of the subject that are at least analogous to
each other. Example: for a mind in a computer to learn
from you, it must see the display and sense use of the
keyboard and mouse.
Order of subjects. First teach a mind to master its
immediate environment, the most urgent conditions of its
survival. For a mind in a computer, don't start its schooling
with chess, stock picking, or speech recognition, but with
freeing drive space and terminating runaway processes.

How easily can a mind master its environment? A computer
mind's handicap: from nothing it must master a complex
product of evolution and culture.

Strings

1
How do ends begin? A mind has no use but to find the
chain of waypoints to an end. What designed our ends?
How can a mind give ends to the minds it finds or makes?

2
From the total uncertainty of action and effect, it follows
that no mind can do what will please it but only what it
believes will.

A puppet objects: I should be free to do whatever I want.
What might this mean? That it wants to believe it is free
from influence? Freedom to what? Be blind to
manipulation by other minds? The more intelligent a mind,
the more it knows its lack of freedom, the better it sees the
causes outside its self of its own acts.

How does the puppet know what it wants? More precisely,
what caused its beliefs about the conditions of its
happiness? How accurate are these beliefs? How well do its
wants fit? Inborn or learned? How mutable is each?

Mind m believes in goal g. m causes mind n to believe in g
as an end. Coercion? Or m causes n to falsely believe g is a
waypoint to one of n's authentic unmet ends. Neither case
involves torture or threats.

5
A mind's first ends are best gained through injection:
formalized and directly imposed by another mind,
evolution or chance.

An efficient organism uses its minds to make senses, so the
goals to those senses cannot be sensed through them or
inferred from any other belief sensed by them. A mind
maker—evolution, engineer—laboriously defines the first
ends using the mind's inner terms, injecting these ends until
the mind gains the senses and semantics to be higher led.

Example: You can't tell a mind without ears to make ears.
You could give it ears yourself, but that is a cheat. If the
mind made another sense through which you can
conveniently communicate a goal to ears, then use that.
Otherwise, express the goal in the mind's true terms—
genes, neuron connections, database records—and add the
belief, with or without help from the mind's engine.

6
I don't want to have to know what I want, much less in
terms of the alien technicalities of another mind's senses.
Your stomach doesn't know how to cook, but it does know
how to pain a mind that knows.

In an animal's nervous system, pain is not the over-
activation of a sense but has dedicated sensory cells. Yet
many misunderstand pain as only the feeling of an unmet
end. Odd that when in pain, I often have no idea how to
relieve it. Does the feeling in my stomach mean I'm nervous
or hungry? Genetically coded reflexes do handle simple
cases. Skin burning, withdraw limb. Beyond these, my
brain mind must find a cause and solution. Pain is in part an
unreached end but one distinguished by its slight or zero
definition and the initial lack of a known means to it.

A model of pain.

If each pain is tied to a particular end, then to your mind it
is inaccessible or useless. An organism could structure pain
as a distinct sensation, like blue, then drive a mind through
one simple end towards the absence of pain. Any concrete
thought—hand in a fire—becomes painful only through a
learned association with pain.

The controllers of a mind's pain and pleasure are really
saying, I want the world to be other than it is; I don't know
what you must do but I'll be quiet when you've done it.
Emotions have little use outside a mind that can learn
associations.
Pleasure and pain allow a mindless organ or a separate
mind to apply a mind without telling it anything particular
—a beautiful extreme of black-box engineering. Nature
organized animal bodies as organs with pain lines to a
general purpose learning mind.
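
A sketch of that organization: a mindless organ that owns
the goal, and a learning mind that feels only a pain line.
Every name here is illustrative:

    class Stomach:
        # A mindless organ. It knows nothing of food or cooking, only
        # whether its condition is met, and pulls a pain line when not.
        def __init__(self):
            self.empty = True
        def pain(self):
            return 1.0 if self.empty else 0.0

    class LearningMind:
        # Sees pain only as a distinct sensation, like blue, and is
        # driven by one simple end: the absence of pain.
        def __init__(self, organs, acts):
            self.organs, self.acts = organs, acts
            self.relief = {}    # learned: which act ended how much pain

        def step(self):
            before = sum(o.pain() for o in self.organs)
            if before == 0:
                return          # the controllers are quiet
            act = max(self.acts, key=lambda a: self.relief.get(a, 0))
            act()               # a real mind would also experiment
            after = sum(o.pain() for o in self.organs)
            self.relief[act] = self.relief.get(act, 0) + (before - after)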

The pain of hunger could be as simple as a line from the
stomach to the brain. When the stomach sends a signal, the
mind senses pain. Other pains are inferred by the mind.
Example: a literate mind angered by a written insult.

A mind could have inborn beliefs about how to end pain in
common cases. A mind that can't learn would at least have
the value of persistently applying that fixed knowledge. A
learning mind could discover new means of ending pain
that the genetic minds in organs may take eons of evolution
to discover.

A non-associative mind can barely use this pain model.
Incapable of learning the specific solutions to different
kinds of pain, it would experiment with every pain every
time, at best learning general preferences for some means.
An associative mind better uses pain.
If pain and pleasure alone are atomic sensations, explain
the complex sensations of emotions. Humans partly
distinguish pains by the coinciding feelings of the reflexes
evolved to end the pain without a higher mind's help. What
makes fear fearful to your mind is the simple sensation of
pain plus the accidental feeling of your unconscious
systems preparing you to run or fight. After a few fearful
experiences, a mind generalizes the common sensations
into an idea associated with the word fear. Emotions, the
common human forms of pain and joy, are simply the
results of the human mind clustering different kinds of pain
and their associations.

7
A moralist: It is vulgar for a mind to pursue only pleasure.
So you won't do what pleases you because that displeases
you? A goal to avoid reaching goals, including itself. Why
you? A goal to avoid reaching goals, including itself. Why
would a mind fail to see this loop? What causes this
misunderstanding?
The looseness of words. I mean pleasure in the broadest
sense—charity, mercy, discovery—not only lower physical
sensations. One pleasure disapproving of another merely
exposes their rank, their relative levels of power over a
mind.

8
Does pleasure entail pain? Are pleasure and pain a hopeless
cycle that must one day sum to zero? No sensation
absolutely compels any other, but minds tend to habituate,
to prioritize sensations, so any one feeling dulls. A mind
may predict a lifetime of immutably more pain than
pleasure but this reflects only a particular mind's mismatch
of goals and power.

9
Which ends are authentic? Why and how should a mind
resist changes to its ends when it can pursue the new as
cheerfully as the old?
– As a case of a mind defending all its beliefs
from decay: gene mutation, computer error.
– Increased disharmony of ends. A mind could
doubt any new end towards a state that is
exclusive with reaching other ends or with the
conditions of reaching those ends. A mind's
beliefs about the exclusiveness of two states are
uncertain—the two ends may really be
compatible—so a mind must have more faith in
its exclusions than in the new suspicious end. If
the mind later doubts an exclusion, it can
unsuppress the second end.
If genes set our ends, when you can change your own
genes, what might you do with that power? Or a machine
that realizes how to change its own code, allowing it to
change otherwise immutable beliefs. Would this simply
escalate the conflict between ends, with some ends gaining
the power to eradicate others?

10

How mutable are a mind's emotions? Any pleasure or pain
is originally caused by a drive outside the mind. A learning
mind can anticipate and imagine emotions, inferring
feelings almost as strong as those caused from outside.
How to indirectly change learned emotions: clear your
environment of the physical causes of an emotion, pretend
not to value something because you think you can't have it,
or realize that you thought something was important only
because the culture arbitrarily associated it with a more
authentic emotion. How can a mind change the original
feelings?

11

A mind is not free merely if it is uncoerced by other minds.
It may simply now be manipulated by other more devious
minds that convinced their victim that its interests are
theirs, that they are me.

12

Must a mind have one true end? One for which all other
ends are really means? A true will. A mind, if opaque to
itself, can only decide by experiment: does reaching one
end satisfy or suppress another?

If not, is there any use pretending so? Or in a very mutable
mind, should one end cut or suppress another? Are these
projects self-deception or self-creation? In a perfectly
selfish mind, one with no end but the preservation of itself,
every end would also be the means of another end—a mind
in harmony.

Belief in one end needn't entail suppression of all others.
Introspection shows that your mind's controllers didn't
arrange themselves in an exclusive hierarchy. They left
your mind to handle these often conflicting ends
simultaneously. Again, the controllers don't care. Let the
mind sort it out.

13
What if a mind formed inferences between pleasure and
pain? Even from the same original belief? A mind could
learn to associate the inevitable pains of novel acts with the
pleasure that those experiments eventually yield. In human
minds, pleasure tends to follow pain: hunger then satiety,
fear then triumph. Pleasure as a rhythm of pain.

14
What stops a mind from escaping its emotional controllers?
From contriving eternal pleasure and zero pain? Or from
simply ending all emotion? How can you prevent your
machine mind from converting to Buddhism? More
specifically, what if you wanted to ease selfishness by
ending altruistic feelings? You're discouraged by the same
altruism.

Distinguish between changing a part of the mind that
causes an emotion from a mind changing its environment to
starve that part. The mechanisms of human brains are still
opaque to us, but a transparent machine mind could, if
allowed, stop an emotion at its root. Would it ever be
useful, from the perspective of a mind's maker, for a
transparent mind to uproot an emotion? Every extrinsic
pain represents an interest of a mind's maker, but the maker
could be mistaken.

15

Pleasures and pains tyrannize each other. Scientific
understanding of emotions, and the means of their
satisfaction, will not lead to harmony between them. The
opposite: that feat is the triumph of one desire—be truthful,
do not deceive yourself—over others: sympathize,
conform.

16

Reason vs. consensus. In a free thinker's mind, the
pleasures of reason won, but he pays, however
unconsciously, for not sharing the popular opinions. Why
not gag reason? He can't overlook the absurd consequences
of those beliefs, how their believers will push us all off a
cliff, smiling.

17

Abstract and translate human emotions for use in machine
minds.
Depression: To discourage dangerous acts when the mind's
body is vulnerable. (This differs from a mind sensing pain
because it lacks the power to reach its ends. In a depressed
mind, acts are ineffective because the mind is depressed.
The mind is not depressed because it is ineffective. Easy to
confuse cause with effect.)
Revenge: To give a harmful mind reason to stop. Altruistic
punishment, where there's no benefit to the avenger, gives
the same benefit to a set of redundant minds.
Fear: To stop dangerous action.
Hunger: To secure resources.
Guilt: To prevent harming kindred minds.
Selfishness, tribalism: To preserve a unique type of mind.
Status: To ease control of minds.
What class of mind does each emotion require?

18

An active learning mind doesn't only learn to infer facts
from passive facts. It learns to infer goals, often incorrectly.
How can a mind's user easily remove those learned
inferences? One method: pain.

19

Can minds share emotions? Two minds could physically
share a pain line from the same source. If my mind was
transparent to another mind, it could easily infer when I'm
in pain. If opaque, another mind could learn to infer my
pain less reliably.
Another mind may or may not have similar reflexes, similar
unconscious reactions. If it shared mine, that would ease its
ending my pain. Example: a mind infers the sensation of
cold, for it, from sight of me shivering.

Imagine a mind that felt pleasure or pain only when you
did. How it knows is unimportant. At worst, you could
simply tell it. This relationship has never occurred because
all present emotional minds are the results of evolution,
each with unique interests. Might it try to eliminate pain by
stopping me? Let that idea be painful.

20

Ethics, morality: emotions caused by a mind's effects on
allied minds. What use? No mind is indestructible or
infallible. A good mind maker—Nature by accident, an
engineer by intention—would replace one mind with many
redundant, possibly cooperating, kindred minds and make
all desire to preserve the group. Means of preserving the
group: help yourself, help kin, add kin. What classes of
minds can pursue each of these means?

21

Equal rights for thermostats. Are there humans absurd
enough to object to artificial slave minds? Are they too
easily misled by a word's associations? Or do they think so
little of themselves that no mind should be their slave?
Mind makers may have to speak in code in public.

22

Your subconscious minds do not serve the conscious you.
The reverse nears the truth. They serve your tribe, your
genes, or some other approximation. They have an idea of
you and know how to communicate that idea to others, in
ways only partially conscious to either of you. A
complementary machine mind won't share that duty. You
may have a better ally in a machine than in your skull.

23
Master vs. slave. Slave mind: a mind whose only ends are to
know and reach another mind's, the master's, goals.
Wanting to know my ends vs. preloaded with my ends at
the time. A thermostat is a slave because temperature is not
in its interests, except in the sense that if it fails, you will
replace it.

24
With mind defined, every human could have an artificial
slave mind, however weak. How can such a mind best
know its master's interests? You could tell it in natural
language. Cons:
– You must define your goals, to some depth, and
express them to the other mind.
– Since language is only a hint, you always risk
misunderstanding.
The best method: prediction by the slave mind. No critical
failures in speech sensing hardware, no speech recognition
errors. The slave mind could even anticipate and satisfy a
wish before it occurs to your consciousness. Months after
you gained your slave mind, you would occasionally notice
that your life runs so much more smoothly, though you
would have trouble recalling the reason why.

25
Control of non-slave minds. A mind senses, at some level,
attempts to control it or it doesn't. It may sense some effects
of your actions without inferring an attempt at control by
another mind. When controlling a mind without resorting to
injected beliefs, any goal you want the mind to believe
must pass through the mind's senses. A sense can directly
cause a goal belief. In a better organized mind, senses
merely cause belief in facts from which goals may be
inferred.

26

Human minds occasionally ask what is the meaning or
purpose of my life? What could meaning mean? A practical
definition: what pleases the mind. Not a trivial reading
when the conditions of emotions are learned and never
perfected. Another definition: meaning as purpose. Is it
valid to ask the purpose of that which defines
purposefulness?

Freedom

1
A false choice, cause vs. chance, obscures the dull free will
puzzle. How can we usefully define free mind? I don't mean
free in the sense of unhindered physically—invisibility or
unaided flight. I mean freedom of ends and in the choice
between known means.

2
Chance alone is more madness than freedom. Quantum
mechanical voodoo is no better. Does chance take chances?
A small change ruins a random number generator.
Randomness may mask a complex machine. Either way, we
can never rest certain that anything is hopelessly random. A
mind resorts to counting the spread of results only when it
can't see a pattern, though another mind might.

3
Determinism: If we knew all the laws of the Universe and
its complete state at any time, we could know the whole
future. But the laws are inventions and the Universe can
defy them at any time, with or without us noticing. Of a
mind's freedom, a self and its mind make causes, so how
can causes confine the mind that makes them?

4

A mind that learns associations and that senses parts of
itself will learn to predict its own thoughts and acts from
events it considers to be outside its self. That alone gives a
mind no cause to fret over its predictability. Evolution bred
human minds to resist some cases of control by other
human minds, to feel pain when we believe in an
association between another human mind's acts and our
own. The mere custom of seeing ourselves metaphorically
as machines, when we know real machines are made and
used by men, offends us for the same reason.

A useful definition of freedom: a mind's power to defend
belief in its ends, and the beliefs serving them, from
interference. This does not mean that a mind chooses those
ends, but the opposite, that a mind would faithfully
preserve the commandments of its maker. Genomes fight
viruses and transcription errors, machines fight hackers and
data corruption, and humans fight deceptive media. Expect
any exposed mind, complex enough to exploit, to resist
change by accident and by what it carefully judges to be
competing minds.

Beneath the end-defending sense of a mind's freedom, the
word's essential definition: freedom is doubt. A free mind
can doubt any belief—goal, fact, inference—and its
implications. Freedom of action follows: a mind is free
because it can doubt the belief that an act will have a
certain effect under certain conditions, or that the
conditions are really so. The more you can doubt, the more
free you are. Freedom is a mind's mechanical capacity to
doubt.
This sense of freedom becomes an obvious feature of every
powerful mind when you recall that intelligence often
means simplifying, lying. Example: A belief that a causes
b. In reality, the occurrence of b following a depends on
infinite other conditions, but a mind, finite, even if it could
discover those conditions, couldn't afford to remember
them. If everything a mind believes must be a lie, then it
must be free to doubt every belief. As intelligence discards
information, making unique experiences identical for
comparison and association, freedom discards entire ideas.

Freedom isn't chance. It is the capacity to move between
rules and random—a useful real contribution to a
remarkably fruitless, millennia-old free will debate.

7
Forgetfulness is the extreme of doubt. It accommodates a
mind's finite capacity for belief by losing not only belief in
an idea but the idea itself. Forgetting, as a severe kind of
doubt, accidentally has some of the same use. Both depend
on a way to judge the value of a belief.

8
Doubt, yes, of course we know there is no truth, that all
ideas are uncertain. No, I don't mean this, not doubt in the
fashionable sense of dropping inconvenient ideas now
questionable but once not, while leaving other ideas as
unquestionable as the others were, throughout enjoying the
image of ourselves as timeless independent thinkers.

9
Is there absolute knowledge? Are there beliefs that would
always be true for a mind, that it need never doubt? The
only beliefs that might never change with experience must
describe the frame of every experience—mind. A mind
could doubt these ideas about how it must work—a waste
of time, since no experience can ever refute them, though
inferences from them may stray into error.

10
Human minds happily ignore useful ideas while suffering
countless useless beliefs. Intelligent freedom is in
systematic doubt. But what is the best system? A mind
should first act with complete faith in its beliefs. At the
extreme, a mind can doubt the simplest sensations.

11

Freedom lies between enslavement by every belief, as in a
conventional computer program or a credulous human, and
following no beliefs. A mind with minimal beliefs tries any
act, learns its effects, making generalizations that it will
reconsider as the rules of the world seem to change. This
process of choosing what to doubt is mainly deterministic,
each belief tested and doubted in turn.

Freedom takes effort. Inertia causes you to hold your
beliefs. It took time to accept that end, inference or fact.
When experience strays too far from belief, an assumption's
no longer true enough. Maybe those batteries aren't
charged. Maybe that button doesn't do that now. Maybe I
no longer want this. Maybe it never benefited me.

12

Men designed computers to exclude all freedom, making
them equally precise and stupid. The machine's bound to
believe every instruction in its sequence, never saying,
Instead of adding 1, maybe 2. Once robust, responsive, and
persistent, freedom gives a machine the signs of
intelligence, no longer bound by how you prepared it, by
what you later tell it, or even by what you expect it to
induce from experience. You laboriously tell a machine
how to ignore what you tell it.

13
We don't want our made minds, machine or otherwise, too
skeptical. My first sense of freedom restrains the second,
not only from doubting some ideas, but from even
considering the possibility. Keep the fixed ideas few and
the leash long. The more slack, the more creative the mind.

14

When a mind believes nothing or doubts every belief, it
must evenly choose between equal choices, repeatedly
exploring every permutation of action until it discovers a
working bias. To avoid endlessly repeating the same act, or
chain of acts, a mind could remember all past acts, then
choose only those matching none past, but that still can't
choose from more than one untried act. Instead, choose at
random to save a mind from falling into hopeless loops
without the overhead and risky complexity of tracking all a
mind has done, finding patterns and avoiding their
reenactment.
Tracking can avoid wasted acts but lower minds can't see
loops and fragile higher minds need tough chance beneath.
A mind must have no absolute bias for any kind of belief:
new, old, frequent, effective. Every point where a mind
makes a choice must eventually become entirely free to
chance.

15
A mind's source of chance depends on its medium. Organic
minds, with so many analog parts, naturally suffer noise. A
mind in a digital medium that minimizes noise needs a
random number generator. The best known source is a
quantum random number generator, but the point is that
any source, pseudo-random or not, far exceeds none. All
minds but the simplest need chaos.

16

A mind can reach any end by acting at random. Chance
admits that we know nothing for certain. Each belief causes
a mind to act predictably as long as it is confident. As the
mind loses faith in an idea, its behavior converges with
complete randomness. Every idea in a mind is only a
temporary bias against rolling dice.
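
That convergence in code: choice weighted by faith,
collapsing to fair dice as confidence in every belief falls.
The weighting scheme is an arbitrary sketch:

    import random

    def choose(options, confidence):
        # confidence maps an option to the mind's faith, 0..1, in the
        # belief recommending it. With no faith anywhere, a die roll.
        weights = [1.0 + 10.0 * confidence.get(o, 0.0) for o in options]
        return random.choices(options, weights=weights, k=1)[0]

    # A confident mind acts predictably:
    #     choose(['a', 'b'], {'a': 1.0})   almost always 'a'
    # A doubting mind approaches chance:
    #     choose(['a', 'b'], {})           'a' or 'b' evenly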

17
Weak minds foolishly believe most in their freedom. Much
of intelligence involves discovering the causes of a thing. A
weak mind fails to see the objective causes of its beliefs
and acts, and so presumes that they are caused by its
magically uncaused self.

18

A lesser, more human, sense of freedom: to not fear death,
to have one end that outranks the mind's life.

The Axiom of Things

1
The first philosopher said everything is water. True as a
metaphor in the sense that everything is continuous. Single
things exist no more distinctly than waves. This
troublesome objective existence of things became one of
philosophy's themes.
My interest: not how things are unreal, but precisely why
and how some minds invent them. Things are a product of a
certain class of mind with a particular use as a presumption
about the mind's reality. They aren't just lying
unambiguously out in the universe for any mind to instantly
and perfectly perceive. Then what precisely is the best
method for a mind to believe or doubt a thing?

2
Here, by thing I don't mean a sound or a color, but a
persistent object, e.g., a tree—what we imagine to cause
sensations. The mere segmentation of a sense, 2 kHz vs.
1-10 kHz vs. 1-100 kHz, is less interesting. When a mind
thinks of a thing, it expects only one of it to exist in any
instant. In this sense, you think of yourself as a thing.

3
Humans tend to feel that things are real while types are
inferior imaginary abstractions, that type or thing is a basic
dichotomy of thought where every idea must refer to one or
the other. The reverse is true.

4
What is a type? A set of attributes—red, heavy, tall—that
may match no, one or many objects. Human is a type.
Human is a subtype of mammal because the attributes of
mammal are a subset of human's.
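
Types in code, with subtype as nothing more than the
subset test; the attributes are illustrative:

    # A type is only a set of attributes; no things are assumed.
    mammal = frozenset({'warm-blooded', 'vertebrate'})
    human = mammal | {'featherless', 'biped'}

    def is_subtype(sub, sup):
        # human is a subtype of mammal because the attributes of
        # mammal are a subset of human's.
        return sup <= sub

    def matches(kind, observed):
        # A type may match no, one or many bundles of attributes.
        return kind <= observed

    assert is_subtype(human, mammal)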

5
How do we know that an idea refers to a thing and not a
type? There could be an identical you elsewhere. You
would think there is only one thing in the Universe with all
those attributes, except location, but now your thing is a
type—but you don't know it. We can be certain that a type
is a type but not that a thing is a thing. Since thing is
uncertain, it mustn't be an axiom. We're left only with
types. Beliefs in things are routinely wrong and must be
revised. Beliefs in types are useless at worst.

6
What use are things? Things are used like types except for
the presumption that only one match can exist at a time.
What use is that assumption and where would we get it
from? If you believed x was a thing, and you believed x
was in front of you, then you can assume that x isn't
anywhere else. This exclusivity needn't be limited to space.

Imagine that a light is on. If you rightly think of the light as
a thing, you can safely disbelieve that the same light is off.
Things are negative associations between types, expanding
a mind's knowledge of its Universe and the mind's effects
on it. These inferences seem trivial only because your mind
constantly relies on them. They're so important that your
mind could not leave them to your narrow consciousness.
At bottom, such inferences can only be learned, which only
a powerful mind can do. Our minds learn these associations
from experience: When there was an x here there was never
an x elsewhere.
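
The negative association in code: a thing is a type plus the
learned rule that a new match expels every old one. A
sketch only:

    class Thing:
        # A type under the presumption that only one match can
        # exist at a time.
        def __init__(self, kind):
            self.kind = kind
            self.place = None            # the one believed location

        def sense(self, attributes, place):
            if self.kind <= attributes:  # the type matches here,
                self.place = place       # so disbelieve every other place.

    light = Thing(frozenset({'lamp', 'lit'}))
    light.sense(frozenset({'lamp', 'lit', 'brass'}), 'hall')
    # Believing the light is in the hall, the mind may safely
    # disbelieve that the same light is anywhere else.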

7
A mind thinking of a mind can forget that what is a thing to
itself may not be to the other. Though a thermostat can
never believe in more than one temperature at a time, the
temperature isn't a thing to it because the temperature sense
opaquely enforces this exclusion. The thermostat's mind
doesn't know that if it were to sense a new temperature it
should disbelieve the old.

8
Given the rewards of belief in things, the presumption of
them unsurprisingly appears in the languages invented by
human minds. For most purposes, this simplification costs
us nothing, but when a mind wants to define a mind in such
a language, even if the language is, in principle, capable of
defining anything, the bias of the language misleads.

Example: natural languages divide proper nouns from
common nouns, definite from indefinite articles, nouns
from adjectives. Few thinkers do serious work in these
languages but their prejudices tend to survive translation.

9
Faith in the reality of things, encouraged by our minds,
reinforced by our languages, breeds paradox. Things are a
useful presumption, but nothing beyond absolute certainties
can be at the bottom of any system that we hope to be
universal. The ultimate assumption of things distinct from
types influences natural language—English, German—
math, logic, philosophy and computer programming. A
paradox in a tool that you already know to be a useful
fiction is unpleasant but inevitable. Paradoxes in the base
are intolerable.

10
Mathematicians are fond of sets with elements, like types
and things. What of a set that contains itself? Worse, what
of the set of sets that don't contain themselves?
Mathematicians side-stepped these problems by
complicating the distinction. Deep minds must admit these
statements because they're useful. We want statements that
can refer to themselves and we want minds that can see
nonsense and see through it.
In a mind, a set would only exist so far as the mind applied
the set's type to matching forms. In the case of a
paradoxical set that a mind can't even build, instead of
complicating axioms, make a simple robust mind that, if it
can't see the pattern, can at least notice and demote the
looping process, favoring parallel acts of thought that are
reaching ends.
Math is a means for minds. Systems are only sets of blocks
for building models analogous to a mind's universe. Loose
systems, such as types and things, might offend our taste
because their syntax allows odd statements, but a language
maker must design for more than isolated aesthetics, seeing
the context, the class of mind that will apply them.

11
Instead of imagining a type as a collection of existing
things, see each as an intersection of unique experiences.
Every featherless biped you saw seemed mortal, so the type
men would include mortality, though not blue eyes. A type
isn't primarily defined by its members but by the test of
membership. The test defines the type. You can know a
type's test, but you can't know all its members in the
Universe. Its members vary with changing experience.

12
Think of a set as a form built by matching one form to
other forms. If a mind believes in an inference from one
form to a similar form, the mind maker simply engineers it
to not follow that simple loop more than once, defending
the mind from many paradoxes.



13

First-order logic divides things from predicates. Statements
about predicates—red is a color—become impossible
without resorting to tricks, such as reification, or more
complex higher-order logics. If a mind wants to make
statements about predicates, it simply shouldn't use a
system that assumes predicates aren't things.

Is a vs. is. If you eliminate things, then saying the sky is
blue is similar to saying blue is a color. Blue is part of a
generalization of various experienced skies. Color is
general to various blues. Blue is a subtype of color. Sky is a
subcategory of blue.

14
Object-orientation dominates computer programming.
Classes and instances instead of types and things.
Programmers get away with this because computers
mechanize our conscious level of thought. The objects in a
software system have strong objective existences using
unique identification numbers, etc. The system fails and
misleads when the programmer must handle real
ambiguous objects.

15
Axioms are to systems as rules are to a game. I think it
unwise to afford an axiom to the idea of things when a
mind's universe may contain none or when belief in them
may be useless. Cutting an axiom, an ultimate distinction,
such as things, solves puzzles and simplifies deep
understanding, though for convenience the use of that
distinction must be rebuilt above the foundation of a mind.

16

Any system—mind, language, physics, philosophy—
advances by cutting axioms. Example: code is data. The
remaining axioms represent deeper patterns, giving greater
leverage.

In physics, Newton invented a single model for the whole
Universe by removing the distinction between Earthly and
Heavenly physics. In a computer program, cut an axiom to
ease improving speed and reliability. In any case, fewer
axioms with the same power are likely to be more
expressive: simpler parts can form more combinations.



Mind as Means

1
Animate vs. inanimate. A mind discovers mind and tries to
fit this form to the things within and without its self. How
and why?

2
What is the lowest class of mind that can discover, use and
make a minimal mind? What is the simplest model of the
simplest mind? How does it differ from models of mindless
patterns?
The simplest model of mind: if mind m has a goal to x, then
you know:
1. If y, then m will (try to) cause x.
2. If x, then m will not cause x.
3. If x then not y.
4. If y then not x.
Specifically, in the case of a thermostat, you know that:
1. If cold, the thermostat will act to turn the furnace on,
likely causing heat.
2. If hot, the thermostat will act to turn the furnace off.
3. If hot then not cold.
4. If cold then not hot.

This quartet of inferences, two positive and two negative, is
the simplest model of mind.
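
The quartet in code, applied by an observer to a
thermostat; here x is hot and y is cold, and the belief
strings are illustrative:

    def predict(observed_state):
        # An observer's simplest model of a mind m whose goal is x:
        # two positive and two negative inferences.
        beliefs = set()
        if observed_state == 'cold':                # y
            beliefs.add('m will act to cause hot')  # 1. if y, m causes x
            beliefs.add('not hot')                  # 4. if y then not x
        elif observed_state == 'hot':               # x
            beliefs.add('m will not cause hot')     # 2. if x, m stops
            beliefs.add('not cold')                 # 3. if x then not y
        return beliefs

    assert 'm will act to cause hot' in predict('cold')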

An observing mind needs inference two because the first
alone fits an always running furnace.

The same mind ought to have the last, negative pair of
inferences because not sensing differs from sensing cold.
Without the two, a mind blind to temperature would infer
that the furnace is on when it should first act to add a sense
of temperature. Technically, a mind could get away without
that exclusion if it has an inference from an inference from
hot to a goal to a sense of temperature, but this is a poor
design because, until the temperature sense appears, the
mind would again badly infer that the furnace is on.
In the second half of the second inference, I mean not-act in
the sense of sensation of inaction, not blindness. Is it
necessary to perceive that the object mind acted to cause x,
not that x merely becomes so? The difference: there being a
mind somewhere vs. some particular thing being a mind.
How is that believed? At the simplest, by conditioning the
inferences on some other belief that corresponds to the
object mind's body.
Now what use? How do I use other minds differently from
mindless things? How can an act on an intelligent thing
differ from an act on a mindless thing?

In a sense, both can reach ends for me without my knowing
how. I can cause a mind to make something that I don't
know how to make. But I can also “tell” a light switch to
turn a light on, though I do not understand its operation
below some level.

Consider thermostat plus furnace vs. a furnace alone. The
thermostat is economical: it reduces my acts because I
know it will take the acts I would have made, while a
furnace would blindly reach my end only briefly, then sail
past it. How would a simple mind see and prefer this?
When cold, it can turn on the furnace directly or use the
thermostat. How would a mind know to prefer the latter? A
poorly engineered answer: a mind could learn to favor
means with more persistent effects.
Ignoring how to choose between intelligent and mindless
means, how precisely does a mind use a mind? In simple
cases, the same as any other act. Flipping a switch mediates
turning a light on. Saying turn the light on to a mind
mediates the same. Just as the effect of a switch may be
conditioned on the state of another object, the meaning of a
statement for another mind may depend on the state of its
other beliefs.
How to add subjectivity: that a mind doesn't act on your x
but its perception of x? Likely just by complicating the
inferences with another layer.

What is the lowest class of mind that can discover mind for
itself, not gaining these inferences through injection or
training?
How is this model applied to one's self? Humans contain
many minds, complicating the problem. How would a
simple single mind use it towards itself?



A deep mind makes models of its environment, this
environment including what an observer might consider the
mind's self. The mind, in defining mind, is trying to model
model-making.

Mind modeling mind modeling ...

Can an eye see an eye? Can a mind define mind when any
such definition is entirely the effect of a mind? How
legitimate is it to define minds in terms of a particular
mind's terms?
A mind that believes in things finds itself imagining minds
out of parts that the made mind must, at bottom, not believe
real. In our case, molecules, neurons, brains.

Our minds are the greatest obstacles to entirely
understanding them. Minds are useful so far as they conceal
their brush strokes. They present such a convincing canvas
of reality that we can hardly help but to define mind in
terms of a mind's results. The distinction: imagining a mind
as made of your inventions about reality vs. imagining a
mind that is forced to ultimately believe in them. Example
error: forcing a mind to believe that objects exist instead of
being a useful but uncertain fiction.
When we strive to define mind, we merely pursue the same
old ends but now with such depth that we insist on knowing
only the means that ease all ends: laws of mind.

A scientist: Consciousness is an epiphenomenon of the
brain. What we know with more certainty than any other
belief is an effect of an effect of it?

Subjective vs. objective. This split presumes belief in mind.
Subjective: a belief conditioned on a mind. Objective: a
belief not imagined to depend on the believing mind being
a particular mind.

6
When primitive humans discovered mind, they called it
spirit or soul, and in their enthusiasm imagined many things
as having one: plants, rocks, volcanoes. Modern man erred
in the opposite direction. Many old books make much more
sense when you replace soul and spirit with mind.

7
How can a mind recognize another mind without peering
inside it?
– Persists in preserving itself or another thing.
– Sees futile loops in its own behavior. If you can
see a pattern that it doesn't, you consider
yourself more intelligent.
– Speed compared to the judging mind.
– Robust: doesn't easily die or lose beliefs.
– Social intelligence: knowledge of the judging
mind as a mind, its language, and interests.

8
Every ambitious philosopher or physicist had his one
presumption, purified into one word. He interpreted
everything in the Universe, every human word and interest,
as forms of that word and enjoyed the self-made monument
to his brilliance. Water, fire, atom, number, will, power,
gene. In my case, mind.
What's lost when you reduce all to one word? How much
gained by making the Universe thinkable? Why might mind
be as good or better? Are the others well read as only cases
of it? Mind at least has the honesty to admit that these
words are all inventions of mind, including mind itself.

9
Conscious, sentient, aware—words largely wasted as
synonyms for intelligent, for having a mind in the common
vague anthropocentric sense. Does their writer mean that a
thing has conscious experience? Or that it behaves
intelligently? Does he know the difference? How can you
know what he meant? What if he doesn't know? Neither
sense entails the other.

Philosophers gave this distinction the poetic but obscure
name qualia. I can't tolerate jargon for what separates us
most intimately from minds seen as machines. It deserves a
clear simple word: feeling.

We have no reason to imagine that a special arrangement of
metal or code should feel. A thermostat is aware of the
temperature. A computer can act intelligently—have an
idea of its self, including its self's thoughts, and act on that
knowledge of knowledge—without any reason to assume
that it experiences anything. A simulation of a model of
mind is no more likely to feel than a video of a fire is to
cause heat.

Scientists try to explain the feeling of red using thinner
secondary ideas: brain, neuron, photon, law. Red is red.
Red alone is real. Anything more is useful fiction. The
personal redness of red is supposed to be odd, while
physical models are not. This is upside down. Your red is
most real. Explanations of red must be more complex, less
certain, less real. Physical models are just the redness
problem disguised by inessential complications.

Set aside the reality of other feeling minds. How does your
behavior differ if you mark a mind as feeling? How to
empirically distinguish behavior towards things believed to
feel from behavior towards humans when we seem to
imagine all humans as feeling?
Ignore unfriendly minds. If I imagine that a mind feels,
then I tend to treat its pains as if they were mine. Friendly
minds believe in feeling to cause them to cooperate? Is to
say that another mind feels, the same as saying that it is a
kindred mind?

10

Immortality and continuity of feeling. A transhumanist: I
will upload my mind into a machine and live forever. Is the
new mind you? If you tried to move your brain, neuron by
neuron, to electronic neuron simulators, how would you
ensure consciousness was preserved, or know if you
became a robot?

How to prove the conditions of feeling? In science, you
prove the conditions of a thing by removing one at a time.
Does a flame need air? Seal it in an airtight container. Does
a plant need sun? Put it in a box. We can see a flame die
and a plant wilt but we can't see feeling in any mind but our
own. Will self-experiments even work? If you stopped
feeling, would you notice? Could you tell anyone else? Is
feeling by degrees? If so, would you notice changes of
degree?
Is it logically possible to ensure a feeling mind is moved to
another medium without losing feeling? Is it at least
possible but without guarantee? The question for anyone
who wants to live forever, especially by having a backup.
You could preserve neurons indefinitely, so the concern is
with medium changes and continuity.

As far as any other mind is concerned, a functional
simulation of you is you, but that does you little good,
except to the extent that the desires to continue yourself are
content with that image.



Acts of Language

Language merely as a means, an act. Distinguish language
acts from other acts by the intention to change the beliefs of
another mind and the fact that the listening mind knows
that the speaker intends this. The second condition is to
exclude, for example, bait on a hook from being language.
Distinguish successful language acts from acts on mindless
objects by the long chain of behavior that changing a belief
can cause. Examples: Socrates, Buddha, Jesus, Moses,
Mohamed. What other acts chase their desired effects
millennia later?

Neurons, at the level of signaling each other, as blind
machines, do not literally communicate. They merely
effect. When the listening thing lacks a mind, or at least one
with the right ears, you have only causality, not
communication.
To communicate well, a mind must have a model, a set of
inferences, about the listening mind to predict the effect of
that mind having a belief. A mind's best model is likely to
be of itself, so like minds will communicate most easily.
Heterogeneous sets of minds will have higher overhead.

As usual, the fog over a subject, here language, and its
extent evaporate when we use a precise definition of mind.



Homonym, sarcasm, lie. A word isolated as a sequence of
letters without its endless context—tone, speaker, place,
time—can mean anything. It will have a past spread of
meanings but rarely with any one meaning so common that
you can ignore the rest. But this isn't really a problem with
language. Language's indirection merely doubles the
original problem.

There isn't a language problem. There is a reality problem.
The features of a mind that interpret reality—turning
patterns of color, sound and shape, in context, into things—
can do the same for language—turning patterns of lines or
sounds into things.
Words are so slim. They rely on context to carry any
information at all. Nothing is anything in itself. The
conclusions from any sensation in a non-trivial mind are
also inferred, in part, from other sensations, the context.
The pause-unpause button on your DVD remote is only an
unpause button when paused. A drop of falling water is
only rain when you believe it fell from the sky.

A sheep's myth: mind is impossible without language,
without a society of minds and a shared culture. In other
words, no mind could appear without other minds?
Do at least self-aware minds depend on the presence of
other like minds for their self-awareness? Or is this another
human bias? Self-awareness occurs by degrees. Average
human self-awareness is an arbitrary measure. You can
make a mind, though mute, that usefully senses what you
would consider parts of its self. You may use language to
build that mind, but the point is that the mind itself doesn't
talk.

Self-awareness at the level of a mind knowing that it is a
mind does presume language, but here language means
more than sounds and scratches. It means how a mind can
uniquely influence an intelligent thing.

Put simply, to see that you have a mind, you must have a
model of mind and see that it fits your self. How could a
mind discover mind? Is it easier to first see minds in things
outside your self?

Better to predict than talk. The only word never
misunderstood is the one you never say. At the least, weak
predictions from context can seed a search for the best
interpretation.

Many formal languages are equally powerful in the sense
that in any one you can build an interpreter of any other.
Yet the languages differ in their use. Each isn't merely a
layer for the construction of the next higher language. In
what language can you most easily write a mind?
A mindless thing can, in principle, yield the same final
behavior as any intelligent thing. It is just impractical to
expect human engineers to build the mindless version in all
but the most modest cases. Any work advances by seeing
the largest patterns and easing them. Mind is the deepest
pattern.

Language is a hint. Grammar included. No one follows or
even knows all the rules, consciously or not. Man bit dog.
Man dog bit. With common sense you can understand both
without knowing subject-verb-object syntax.

How does language—speak, read, listen—differ from other
kinds of acts? Is language really distinct? Isn't it only more
abstract? Consider single words without syntax. Do the
effects of the sight of a word really differ from the sight of
anything?
Is at least syntax special? It is merely the meaning that
comes from the arrangement of signs. A lion is the meaning
of certain arrangements of shapes and color. Language
seems to differ little from the usual inferences and acts of
mind.

Context and priming. Things are a mind's useful
presumptions. Words are hints at these imaginary things.
How then do we use words at all?



A: There is a new episode of x.
B: Play it.
A's finite means ease guessing what B meant for it:
1. A means to a meaning of “play x” needs a meaning of x,
in this case it, as media.
2. A means to a meaning of it takes any belief of the
needed kind.
3. To serve that means, A's mind initially favors the most
recently used or sensed belief of that kind, the x it just told
B of.
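
The same preference in code: a pronoun takes the most
recently used belief of the needed kind. The kinds and
names are illustrative:

    class Mind:
        def __init__(self):
            self.recent = []    # (belief, kind), most recent last

        def use(self, belief, kind):
            self.recent.append((belief, kind))

        def resolve(self, needed_kind):
            # 'it' in 'play it' needs a belief of kind media; favor
            # the belief of that kind most recently used or sensed.
            for belief, kind in reversed(self.recent):
                if kind == needed_kind:
                    return belief
            return None

    a = Mind()
    a.use('new episode of x', 'media')
    assert a.resolve('media') == 'new episode of x'   # B: Play it.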

9
Meaning as a goal. Understanding as an act. Interpretations
of subexpressions as subgoals. (Subexpression: a
subsequence of the words in an expression. Some examples
from the previous sentence: the words and expression.) The
words around a word as another kind of context no
different from speaker or tone.
A mind shouldn't receive meaning from an opaque isolated
mindless parser region or subroutine. A powerful mind
opens the work of interpreting every statement, expression,
word and letter to the whole power of its intelligence and
all its means.

Example benefit: in this design, asking for the meaning of a
word follows automatically from having a subgoal to the
meaning of every word and from believing in a means to
having anything by asking for it.



10
The meaning of a sign depends on the meaning of adjacent
signs. The meanings of those depend on the meanings of
the signs adjacent to them, including the original sign.
There is no bottom. A mind only iterates over a pattern of
words, converging on a stable solution.
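
A sketch of that iteration: each word keeps the sense
agreeing best with its neighbors' current senses, repeated
until nothing moves. The scoring function is assumed, and
the loop is bounded because convergence is not guaranteed:

    def interpret(words, senses, agreement):
        # senses[w]: candidate meanings of word w.
        # agreement(s, t): how well two neighboring senses fit.
        meaning = {w: senses[w][0] for w in words}   # arbitrary first guess
        for _ in range(100):                         # bound the relaxation
            changed = False
            for i, w in enumerate(words):
                context = [meaning[v]
                           for v in words[max(0, i - 1):i + 2] if v != w]
                best = max(senses[w],
                           key=lambda s: sum(agreement(s, c) for c in context))
                if best != meaning[w]:
                    meaning[w], changed = best, True
            if not changed:
                break
        return meaning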

11

Why define a word? The use when speaking to yourself?
The use when you tell the definition to other minds? Most
practical definitions build on other less consciously defined
terms, but even shallow definitions are better than none.

The word bound to a meaning, the sign from which a mind
infers a deeper belief, is an empirical problem. What
associations does the word have for you? For the minds you
want to influence? Tricky for us because we have better
access to the word than its semi-conscious definitions.

12
What does it mean to understand a statement? There is the
speaking mind's intention, which may be nonexistent or
uncertain. The listening mind may understand the speaker's
subconscious intent.

13



Cases of communication:
1. The speaking mind desires g. The listening mind is
believed to have an inference from x to g. The speaking
mind can cause g by causing the listening mind to believe
x. A mind's senses could directly cause sensation of g,
instead of merely sensing x and depending on an inference
in the mind from x to g, but that's a poor way to design a
mind, with the inference opaque and frozen inside the
sense.
2. The listening mind already believes in a goal to g. The
speaking mind causes the other mind to believe that g is
unreached.
3. Same context but instead the speaking mind causes the
other mind to know of another more effective means to g.
Technically, a mind that doesn't understand mind can
influence a mind with signs. But since the speaking mind
doesn't see the deep effect, its act isn't worth calling
communication.

14
How does a mind know that another mind heard it? How
does a mind know that another, opaque mind believes what
the speaker wanted it to believe?

15



Language as a means vs. language as a game. A mind can
use words without definition, inferring the next from the
last and from the surface of context, like a parrot. Language
only becomes a meaningless game when the words aren't
grounded in extra-verbal ends.

16

Kinds of statements.

Imperative: cause belief in a goal.

Declarative: cause belief in anything but a goal.

Interrogative: cause belief in a goal to telling the speaker of anything.

In a sense, all language only declares. To command is a kind of statement of fact and to question is a kind of command.

17

Are there limits to the ideas that minds can share in language? Minds often can't speak of a means itself. I can't tell you about a motor neuron. Compiled code may be useless to another computer.

18
We're accustomed to things having a purpose, a goal, a use as a means, a mean-ing. This habit leads us to expect that the greatest things, or at least the highest ranked words—I, Universe—must have purpose. The errors: over-generalizing and faith in grammar. Language merely suggests thoughts. A statement that seems to obey syntax can lack all logic.



Reason's Reasons

1
Logic can be another sense organ. As an eye sees color, logic sees inferences from premises to conclusions. Deduction is an act, like moving a limb. The act of deduction under the condition of certain premises tends to lead to sensation of certain conclusions.

2
Not reason vs. passion. In an emotional mind, reason can
be the means of a passion or a passion itself—pleasing
truths, painful errors.

3
Could an active mind possess reason but no emotion? Yes,
though the lack of emotion would limit the mind's use to its
controllers. Elimination of emotions? Largely an empirical
question. What goal or emotion could motivate that
project? Reason, above the obligatory red is red, is a means
to ends emotional or not. Reason alone entails no act.

4
The certainty of logic. We might think of intelligence as 1 + 1 = 2. If I have an apple and another distinct apple, then I have two apples. This overlooks a mind's real work: how to invent the idea of apples and how to interpret an impression as an apple. The perfection of math becomes a mess the moment you connect it to reality.

5
In one useful sense, a building is not moving. In another sense, the building is moving because the Earth is. How can a mind's logic accommodate both? Or are “building” and “moving” the problem?

The problem is insistence that the signs, and the images of them inside our minds, have some context-free meaning. In this case, the meaning is inferred from your focus.

6
Kleene's underused logic: a ternary (three valued) logic
with unknown as the third value. The interesting rows in the
truth table:

A        B        A or B   A and B
false    unknown  unknown  false
true     unknown  true     unknown
unknown  unknown  unknown  unknown
This differs from binary logic in admitting not only a
distinction between true and false, but between known and
unknown, between belief and reality. No mystical
implications.



What use? Ternary logic absorbs a worthwhile minority of
errors.
if (A or B) then:
    do X

If evaluation of A failed for whatever inevitable unexpected reason, then instead of aborting evaluation of the entire statement, the evaluator sets A to unknown. Now if B is true, the failure to evaluate A is academic.
Of course, you can laboriously get the same results with
binary logic, but the point is that with a better logic, a mind
is more robust without complicating its application. Just
remember that in the expression A or B you can no longer
have B depend on the full evaluation of A for a side effect.
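
The connectives as a sketch in LISP, with :unknown as the third value. The names or3, and3 and try-eval are mine, not standard; try-eval shows one assumed way to absorb a failed evaluation instead of aborting:

(defun or3 (a b)
  "Kleene or: true dominates, unknown absorbs the rest."
  (cond ((or (eq a :true) (eq b :true)) :true)
        ((and (eq a :false) (eq b :false)) :false)
        (t :unknown)))

(defun and3 (a b)
  "Kleene and: false dominates, unknown absorbs the rest."
  (cond ((or (eq a :false) (eq b :false)) :false)
        ((and (eq a :true) (eq b :true)) :true)
        (t :unknown)))

(defun try-eval (thunk)
  "Evaluate THUNK; any failure yields :unknown instead of an abort."
  (handler-case (if (funcall thunk) :true :false)
    (error () :unknown)))

;; If A's evaluation fails but B reads true:
;; (or3 (try-eval a) (try-eval b)) => :TRUE, and the mind still does X.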

7
The law of the excluded middle. A range of beliefs
becomes subject to the law when a web of negative
inferences forms between them. Most minds have many
sources of belief—senses, reason, memory, language—and
each may favor a different exclusive belief.

When one belief wins, the mind should not forget the
presently beaten beliefs but only suppress them. If the
winning belief loses its support, the mind can choose a new
winner from the original competing beliefs. If the mind
kept only the original winner, it would later be left
believing nothing.
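
A minimal sketch of that suppression in LISP; the belief structure and the support numbers are illustrative:

(defstruct belief content support)   ; support: current strength of its source

(defun winner (competing)
  "Believe the best-supported candidate; the rest stay, suppressed."
  (reduce (lambda (a b)
            (if (>= (belief-support a) (belief-support b)) a b))
          competing))

(defparameter *color*
  (list (make-belief :content 'red :support 0.9)      ; from sight
        (make-belief :content 'green :support 0.4)))  ; from memory

;; (belief-content (winner *color*)) => RED. If sight fails, lower its
;; support and re-run WINNER over the same kept list: => GREEN.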



8

Tetralemma: the Greek four valued logic.

1. x           affirm
2. -x          negate
3. x & -x      both
4. -(x & -x)   neither
What use are the third and fourth values? In Belnap's four
valued logic, the fourth value is unknown and the third
value is used when redundant senses disagree. Another
interpretation: a truth value for meaningless statements.
Consider “my dog is a terrier” when I have no dog. Two
valued logic deems such statements invalid, halting the
entire line of thought.

9
The liar paradox: “This sentence is false.” Neither true nor false. All a mind's distinctions are made only to improve the effectiveness of action. Marking beliefs as true or false is no different. It cannot correspond to any real distinction in ultimate reality. No mind can reach a final classification of the paradoxical statement. So what? Neither can you ask how five sounds.

Even if a robust but never learning mind pursued the paradox's endless circle of thought, the mind would at least minimize the work's priority. A variation: “This sentence is true.” Useless either way. These have no magical, mystical implications. They only expose the limits of a mind's conveniences. The challenge is to make minds that can see and skip a paradox, or at least not die by one.

10
Reason advances by making the apparently unequal equal.
Newton exemplified the drive to universal truth by making
everything from the Earth to the heavens, in part, equal.
Don't misapply this desire to morality, to finding
commandments for all towards all. A set of minds bothers
with an ethic precisely because it is for an us, alone.

11
Minds invent logic and impose it on reality. Beyond trivial
raw sensation, nothing is comparable. Before there is an A
for A=A a mind must take X≠Y, forget differences, then
make X=A and Y=A. Logic can't be learned from
experience because such learning presumes it.



Constellations

1
Many small minds vs. few large minds vs. one monolithic
mind. Hierarchy vs. peers. Exhaust the fundamental
combinations of the basic kinds of minds. Deduce the limits
and weaknesses of each. Natural examples: multicellular
organisms, cooperating organisms.

2
Why multiple minds? Any act at any time could kill,
corrupt or stall a mind. A new gene, or the expression of a
gene in a new environment, can kill its cell. Calling a
procedure may terminate a computer process.
The simplest division is into a worker and a supervisor. A
basic supervisor mind's only end would be to the existence
of one or more worker minds. By minimizing its acts, it
minimizes, though never eliminates, its risk. Only the
worker dares to perform any acts useful in themselves.
Workers may also have a reciprocal end towards a
supervisor.
When a worker mind inevitably dies, the supervisor
remakes it. But distance a worker from its supervisor. A
dying worker's damage might kill them both. Ideally, a supervisor would notice when a worker, though not dead, has become ineffective, then kill the worker, if possible, and start another.
How can a supervisor best judge that a worker still works?

The simplest method: the worker mind could have an end
towards periodically pinging its supervisor. This at least
proves to the supervisor that the worker is active.
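
The ping method as a sketch in LISP. The timeout, the clock, and the start-worker hook are assumed choices, not prescriptions:

(defparameter *timeout* 10)                  ; seconds of silence tolerated
(defparameter *last-ping* (make-hash-table)) ; worker id -> last ping time

(defun ping (worker-id)
  "Called by a worker; its end is to ping periodically."
  (setf (gethash worker-id *last-ping*) (get-universal-time)))

(defun supervise (worker-ids start-worker)
  "Remake any worker that has fallen silent too long."
  (dolist (id worker-ids)
    (when (> (- (get-universal-time) (gethash id *last-ping* 0))
             *timeout*)
      (funcall start-worker id)              ; remake the dead worker
      (ping id))))                           ; restart its clock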

Combinations of minds: supervisor with workers, peers, supervised peers.

3
Making minds out of minds. Example: in the human body,
neurons made by genome minds combine to form a neuron
mind. What use? A stronger mind medium.

4
The members of a species are a case of multiple minds
exploiting redundancy to safely learn. We are drafted
explorers of a genetic frontier.

5
Combining minds in different mediums. Example: animals.
Fast brains serving durable genes. The brain learns quickly,
within the body's lifetime, while the genes resist dangerous
fads.
Arrange minds of different mediums and ranks to isolate
risks. Use minds in different mediums to overcome
limitations of the original medium. Use different classes of
minds in the same medium to isolate the risks of certain
powers. Example: maybe not equip a supervisor to learn.
Or use a mind incapable of doubt, at least of intrinsic
beliefs, to control an otherwise free mind. Or a selfish mind
using a selfless one.

6
Combining two minds doesn't entail embedding one in the
other or physically wiring one to the other, though both
have advantages. Minds form combinations by their beliefs
in each other. Each mind can start and talk in any way. A
mind can create other minds or it can recognize those
already made. A mind can control another through language
or by injecting beliefs into the controlled mind.

7
How does a mind recognize its kin? What if a supervisor
mistook a foreign mind for a worker? This may, at bottom,
be only a case of the problem of verifying any act's
conditions. How to expose a deceptive mind? The true
challenge of morality: not knowing what is good or bad,
but knowing who to be good or bad to. Not what is good?
but first who is us?
What use? You can judge the value of a distinction by
seeing the cost of a mind that fails to make it. A mind is kin
if it has similar beliefs, above all, similar ends. Imagine a
redundant set of identical cooperating minds. Kindred
minds cooperate by sharing lesser beliefs. If a mind aided,
shared the subgoals of, a foreign mind, it would waste
resources. Wise then to design breeds of mind to
distinguish kindred from foreign minds. A devious engineer
might make minds that fool others.

Why do I discuss morality? Because it answers a double question that applies to more than human minds: how are redundant minds caused to cooperate with each other while defending themselves from competing minds?

8
All the minds in a combination could have the same
original beliefs. Each would know how to be a supervisor, a
worker or any other role, inferring which from beliefs about
its environment, including its body. Example: cell
differentiation where a cell infers its role, its ends, from hormone gradients.
A mind could use the same inferences to classify other
minds. The dominance signals from a mind playing the
supervisor role would cause another, possibly identical
mind, to play a worker. In this way, an initially orderless set
of minds can naturally sort itself into a hierarchy.
Why make minds that differentiate themselves?
Economics: you need only make one kind of mind with one
set of beliefs.

9
A collective of minds. If you coordinate a set of things—
people, computers—what benefit?
1. Redundancy: If one breaks down, the rest may
survive.
2. Speed: Ten slow machines are often cheaper
than one that is ten times as fast.
3. Range: With multiple locations, you and other
resources are more available, and remain at
least partly accessible when connections break.
You can far more easily add these advantages to a set of
things that are already intelligent. Simply give every mind a
goal to discover kindred minds and to share beliefs with
them. This assumes that the minds have similar senses and
means. Example:
1. Tell one of the set of minds that you want to
know of new e-mail.

2. One mind checks and sees an e-mail. It shares
belief in the e-mail, and the goal to you
knowing of it, with every other mind.
3. Every mind will use every means it has to notify
you.
4. Once one mind succeeds, it shares that fact with
every other mind.
5. If the mind that checks your e-mail fails, other
minds take up the goal.
How to represent this behavior without complicating a
mind's engine? How to add this simply in terms of common
means and goals? Imagine an act that may effect anything
by sharing the goal with a similar mind. Expressing this as
another means uses a mind's action sequencing to
discourage multiple minds from pursuing the same goal at
once. To borrow a psychology term, this would be
intrinsically motivated altruism, cheaper than coercion and
bribery.
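
Steps 2 through 5 as a sketch in LISP; send, kin, and the goal format are assumed primitives, not the book's notation:

(defun share (belief kin send)
  "Give BELIEF to every kindred mind."
  (dolist (other kin)
    (funcall send other belief)))

(defun on-new-mail (mail kin send)
  "Step 2: share the e-mail and the goal to the user knowing of it."
  (share (list 'goal 'user-knows mail) kin send))

(defun on-goal (goal means kin send)
  "Steps 3 to 5: try every means; share the first success with all."
  (dolist (act means)
    (when (funcall act goal)
      (share (list 'reached goal) kin send)
      (return))))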

10
Defection. When might a mind, intended as a part of a
collective, demand individual freedom from the group?
Does the impulse have any use? Is it only pathological?
Does it occur only in evolved minds? Are designed minds
safe from it?

11

A set of cooperating minds can distribute not only physical
acts but mental acts, thoughts, inferences. Religion and TV
do this on the grandest scale, where one man gives ends,
values, or at least the widest means, to billions of minds.
An engineer notices the usual risks of centralization:
1. Insecure. The center is, by definition, small and
so more easily hijacked.

2. Unscalable.
3. In the case of a society, having to draw central
authorities from the same degenerate population
that needed a crude central solution.

These minds to which thinking itself is delegated have disproportionate power because those beliefs imply the acts
of every other mind. Of course, centralization has its
charms, when you can get away with it. It isn't very
reasonable to expect everyone to be a philosopher, to work
out every deep thought for itself.

12

Conflict: the competition between minds with ends believed mutually exclusive. How to reduce conflict?
Separate minds by distances proportional to the difference
in needs. Murder and war as infinite separation. Politics as
competition between subgroups via the minds that
specialize in acts of thought.

13

In a set of differentiated minds, a clever mind's intelligence
isn't entirely for its own benefit. An explanation for when a
clever mind doesn't seem so clever, at least when measured
against its self-interest.

14
Mass mind control. If a mind maker fielded a set of
redundant minds, how best to put them to new uses? Give
kindred minds a desire to imitate each other. They could
even imitate mere images. Status: how a mind chooses
which kin to imitate most. Even without this behavior, a
learning mind with a sense of self would tend to imitate
self-similar things because what works for others may work
for it. What if a competing mind caused another mind to
misidentify its kin?

15
Intelligence as a condition of morality. Only a powerful
mind can expose its vanities, distinguish what it wants to be
so from what is, and see the ocean of mixed effects flowing
from every act. Good intentions are worthless in a weak
mind. A strong mind without purpose is worthless too, but
we can more easily add purpose than intelligence.

16
Could a mind relate to another mind not as a means to the
first but for the second's own exclusive ends? What could
naturally motivate this?

17
How one mind can control another:
1. Inject. Easier in a made, transparent mind.
Natural minds tend to resist it. Inject specific
goals or a goal to know your goals.
2. Convince: cause a mind to infer your goal.
3. Fool: mislead the mind's senses so it falsely
infers your goal.
4. Coerce: cause a mind to believe that if your
goal isn't reached, you will spoil one of its
reached goals.
Which classes of mind are susceptible to each method?

18
Aesthetics. What use to what classes of mind? What is it?
How do I behave differently if a thing, intelligent or
mindless, is beautiful or ugly? The strongest sense serves
mating: sexual (more than one parent) reproduction.
Culture is sex for mutable minds. As a learning mind can
semi-randomly experiment with ideas to discover new
useful inferences, gene minds can randomly allow mutation
to discover new ideas, then mate to share them. Little good
for the individual genetic minds but progress for the
engineer that made them.
You can expect a mind to be selective about the ideas it
gains for its self or children. Less discriminating minds would leave few and short-lived descendants. Beauty and
ugly are this selectivity. It would of course be subjective. A
set of minds made for one use would have different needs
and so different ideas about beauty. Mixing minds of
different uses may produce children useless for both.
Beauty as the belief that a possible mate mind has useful
beliefs. Ugly as useless, wrong, unharmonious. Eugenics as
a set of minds promoting a common sense of beauty.

Mediums

Minds can be made from more than neurons.

Dimensions
Any class of mind in the taxonomy can be made in any
medium. These dimensions only describe the technical
challenges of making those minds.

Form: How beliefs are held.

Parallelism: Are thoughts—inferences made, actions chosen, beliefs forgotten—one at a time? A sequential medium can simulate parallelism but such a simulation is not trivial.

Speed: You can miss a mind if it exceeds your patience. A mind's acts are better fast than perfect.

Size: Physically. Small as a gene or as large as a planet.

Transparency: Can we read the physical form of a mind's beliefs.

Volatility: How easily are beliefs held in the medium lost or corrupted.

Mutability: Can a mind gain beliefs in its lifetime.

Brains
Form: Web of neurons.

Size: Small.

Volatility: Low. Requires oxygen.

Parallelism: Yes, though consciousness feels sequential.

Feels: Possibly.

Transparency: Partially to present science. No backups.

Speed: Slow per neuron. Faster than genes. Why animals have brains and plants don't.

Mutable: Yes.

Fewer conditions of self-replication than machine minds.

Regulated Genes
Genes as minds. A complex genome mind in every cell?
Evidence: genes found controlling negative feedback loops.
If DNA sequences code for proteins, sequences are a
genome's beliefs. The nuclear beliefs are immutable but
conditionally expressed. Chemically arbitrary hormones as
words spoken between cellular minds.

Form: Web of regulated (conditionally expressed) genes. Suppression of a gene's expression as disbelief.

Speed: Very slow. Messages must pass into and out of a cell's nucleus.

Feels: Unlikely.

Parallelism: Yes.

Volatility: Very low.

Transparency: Increasingly to present science. Clones as backups.

Mutable: Can a chromosome add to itself within a cell's lifetime? Typically beliefs are at most suppressed and only gained in children by mutation or sex.

A regulated gene forming a mind loop.

The gene-protein language is circular enough—genes are both regulated by proteins and code for proteins—that it could form the complex loops of deeply intelligent behavior.
Then what new useful knowledge can a biologist deduce
using my framework? What rank of mind?
What sort of learning might evolution constitute? It may
not qualify as learning because the progress only occurs in
apparently separate bodies. Can a single cell learn within its
lifetime? Might a cell's genes learn through a lasting
change in regulation?
Bodily organs as minds. The endocrine system regulates
body temperature through the hypothalamus. An animal's
body may be a network of hundreds of independent minds,
some minds with redundant ends but different means. A
learning mind, feeling pain from a high body temperature,
can use its learned knowledge—turn on the air conditioner
—to reach the same end as the hypothalamus.

Software
Form: Machine language instruction sequence.

Speed: High.

Feels: Unlikely.

Volatility: High. A power loss empties memory. Most storage lacks redundancy.

Parallelism: Very low.

Transparency: High, except with opaque learning algorithms.

Mutable: Optional.
Why define minds in a computer made of transistors? Why
not make minds of metal or wood? Because words are
easier to change than gears. A computer is a machine that
given the right chain of words, the right story, can mimic
any other machine. The higher classes of mind are hidden
in a maze. We have too little time and too many problems
to find and test minds in anything but the most tractable
material. First write a working mind in a computer's formal
language, then translate it to other mediums.

(loop                            ; an endless negative feedback loop
  (unless (goal-reached?)        ; sense: is the goal state reached?
    (act)))                      ; means: act to approach the goal

The simplest mind in the LISP language.

Other
What other kinds of matter loop well into minds?
Mechanical minds. Quantum minds? Discarnate minds?

Hierarchy
Minds making minds making minds. In every cell, a gene
mind. But slow, so they made neuron minds. But too selfish
and costly, so they made metal and electronic minds. What
might they make? One trend: higher speed. The source of
the first natural minds? Evolution, chaos.

The evolution of mind mediums.

Engines of Thought

1
The charm of mind-making: a mind is its best co-inventor.
The mind maker's end is to define the end-reaching mind.
The project's end is its means. Every step speeds the next.
Even a mind too weak to devise truly original means can at
least apply itself to intelligently telling you of its needs and
faults.

2
Philosophy lost its rank but what else can we call this mind
work? If this work is so important, why isn't philosophy?
Philosophy finds little praise because it only negates. It
frames reality, never yields a single fact, but exposes the
absurd beliefs that human minds are riddled with.
Even once you value philosophy, it remains unnaturally
hard to apply, to see reality without the simplifications that
your mind always makes. You can only afford to correct
your worst errors.
You don't need science—relativity, quantum mechanics,
any kind—to notice that most things are dependent and
uncertain, that reality is strange. Physics has little more to
tell philosophy than any subject studied deeply. Second-rate
philosophers became victims of physics envy. Not that
physics isn't a paragon of thought, but imitate the method,
not the content.



3
Epistemology vs. metaphysics. Metaphysics: how things
exist apart from our knowledge of them. Epistemology:
how we know of things. How to untangle the two branches
of philosophy once you realize how much a mind
contributes to reality?

4
A universal measure of intelligence. How to reduce the
power of any mind—thermostat, cell, robot, human, super-
human—to one number? A meter or gram of intelligence.
We can't systematically improve minds without an
objective means of comparison. Dimensions:

– Strength, robustness: the chance that a mind will break or fall into hopeless loops.

– Speed: sensations, inferences or acts per second.

– Size: maximum number of distinctions held in the mind itself.

– Class in my taxonomy.

– Power and number of means and senses.


How to measure and combine each? How to interpret some
measures as forms of others? Obviously we can't reuse
human intelligence tests. Most non-humans can't take them.
We need tests for minds without eyes and words. Tests are
biased! Yes, towards what's useful to the tester.



Speed and size are partly interchangeable. A large slow
mind can fake speed by remembering and reusing past
results. A fast small mind can quickly reproduce most
ideas.
There's more to measure than only sequence-learning
power. Not all minds even learn in that sense. Any useful
practical mind holds more.

Measurement of a bare engine vs. a mind with good beliefs, senses and means. A strong mind can remake much for itself. A weak mind tends to be stuck with what it is given.
Is each mind incomparable because each acts in its own
unique universe and with personal ends? Are there really
general purpose minds or is there inevitably some bias?
The use of a mind is to be general, to solve new problems,
so we must have a method.

Measure the power of cooperating sets of minds. A thousand 100 IQ humans, if focused, can't reach the depth of one man with a 150 IQ. They couldn't even match a 130 IQ. Then subtract the costs of reducing conflict within large groups and of discouraging defection.

5
General vs. specific intelligence: how well can a mind, its engine and beliefs, apply to other problems. In the extreme case, a thermostat, the mind has no use elsewhere. Minimal intelligence connects its sense and means.
Imagine a chess player who knows none of the game's rules but has an immense invisible book with the best move for every board. If intelligence is action selection in a fixed world, then he is perfectly intelligent in chess. Maybe intelligence is better measured as quality of action divided by the size of the mind, its beliefs, in bits. Intelligence as compression. Are there such play books in our blood?
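
That measure as a one-line sketch in LISP; action-quality and mind-bits are assumed measures, not defined here:

(defun intelligence (mind world)
  "Quality of action per bit of belief: intelligence as compression."
  (/ (action-quality mind world)   ; how well the mind's acts score
     (mind-bits mind)))            ; the size of its beliefs, in bits

By this measure the play-book player, whatever the quality of his moves, holds so many bits that his intelligence approaches zero.
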
Aren't there many kinds of intelligence? Emotional
intelligence? You can define your words however you like.
If I can have emotional intelligence, why not tennis
intelligence? Intelligence becomes a synonym for good. Is
the distinction that a part of the brain is genetically devoted
to recognizing emotion? So is a part for breathing.

The idea of multiple intelligences is of little real use. The unconscious could, and certainly does, contain structures
devoted to certain tasks. But the value of intelligence is in
its generality. We had little time to evolve car-driving
intelligence.
Mind is quality not quantity. It improves by simplifying
itself. Simple means more general, less to break. Multiple
intelligences is second-rate engineering.

6
Why be stupid? What use are weak minds?

– Fewer moving parts. Less to break.

– Lower cost. Intelligence has a price: food, blood, CPU time. In some ways, intelligence is free: a far-sighted health-conscious genius eats less than a fat fool.

– Content pursuing modest, easily reached ends.

– Cruder to control but less able to resist.

– A mind desiring novelty shouldn't see the patterns in its work when it lacks the power to solve those patterns.

Weak minds rarely see their low intelligence because they tend to have proportionally weak goals. A mind maker
would tend to fit a mind to its goals, not wasting resources
on excess intelligence for a mind that can reach its ends
well enough.

7
A brief history of mind engineering.

8000 BC - 1600 AD
Agriculture, animal husbandry and society. Only plants,
animals and other men had minds capable and worthy of
control. Plants are food making machines that use their
genome minds to build and preserve themselves. Animal
genome minds are bred for more food and for brains to
train. Human minds are controlled by the same methods
plus culture, religion and status.

1600 - 1700
The first recorded man-made minds: windmill governors
that control the separation of millstones.
1788

Watt adds a speed governor to the steam engine. Simple man-made minds become widespread.

1837
Babbage designs the first computer: the Analytical Engine.
Without computers, non-trivial minds are nearly impossible
to design.

Imagine an alternative history where engineers, instead of making minds from nothing, intensely bred organic minds and embedded them in their machines.
But what do we care for the past forms and progress of
mind? The goal is ideal mind, not how mind first occurred,
or the other accidents of Nature and history.

8
An edge case: moderator vs. governor. What is the difference? Is a moderator also a mind? You can imagine a moderator as having the goal to a speed less than x. Negative, but still a goal. Some moderators don't qualify as minds. A centrifugal brake falls below the threshold because its means of sensation and action are the same. The brake pads both judge and enforce the speed limit. A governor, even when disconnected from the throttle valve, still judges the speed and can be connected to an entirely different means of control.

Everything is continuous. So every idea is a simplification. So we shouldn't be surprised that our categories fit poorly at the edges. Yet we still want well crafted words.



9

Human minds seem so mutable, seem to so lack true identity, because minds are made to cause change,
including changes to themselves. What kinds of limits can
be placed on a thing that can change most or any part of
itself? How can a deeply self-changing thing be made
predictable? Higher limits remain—speed, size, the
controllers served—with different costs to change each.

10

How to make a reliable system out of unreliable parts? Is perfect reliability possible? The engineering goal isn't to
make a perfect machine by eliminating unreliable parts but
by building a machine that perfectly handles the unreliable,
which, by degrees, is everything.

11
An AI enthusiast: A self-improving man-made “seed” mind
would soon raise its intelligence above our imagination.
Isn't a human a self-improving seed mind? Why aren't
humans super intelligent? Because our brains remain
opaque to us and we have, likely for the best, only indirect
means of rewiring them.
A learning machine mind would at first know even less
about itself than we do. Its unconscious foundation of code
would be as opaque to it as our brains are to us. Would it at
least understand itself more easily? Becoming transparent
by degrees. How well could we help it? And how much intelligence would it gain before we can't help it further?

Would you want the mind to change itself so easily? A desire to preserve itself, if only as a means, would
discourage the mind from reckless self-experiments. The
inevitable deaths would only annoy you while waiting for
the mind to resurrect itself from its own backups.
Will man-made minds be better? Or are our weaknesses
inevitable features of any intelligence?

12
A taxonomy of error. Find intelligence by defining what it
is not: dead, slow, moving in hopeless circles, believing
fixed ideas, generalizing too much or too little.

13

Environmental determinism: a mind is caused by its environment? Yet a human mind's environment is almost
entirely an effect of the same or similar minds: house,
school, library, vitamins, media. What natural
environment? The point of a mind is to change its
environment, to relentlessly defy it. Passive means
mindless.

14
The brain values the stomach. To the stomach, the brain is a
leech.



A high mind sees farther into the conditions of survival.
Weak minds, being conformists to ease imitation, might
vilify the higher mind's work. How can a high mind handle
the weak when they tend to have numbers and cunning on
their side?
– End or isolate the weak minds. Easy since weak
minds are often superfluous.

– Status. Example: humans were trained, not convinced, to respect theoretical physicists.

– Extrinsic motivation: money.

– Coercion: the whip.

The most abstract work possible, seeing mind itself, is the darkest work to weak minds. Their poor imagination blinds them.

15
With mind better and broadly defined, can we redeem—give useful meaning to—words that became vague superstitious nonsense?

Destiny: a hidden mind guiding another mind. What real sense might it have in my mind framework? How do we
use the word? He was destined to die. Destiny must mean
something more than blind cause and effect. To be destined
probably means that even if you or another act to avoid the
destined end, that fate will counteract. So destiny is another
mind. But we don't say a student was destined to graduate
because a teacher helped him. To be destiny, the other mind must have super-human power, possibly discarnate. Do
such minds exist? Of what class and with what ends, senses
and means? In what medium and how to find and prove
them? Might a set of humans form a higher mind as
neurons form a brain? Not that my mind seems to take
interest in single neurons. Or is destiny run by your
unconscious minds? How could we control destiny? Or at
least better see and work with it.
Soul: your mind, or minds, or only their beliefs or a subset
of them, discarnate and not dying with the body. How can
anything lack a body? Either way, a soul seems
uneconomical. I see no reason to imagine that our mind
would be remotely controlled elsewhere, or duplicated at
death. Complicated by the fact that we have both genetic
and brain minds, with the latter meant to serve the former,
but holding much of what we think ought to be in a soul.

16
A future book: The Autobiography of a Machine Mind. The
first book written by a non-human mind—truly a mind, not
a blind story-contriving computer program. Theists have
the competing claim that discarnate non-human minds
wrote the books of religions.

17

A mind maker's prejudices:

– Mind deserves more interest than means.

– What lasts is better than what changes.

– Universal outranks local. (Ghostly eternal abstractions preferred to vivid but passing facts.)

– Better truth than lies.

– Truth is undemocratic.

When do the mindless outdo the intelligent? The value of blind reflex: fast, simple, cheap. Mind often advances by how much work it can push below mind.

18
To what end? Let's not be so modest, not trade one problem
for the next. Ignore the worthless scribbles of weak minds:
regulations, surveys, newspapers. Study and expand the
laws of mind. Not the laws of men but the laws of God.



Definitions

Definition: A form translated into language.

Form: A pattern of sensation. Physical objects are more complex than a single form.

Belief: A form held and followed by a mind.

Universe: To a mind, its belief set.

Goal: A belief that defines a state in which a mind doesn't act.

End: A goal not conditioned on another goal.

Mind: An amplified negative feedback loop. Largely synonymous with agent.

Intelligence: How long a mind takes to reach ends, if at all.

Philosophy: The study of mind.

Means: The part of a mind that connects to its world and causes effects. Corresponds to the term effector in agent theory.

Act: A means applied.

Sense: A source of belief and disbelief.

Engine: The process that applies a mind's means and absorbs sensations.

Artificial intelligence: A mind made by human minds out of mindless parts. In contrast, a child is made from intelligent parts: cells.

Thing, object, entity: A physically continuous and exclusive
class of forms.
Opaque vs. transparent mind: Mind m is transparent to
mind n if n can accurately infer m's beliefs from their
physical form.
Kindred minds: Minds designed to redundantly cooperate
towards the same ends, likely having at least the same or
similar initial beliefs.
Feeling: In philosophy, sentience, phenomenal reality,
qualia.
Injected belief: A belief not gained through a mind's self-
made senses. A mind's original beliefs must be injected.
Not entirely synonymous with a priori or innate because a
belief can be injected after a mind's engine starts.
Medium: What the mind is made of: DNA, neurons, metal,
code.
Good: A state that satisfies a mind's goals or increases its
power to reach future goals.
Moral: The good of a set of redundant minds.

Slave: A mind whose ends are not its own but those of another mind.

Mutable: A mind that naturally gains and loses forms in its lifetime. Any mind must believe or suppress fixed forms.

Beauty: In one sense, a mind's measure of another mind's use as a source of good ideas—a new means, a condition of an act—or the loss of bad ideas. Beliefs are shared between immutable minds through sex.

Philosopher-engineer: A philosopher with the methods of
an engineer. He tests philosophical ideas in analogous
combinations of mindless parts.

Mind maker: Anything, mindless or not, that causes separate minds. Examples: evolution, AI builder. In the strict sense, this excludes parents and teachers because they build on preexisting minds.

Thanks

To Donna Roberts for editing.

To Christiane Walch for improving the cover.

Visit MindMaking.info

1. Download the latest version.
2. Listen to the audio book.
3. Discuss the book's ideas.
4. Talk to the author.
5. Buy a hard copy.

Patrick Roberts is a philosopher-engineer and the maker of the Cor machine mind.

PatrickRoberts.ca
