PATRICK ROBERTS
CorMind.net
Version 20100112
These laws of mind are all that can be true for everyone,
everywhere, forever. They can't be false because they made
truth. Always true, you need never doubt them. In your
mind, they are the last possessions you can lose. By
comparison, all other knowledge is trivia.
Guide 8
Knots
word. The whole system would remain as useful. But by
taking mind, I gain the foggy associations of a word with
history while adding a precise sense.
the form of water against gravity but it has no means to
sense its height or even that there's any water, much less a
means to change the flow.
3
The goal: to know how not to know, to find the crowning
knowledge that raises us above memorizing trivia.
Don't strive to understand the countless passive things:
physics, chemistry, mindless machines and tools. A thing is
useful so far as you don't have to know how it works. Study
minds, the kind of things that can make and mind the
passive things for you.
We want autonomous, self-governed objects, not fragile
reflexive machines that fail daily and demand unblinking
attention. To complete great work, recruit, make and
improve other minds, within you and without, natural and
artificial.
4
Cars don't have legs. Planes don't flap their wings. Studying
the brain may be a long path to intelligence. In either case,
we must understand more kinds of mind than human.
All life, from single cells to plants and animals, must have
one or more minds that build and preserve the organism's
self. Mind making is the Frankenstein project, making
minds from mindless parts, life from lifeless parts.
ease making other tools: die casts, programming languages
(compilers are programs that write programs), minds.
1. Maker: build or improve mindless things.
7
Higher than a universal law of gravity, the universal laws of
the law-making mind. Then why do non-physicists read A
Brief History of Time or any popular cosmology book? For
false sensations of ultimate knowledge.
Religions once combined philosophy and cosmology.
Science now rules the second but the association between
the two persists unconsciously. The models of cosmology
have no use to casual readers and no lasting value because
scientists will soon find different metaphors. At best you
have the academic aesthetic pleasure of studying brilliant
solutions.
application. Simply that if an experience can be, at some
level, well modeled as the behavior of one or more minds,
in my sense, then my system of mind gives the deepest
framework. The usual use of theory: ideas to be applied by
specialists in ways unimagined by the theoretician.
10
Machine mind m outperformed mind n in a
statistically significant set of tests. n's assumptions
about reality are wrong. m's are right and are
complete because m contains no minds but those we
made.
A new profession: philosopher-engineer. The methods of an
engineer with the goals of philosophy.
11
controlling others than in controlling the immense
remainder of the Universe. Easier to mislead existing minds
than to shape a mind from mindless matter.
12
13
Methods outrank truths. Few truths are really such. A
means to truth will outlast most of its results. In other
words, a method for uncovering useful facts will remain
useful longer than any one of the facts it made.
You advance the study of mind not by asking whether any
idea is true or real or any other concern with objective
being, but by asking what use is such a distinction to a
mind?
A possible Copernican shift, testing the movement of mind
to the center of our systems. A shift away from matter, but
not merely back to ideas. Instead, to mind, the cause and
use of ideas, matter, and things—tools then interpreted as
merely a mind's oldest and greatest inventions.
A return to idealism in the philosophical sense that you
cannot disentangle reality from mind, but now with a
purely material definition of that mind.
14
interprets natural language—English, French—largely with
its unconscious, often using terms so opaque that no
conscious definition is practical.
15
16
No problem is ever really solved. A particular problem
occurs in the past, and, like anything, will never entirely
reoccur. We invent a class of problems, a class that would
have contained the original problem. Then we plan
prevention of any problem in that class. The solution's
value is proportional to the number of problems likely
prevented, divided by the cost of the solution.
ourselves with crude solutions to urgent problems.
Not that any problem is fully known. A mind can only build
a model good enough that its time is then better spent
improving another model or a model of models.
17
For those who somehow think that increasing computer speeds will ease defining mind: more speed is no more likely to find mind, much less one in a form clear to us, than manufacturing more typewriters and breeding more monkeys is to speed the reproduction of Hamlet.
18
Mind is like breathing: physically complicated,
superficially trivial, and too important to trust you with.
Awareness of a thing is often a symptom of disease. Mind
works well so far as you're oblivious to it. Hence my
tortured language, having to tease or drag into sight
naturally buried assumptions.
19
20
8. Minds die, so have more than one.
21
The goal isn't only to know ideal mind but the common
possibilities of working minds.
22
science. Philosophy became a con-man: make grand
claims, pocket funding, have little to show, then change its
name to conceal an again ruined reputation.
23
The Taxonomy of Minds
What use? With an existing mind, see its kind then deduce
the mind's powers, limits and means of control. When
building or improving a mind, the taxonomy exposes
prerequisites of common kinds of intelligence.
The lowest class of mind: one end, one binary (having two
states: true or false, 1 or 0) sense, one means with only one
intended effect. Example: a thermostat.
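This lowest class can be sketched in a few lines of Python. The class and its names are mine, purely illustrative; the point is the inventory: one fixed end, one binary sense, one means with one intended effect.

```python
class Thermostat:
    """Lowest class of mind: one end, one binary sense, one means.

    The single end is a fixed set point; the single binary sense
    reports whether the room is below it; the single means is the
    furnace switch, with one intended effect (heat).
    """
    def __init__(self, set_point):
        self.set_point = set_point   # the one fixed end
        self.furnace_on = False      # the one means

    def sense(self, temperature):
        # The one binary sense: too cold (True) or not (False).
        return temperature < self.set_point

    def act(self, temperature):
        # Act only to restore equilibrium; idle when the end is met.
        self.furnace_on = self.sense(temperature)
        return self.furnace_on

t = Thermostat(set_point=20.0)
t.act(18.0)   # below the set point: the furnace switches on
t.act(21.0)   # end met: equilibrium, the furnace switches off
```

Note what the class cannot do: it cannot gain or lose an idea, learn an association, or vary its one act. Everything above this floor adds a capability.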
Classes of Mind
The parenthetical letters offer shorthands for basic mind
classes. Example: L-mind for a mind that learns from
experience.
Mind
Mutate (M)
Can a mind naturally gain and lose new ideas in its
lifetime? Counter-example: a thermostat can only believe
one fixed idea of temperature.
Doubt (D)
Learn (L)
Does the mind's behavior change from experience? Does it
learn associations? (LA-mind)
Feel (F)
We imagine that an equally intelligent machine would lack
our conscious experience. Examples: yourself, presumably
other humans.
Communicate (C)
3
Essential beliefs: ends and means. In higher minds,
inferences. Each must fit into itself and every other, free to
form the endless loops and spirals of deeply intelligent
behavior.
A case of the value of recursion, of applying powerful ideas
to themselves:
– Find the patterns. Find the patterns in the
patterns. Learn how to find patterns. Learn how
to learn. Find the patterns of learning. Search
for patterns. Search for search methods.
– Write code that writes code.
– Judge the value of your values.
– Define the process of defining processes.
– A replicator that can replicate itself.
– Invent a machine that invents.
– The evolution of evolvability.
Example forms.
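The second item, code that writes code, has a minimal sketch in Python; the generated `add_5` function is an arbitrary illustration of the pattern, not a claim about any particular system.

```python
# A program that writes a program: build source text for an adder,
# compile and run it, then call the function it defined.
def make_adder_source(n):
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(5)
namespace = {}
exec(source, namespace)        # the written code is itself run
add_5 = namespace["add_5"]
add_5(2)                       # → 7
```

A compiler is the same recursion at scale: a program whose output is programs, including, eventually, its own next version.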
4
An idle proof:
1. Assumed fact: Every thing is unique. (Not
necessarily true for a trivial mind or for any
mind at low levels of sensation.)
6
A sequential mind.
7
8
Time. As a practical matter, a mind designed by a human
must presume time, but a simple mind's beliefs needn't
include the distinctions: past, present, future. It can live in
an eternal now.
9
Sensing senses. A mind can have beliefs it finds to be
conditions of sets of beliefs. You can see without eyes. A
philosophical distinction: a posteriori, knowledge gained
through the senses, vs. a priori, knowledge gained without
the senses. Not that knowledge is really known to be received through the senses; we merely find it useful to
imagine so. The idea of a sense—eye, ear—is an invention.
How does a mind discover a sense beyond what, if
anything, the mind did to make it?
10
How to prevent, detect and resolve the inevitable
corruption of beliefs? DNA examples: mutation, copy
errors.
11
Beliefs ranked:
1. Ends, inferences to ends. A mind can remake
anything but the knowledge of what it should
make.
2. Means to means, then more specific means.
3. Mere facts.
12
Unawareness of x vs. the untruth of x. Not-hot does not
equal cold. A mind can merely be not hot because it feels
no temperature. The exclusivity of hot and cold is a learned
negative suppressing inference between the two.
13
Inferences to inferences. An inference from x to y causes a
mind to believe y when it believes x. Inferences from and to
beliefs may be beliefs themselves. This preferable form
gives a mind some self-awareness of its thoughts.
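A sketch of this preferable form, under an assumed representation: inferences are stored in the same set as plain beliefs, as tuples, so the mind can sense, doubt, or forget them like any other belief.

```python
# Beliefs live in one set; inferences are beliefs of the form
# ("infer", x, y). Because inferences sit among the other beliefs,
# the mind has some awareness of its own thoughts: it can inspect
# or drop an inference exactly as it would a plain fact.
def settle(beliefs):
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(beliefs):
            if isinstance(b, tuple) and b[0] == "infer":
                _, x, y = b
                if x in beliefs and y not in beliefs:
                    beliefs.add(y)   # believing x, it comes to believe y
                    changed = True
    return beliefs

settle({"wet", ("infer", "wet", "rain"), ("infer", "rain", "clouds")})
# the settled mind also believes "rain" and "clouds"
```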
14
15
Exhaust the basic permutations of the special forms.
operator.
16
17
Forgetfulness. Most finite minds sense more beliefs than
they can hold. How to choose what to keep? One method: a
long-term bias that holds beliefs with consistent but sparse
use and a short-term bias that gives recent beliefs a chance.
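That method can be sketched concretely; the recency window and use-count scoring below are illustrative assumptions, not a claim about real minds.

```python
def prune(beliefs, capacity, now):
    """Choose what to keep: a long-term bias holds beliefs with
    consistent use; a short-term bias protects recent beliefs so
    new ones get a chance to prove themselves.

    beliefs: dict name -> (uses, last_tick).
    """
    recent = {k for k, (_, last) in beliefs.items() if now - last <= 2}
    by_use = sorted((k for k in beliefs if k not in recent),
                    key=lambda k: beliefs[k][0], reverse=True)
    keep = recent | set(by_use[:max(0, capacity - len(recent))])
    return {k: v for k, v in beliefs.items() if k in keep}

# "b" survives on recency alone; "a" on consistent long-term use;
# "c", neither recent nor much used, is forgotten.
prune({"a": (10, 0), "b": (1, 9), "c": (2, 1)}, capacity=2, now=10)
```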
18
Bandwidth. How many sensations can a mind handle per
second? How deeply? Can it reliably ignore more?
19
A pawn: Your x isn't real because it has fuzzy edges. The
speaker, parroting a malicious script, presumes a level of
philosophical strictness applied only to ideas that she
dislikes. What next? Does dawn disprove the day? For a
mind in a non-trivial universe, almost everything has
unclear limits. The real question: how best to draw lines
and when to redraw?
20
Why doesn't a mind just delude itself into thinking that it
reached its ends? Whence a desire for truth towards
yourself? Especially when at bottom a non-trivial mind
constantly presents simplifications, lies. Why not accept a
faulty sensor or false beliefs? How to organize such
resistance? A partial answer: redundant senses.
21
On average, x is y. What use is this hedge: on average?
Every statement about things in the world has exceptions.
Even the exceptions have exceptions. Every statement is an
obligatory average, a claim that the exceptions aren't worth
keeping in mind.
22
categories, abstractions exist to simplify each unique
mind's predictions.
I recall criticisms of the Dewey decimal system's
Eurocentric allocation of the higher numbers. A mind's
starting point: nothing exists, everything is the same. If a
difference seems to change the effects of our acts, then we
admit a distinction, a pair of categories and assign
sensations to them. We shouldn't be surprised by the
unrealism of politically compelled assumptions. Nor
surprised by the impracticality of any idea inferred from
them. If one took them seriously, the only correct
categorization of reality is x categories for x many infinite
objects, without hierarchy.
23
more closely. Find an overlooked distinction that allows a
solution. Much of mind is the twin work of adding
distinctions, seeing again how two things differed, or
removing distinctions, seeing how two things are the same
for your use.
24
25
To What End
3
Equilibrium: the state in which a mind needn't act, when all
its ends are met. The material reflection of a goal is
whatever thing, when changed, causes a mind's equilibrium
to change. Example: a thermostat's coil.
The endless and unintended effects of an act.
4
Plan: a directed web of goals conditioned on super-goals.
Plans can emerge from inferences from goals to conditions
of those goals, or from the particular preconditions of a
means.
5
A mind must rank and re-rank goals. The parallelism of a
human brain spares it from the scheduling done by a
software mind running on a relatively serial computer. But
a brain, finite, like any mind, must still sort the allocation
of neurons, blood, oxygen, energy.
6
A mind never knows every detail of what it wants. I don't
know the official specification of a twenty dollar bill, but I
do have a good idea of an acceptable one, and while a more
precise idea may expand its acceptance, the chance is so
slim that it isn't worth the trouble. Thoughts, details,
distinctions are never free.
7
Consider a mindless object: a shower fixture. In this case,
man-made, but that makes no difference. Mindless, brittle
and annoying—it routinely burns and freezes you. Can a
mind improve it? A common fixture knows its state as
maximum flows of hot and cold water. Add valve actuators
and senses of temperature and pressure. The fixture's mind
continuously adjusts the low-level water flows to ensure the
desired temperature and total pressure.
Better, access to your subjective sensation of temperature.
Best, if it knew that the true use was to be clean, assuming
it had any better means to cause that. A mind is useful to you in so far as the level of its ends nears yours: the higher, the better.
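The common fixture's inner loop can be sketched as a crude proportional controller. The supply temperatures, mixing model, and gain below are assumptions for illustration, not a real fixture's physics.

```python
def adjust(hot, cold, target_temp, target_flow,
           hot_temp=60.0, cold_temp=15.0, gain=0.01):
    """One step of the fixture mind: nudge the two valve openings
    (0..1) toward the desired mixed temperature and total flow."""
    flow = hot + cold
    # Assumed mixing model: temperature is the flow-weighted average.
    temp = (hot * hot_temp + cold * cold_temp) / flow if flow else cold_temp
    hot += gain * ((target_temp - temp) + (target_flow - flow))
    cold += gain * ((temp - target_temp) + (target_flow - flow))
    return min(1.0, max(0.0, hot)), min(1.0, max(0.0, cold))

# Iterating the step settles on the mix that meets both ends.
h, c = 0.5, 0.5
for _ in range(200):
    h, c = adjust(h, c, target_temp=38.0, target_flow=1.0)
# h settles near 0.51 and c near 0.49: 38 degrees at full flow
```

The point of the sketch: the mind continuously re-acts on its low-level means (the valves) in service of a higher-level end (the felt temperature), which is itself only a proxy for the user's true end.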
8
When a goal is sensed, a mind should initially suppress
pursuit of certain other goals on the assumption that when
the new goal is reached the others become academic. If the
mind can't promptly reach the new goal, it should begin
interleaving pursuit of the other goals. Example: x and y are
believed mutually exclusive, so a goal to y suppresses a
goal to an inference from x to z.
Personal vs. social values. Some values are good for a mind
alone. Social values help a set of redundant minds
cooperate to reach their ends. You can expect a society to
impose both kinds of values on its members. A danger:
what if a society's discovery and promotion of values was
hijacked?
Means to Ends
3
Separate belief in a means from each belief about its
possible effects. The association between an act and an
effect is never certain. A mind must be free to individually
sense and forget such beliefs. A believed effect of an act, as
a prediction, can be unconditioned or can be inferred from
other beliefs.
Strictly, certain pairs of act and effect are improbable to us,
thanks to our high minds, long experience and complex
models. At bottom, a mind cannot make subtle distinctions
about probability. It must allow any binding of effect to act
to be made or doubted.
world might, in some cases, be very reliable, yet we still
routinely find errors in them. Make the simple honest base
assumption that we can never know for certain where the
errors in our models are, so anything can cause anything,
with whatever links between.
5
Means to senses. The senses that independently detect the
effects of acts are ideally the effects of earlier acts. A sense
should have no special status to a mind. A sense is merely a
believed condition of the perception of the expected effect
of an act. Senses are only effects of means. A mind could
easily miss that a thing is a sense, that it is a condition of
belief in a class of beliefs. Senses are discovered. What use
to interpret a thing as a sense?
6
You never untie the same knot twice. Ignoring minds with
few and discrete senses. All acts are creative because every
moment is unique. Even selecting what to ignore, to make
moments compare and the past apply, is a creative choice.
Thoughts are not distinguished from matter by lack of
effect. Thoughts, like every thing, are part of the endless
web of effects, whether you see the links or not. Thinking
alone affects neurons, oxygen consumption, or CPU heat.
Mere arithmetic can destroy a poorly designed computer by
overheating it.
8
Mental acts. Not all acts are physical. Adding two numbers
in your head is an act.
9
The most general means are the most interesting. Example:
one way to effect anything is to ask another mind to do it.
This means is tricky to sequence. A mind that resorts to it
too early will annoy you with requests. A mind that tries it
too late wastes time trying to succeed alone.
10
teleological (goal-oriented) programming recognizes this
philosophical truth.
Imperative:
‒ You want to be at 1, 1.
‒ You have a sense of position, now 0, 0.
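The contrast can be sketched under stated assumptions: where an imperative program would spell out the route step by step, the teleological version states only the end and a sense, and keeps acting until the two agree. The function and its parameters are mine.

```python
def teleological_move(goal, sense, step):
    """Act until the sensed position matches the end.

    goal: the desired (x, y); sense(): the current position;
    step(dx, dy): the one means. No route is ever stored, only
    the end, a sense, and a means believed to close the gap.
    """
    while sense() != goal:
        x, y = sense()
        gx, gy = goal
        step((gx > x) - (gx < x), (gy > y) - (gy < y))

pos = [0, 0]                       # a sense of position, now 0, 0

def step(dx, dy):                  # the one means: move one unit
    pos[0] += dx
    pos[1] += dy

teleological_move(goal=(1, 1), sense=lambda: tuple(pos), step=step)
# pos is now [1, 1]: the mind stops when the sense matches the end
```

Because only the end is fixed, the same mind keeps working if it starts elsewhere, or if something displaces it mid-journey; the imperative script would not.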
11
12
At any time any act could precede any effect. No mind can
ever exhaust the possibilities for action. It can only
discover, plan, and rank them.
13
14
15
Throttling. No act's intended effect is immediate. How long
should a mind wait? Not as long as it takes. It may take
forever. Yet any finite length is more intelligent than forever or zero.
For one act, the gap is a millisecond, for another, an hour,
in both cases, never precisely the same again. How to learn
the lengths?
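One way to learn the lengths, as the question asks: track, per act, a running estimate of the observed gap between act and effect, and wait a small multiple of it before retrying. The moving-average scheme and constants are illustrative assumptions.

```python
class Throttle:
    """Learn how long to wait for each act's intended effect.

    Keeps an exponential moving average of observed act-to-effect
    gaps; the suggested wait is a multiple of the estimate, so it
    is always finite (never 'as long as it takes') and never zero.
    """
    def __init__(self, initial=1.0, alpha=0.2, slack=2.0):
        self.initial, self.alpha, self.slack = initial, alpha, slack
        self.estimate = {}                 # act -> averaged gap

    def observe(self, act, gap):
        old = self.estimate.get(act, self.initial)
        self.estimate[act] = (1 - self.alpha) * old + self.alpha * gap

    def wait_for(self, act):
        return self.slack * self.estimate.get(act, self.initial)

th = Throttle()
for gap in (0.9, 1.1, 1.0):        # seconds between act and its effect
    th.observe("open_valve", gap)
th.wait_for("open_valve")          # a learned, finite wait near 2 seconds
```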
16
Act sequencing. How to know what act to try next? One
rule: try specific means before general ones. When to start
trying another means? Or the same means in a different
way? How best to interleave retrying multiple means? Not
all minds have such a sequencer. Example: a thermostat has
only one means to its end.
Any answer, no matter how often wrong, is an immense
leap over a system so simple that it doesn't need an answer,
that never retries or varies an act.
17
18
19
Don't die. Any act can have any effect at any time. This
includes the mind's death. A mind inevitably uses means in
conditions unseen by its maker. An active mind must
barricade itself from the dangerous side-effects of every
act. Examples of software errors that are fatal if uncaught
or unstopped: an exception that would end the thread, a
process that exhausts the CPU or memory.
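A minimal Python sketch of one such barricade, with helper names of my own: commit each act on a worker thread with a deadline, and let neither an exception nor a hang escape into the mind's main loop.

```python
import threading

def barricaded(act, timeout=1.0):
    """Commit an act so that neither an exception nor a hang can
    kill the mind: run it on a worker thread with a deadline and
    report failure instead of dying. Returns (ok, result_or_error)."""
    box = {}
    def run():
        try:
            box["result"] = act()
        except Exception as e:               # would otherwise end the thread
            box["error"] = e
    worker = threading.Thread(target=run, daemon=True)
    worker.start()
    worker.join(timeout)
    if worker.is_alive() or "error" in box:  # hang or fatal exception
        return False, box.get("error")
    return True, box.get("result")

barricaded(lambda: 1 + 1)   # (True, 2)
barricaded(lambda: 1 / 0)   # (False, ZeroDivisionError(...))
```

A daemon thread left running after a timeout is itself a leak; a fuller barricade would also cap the resources an abandoned act can consume.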
20
Parallelism as a prerequisite of high speed. To keep pace
with other minds, a mind must commit acts, including acts
of thought, in parallel, not one after the other.
21
Telling of errors in means as a means. Preconditions: senses
of errors and language to discuss them.
Scenario: a mindless machine fails.
22
Act to know the world vs. act to change the world. How
and why to distinguish between an act to cause a sense,
which will change the acting mind's beliefs, and an act to
change the world, which also changes a mind's beliefs?
23
Parts of an act:
– Expected effect.
– Means: how a mind initiates effects.
I: Means as Ends
1
Selfishness: one of Nature's greatest inventions. What use
to anyone is a thing without the will and means to preserve
itself as a means? All life is in the class of self-making
minds.
2
How to well define self? What use is this distinction? When
would a mind's act be more effective because it rightly
judged a thing as its self or not? What is the simplest case
of the use of a self? Is a self only relative to other minds?
Like most of our words, our idea of self is mostly
unconscious and opaque. We are looking for a useful
conscious precise definition that fits our intuitions.
Human selves are a tricky place to start. Our bodies hold so
many minds. First consider a thermostat. What might its
self be? Not the furnace. Start with giving it ends to its
present means, its present body: coil, furnace control. To
best know the self study a purely selfish mind.
Or consider yourself. If your brain irreversibly stops
thinking, do you exist? No. If your brain runs, but your
beliefs, in your cells or in your brain, change, especially
your ends, are you likely to be the same person?
Your essential self is your mind, or your ultimate minds,
their beliefs and the beliefs of the minds made to serve
them. Everything else is an expression of them or the
means to them.
3
Levels of self-knowledge.
1. Selfless: A mind without even the senses, or
means to such, to perceive anything you would
consider its self.
2. Self-aware: Lacks goals to its self-preservation,
as an end or a means, because it wasn't born with, or didn't learn, that those sensations are conditions of any act.
3. Self-interested: Its self is merely a means to
extrinsic ends. Even with this intention, a mind
may lack the means to preserve itself.
4. Selfish: It has no ends but to its means. It exists
purely for its own sake.
4
Any mind that learns associations tends to become selfish,
at least as a means. It will discover that things we would
consider to be parts of its body, though originally extrinsic
to it, are conditions of most of its acts. Then a desire to
duplicate this self-thing, to reproduce, for redundancy and
power.
5
Death. In a strict sense, you die in every moment because
you constantly change. In a more practical sense, there is a
useful pattern that persists for a worthy time then quickly
halts. In either sense, your brain mind changes, but it
remains a means to deep fixed minds.
What use? To replace a mind not worth repairing or
upgrading in place. To end a malignant mind, untied by
accident or malice from its designer's ends. Why not let the
mind live? Competition for finite resources: food, mates,
CPU time.
Death defined: not the loss of feeling but the unwilling loss
of beliefs. Especially the beliefs that the mind can't easily
reproduce from those remaining. Or an irreversible end to
the mind process that pursues its ends.
Every mind's body is falling apart. True death isn't the loss
of the body but of its design in a form that a still active
mind can and will read. Oddly, genome minds can reliably
reproduce their beliefs through cell division, but genes gave
brain minds no direct means to duplicate their own beliefs.
6
Suicide. Why make a mind want its own death? Kindred
minds can see and kill a malignant mind. A self-sensing
mind, with only the same mechanism, would see its own
corruption and use whatever health remains to kill itself.
Mind defense. Every feature exposes a mind to new threats.
Example: autogenocide. What if a mind was falsely led to
believe that it is malignant? That the preservation of its self
or type was evil or otherwise painful? Efficient delegation
of destruction to the victim. Could this only occur as an
attack by competing minds? Or is there a use to a mind
maker?
8
In a purely selfish mind, an eye is a means to an eye. An
eye helps its body to protect and feed the eye. But the rest
of the body might find a better means than the eye. The
genome mind would not immediately recognize that the eye
is superfluous. The end to it may only wither. In an
animal, an eye isn't a means, it is part of being what it is.
How mutable is a purely selfish mind? Would it change?
Willingly?
9
Senses of self and their use:
1. The beliefs of a mind and the minds that serve
it.
2. Awareness of the proximate physical conditions
of most acts. What an observer would consider
a mind's self ought to be. The weakest sense of
self since the mind would be perfectly happy to
have its entire body replaced with any other
that's as effective. This sense also blurs,
radiating from the center of mind though
strongest at its body.
3. In a stronger current-means-as-ends sense. For
you, your body is not merely a means to
extrinsic ends. It largely is a self-perpetuating
end.
4. To detect a mind's own corruption or deviation
from one's inferred role.
5. Abstracted to identify kindred minds.
6. To improve imitation by favoring minds most
similar to yourself.
Identifying kindred minds and detecting a mind's own
corruption could share the same method. The two differ
only in which mind they're applied to.
Note that most of these senses of self can occur without the
mind living in a society of other minds.
Many of the problems with seeing your self or other selves
are just cases of the common problem of clearly seeing
anything.
Knowing all these causes, a mind might plot to change its
sense of self.
10
When a believer in embodiment says that a mind must have
a body to become intelligent, he should mean that a mind
must be aware of the proximate physical conditions of its
mind. Or in the sense of intelligent to us, it must be in our
world and sense and act on our shared world in terms
analogous to ours.
11
Reproduction: A mind causing another mind with similar
beliefs. A mind needn't know in scientific detail how to
reproduce itself. Merely that certain acts tend to precede
sight of a similar mind. In a trivial case, a machine mind
could reproduce itself by asking a human to buy a computer
and copy its code to the new machine. The only distinction
is that its reproduction has more dependencies—humans,
computer, factories—than ours—air, water, other humans.
12
With self better defined, what can selfish and altruistic
mean? At bottom, every act is selfish, made to serve only
the mind's ends or emotions. Example: pleasure in generous
feelings. You could dismiss this sense as trivial, but some
humans do seem to misunderstand it.
Change
1
Kinds of learning in the broadest sense of ways a mind may
change from experience.
Remember: Simply the retention of any belief beyond an
instant. A common computer language tends to lose every
computation's result because the language has no way to
know the conditions of a result's truth. Pure functional
languages cheat by contriving that all results are
unconditional.
Mutate: Sense new beliefs, not limited to the belief and
suppression of a few fixed ideas.
Habituate: A measure of one idea. Examples: overlook the
useless flux of a belief, track the general value of a means.
Associate: A measure between ideas. Associations may run
both ways, not distinguishing cause from effect.
Are other kinds of learning possible?
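One of the kinds above, associate, can be made concrete: a measure between ideas grown from co-occurrence, with no direction imposed between cause and effect. The class and names are mine.

```python
from collections import Counter

class Associator:
    """Learn a measure between ideas from co-occurring sensations.

    The measure is symmetric: it does not distinguish cause from
    effect, only how often two ideas have arrived together.
    """
    def __init__(self):
        self.counts = Counter()

    def sense(self, ideas):
        ideas = sorted(set(ideas))
        for i, a in enumerate(ideas):
            for b in ideas[i + 1:]:
                self.counts[(a, b)] += 1   # one co-occurrence observed

    def strength(self, a, b):
        # Symmetric lookup: strength(a, b) == strength(b, a).
        return self.counts[tuple(sorted((a, b)))]

m = Associator()
m.sense({"smoke", "fire"})
m.sense({"smoke", "fire", "heat"})
m.strength("fire", "smoke")   # → 2, identical in either direction
```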
2
Cause vs. effect. Empirical causes are unprovable. All a
mind can see are associations. The intended effect of an act,
one amongst infinite effects, has no status outside the
acting mind.
Empirical vs. logical causality. Empirical cause: a mind's
belief in what must exist for another thing to exist. Logical
cause: a thing's imagined parts.
Every thing has infinite causes but every mind is finite. A
mind can only afford to know the causes that are likely to
need the mind's action and that are within the mind's power.
What caused x? What precisely might we then mean by this
question? If we wanted to end x, then the answer would be
a state whose disbelief coincides with disbelief in x.
Is there an alternative to cause and effect?
3
Habitual blindness. The activity of a mind's senses could
easily exceed the mind's power to process. Pursuing every
inference from a sensation, and every inference from what's
inferred, costs a mind energy or time. In a mind simulated
sequentially, finite sensation queues habituate, losing
sensations that threaten to bury the mind.
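Such a finite sensation queue can be sketched as a bounded buffer; the drop policy below (overlook a sensation already waiting, else lose the oldest) is an illustrative assumption.

```python
from collections import deque

class SensationQueue:
    """A finite queue that habituates: when full, a repeat of a
    sensation already waiting is dropped, so a flickering sense
    cannot bury the mind."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def offer(self, sensation):
        if len(self.q) >= self.capacity and sensation in self.q:
            self.dropped += 1          # habituate: overlook the useless flux
            return False
        if len(self.q) >= self.capacity:
            self.q.popleft()           # lose the oldest to stay finite
            self.dropped += 1
        self.q.append(sensation)
        return True
```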
4
Contemporary intelligence research overrates learning.
Minds that can't learn remain immensely useful and non-
trivial to build well.
5
Why sleep? A need so large and dangerous must be
important. Empirically, sleep seems defined by the isolation
of the brain. The body is paralyzed and the senses closed
while the brain works. Without new sensations, the brain
could only process its memories—learn, model, simulate
experiments.
A human brain in sleep becomes a passive mind preparing
to better act when woken. Why can't this be done when
awake? Is this a universal limitation of mind or a technical
limitation of brain minds?
6
Present machine minds are largely either trivial, like speed
governors, incapable of learning associations, or passive,
like spam filters. An active mind that learns associations
will not merely learn to infer plain facts but to infer action
causing goals. This combination introduces interesting
problems. Example: is it valid to infer from sensations likely caused by the mind's own acts? And it reveals
the priority of strength over learning: the odd acts of a mind
that learns associations are even more dangerously
unpredictable.
7
You shouldn't generalize. What might this mean? Is this
advice well thought and well intended? What might be the
alternative to generalizing? To applying memories of past
things to those similar in the present.
A mind in all but the dullest universes must generalize to
see associations. Without ignoring details, any association
would be so specific that it could never reoccur.
A mind must not overgeneralize, missing important
exceptions. Neither can it fail to generalize, never learning,
forever repeating the same mistakes. How to know when to
do which? How to know how long to spend deciding? How
to know how long to spend deciding how to decide?
Take the most sensitive object: humans. TV and self-
interest cause humans to tell each other to judge each
person alone, as though a man is an atom, unchanging and
indivisible. Is a single man qualitatively more real than a
set of men? Isn't it a horrible prejudice to judge
individually, to presume a man's behavior based on how
another with the same name and a similar face behaved
yesterday?
If a dog bites me, can't I strike more than its fangs?
Why not judge a man again every time you meet? As
though he were a stranger. Or every minute? Wouldn't this
justly recognize the fact that a man can change at any time?
Reductio ad absurdum. A mind balances between prejudice
and judgment. Any absolute statement about what level to
prejudge at will inevitably in some cases be mistaken, be a
prejudice. Every non-trivial idea is a divisible bundle of
impressions over space, time or both.
Overlooking differences, emphasizing similarities, has its
political use but don't overlook the cost of pretending to be
stupid.
8
Theory vs. action. Only action is real. Theory is prediction
and merely improves the order of experiments, of acts.
9
Optimize vs. anticipate. Instead of laboring to improve one
method from N² to N steps, build a mind that learns to
anticipate the need for any method's results. What matter if
an algorithm and input takes an hour to finish when you
know of the need more than an hour before it occurs? A
mind that anticipates needs is the universal optimization.
10
Tabula rasa. Impossible in an opaque mind. For a blank
mind to learn, it must have senses, which presume the
knowledge to build them and the forms they impose on
their input.
11
Certain effects tend to follow certain actions under certain
conditions. Science is merely the formalization and
institutionalization of the associative learning method in the
unconscious human mind.
12
Can a parrot learn to reach ends? Is mind better imagined
as a case of learning, not learning as a case of mind?
13
14
15
16
sensation of y then x causes y. It is only tricky to divide the
two when you try to do so from passive theory.
17
Not only how best to learn, but when to learn? Learning
isn't free. Animal brains limit some kinds of learning to
childhood. Ideally, learn when to learn. Or make learning a
means.
18
Methods of making and unmaking associations:
cooccurrence, theory, action, pain.
19
How a mind can gain experiences to learn from.
20
it is learning, as in the case of physical training, or even
know that it is learning at all.
Only a mind that learns, in the sense of changing from
experience, can be taught. A teacher's method is determined
by the kinds of change that the student mind can learn. To
train, the teacher must have a means to cause the acts that
he wants to reinforce: injection, the power and desire to
imitate.
The perfect teacher. Its ends are your ends. For it, teaching
you is merely a means to your common ends. Teaching
teleologically defined:
For one mind to learn from another, both minds must sense
and act on levels of the subject that are at least analogous to
each other. Example: for a mind in a computer to learn
from you, it must see the display and sense use of the
keyboard and mouse.
Order of subjects. First teach a mind to master its
immediate environment, the most urgent conditions of its
survival. For a mind in a computer, don't start its schooling
with chess, stock picking, or speech recognition, but with freeing drive space and terminating runaway processes.
How easily can a mind master its environment? A computer
mind's handicap: from nothing it must master a complex
product of evolution and culture.
Strings
2
From the total uncertainty of action and effect, it follows
that no mind can do what will please it but only what it
believes will.
Mind m believes in goal g. m causes mind n to believe in g
as an end. Coercion? Or m causes n to falsely believe g is a
waypoint to one of n's authentic unmet ends. Neither case
involves torture or threats.
5
A mind's first ends are best gained through injection:
formalized and directly imposed by another mind,
evolution or chance.
6
I don't want to have to know what I want, much less in
terms of the alien technicalities of another mind's senses.
Your stomach doesn't know how to cook, but it does know
how to pain a mind that knows.
In an animal's nervous system, pain is not the over-
activation of a sense but has dedicated sensory cells. Yet
many misunderstand pain as only the feeling of an unmet
end. Odd that when in pain, I often have no idea how to
relieve it. Does the feeling in my stomach mean I'm nervous
or hungry? Genetically coded reflexes do handle simple
cases. Skin burning, withdraw limb. Beyond these, my
brain mind must find a cause and solution. Pain is in part an
unreached end but one distinguished by its slight or zero
definition and the initial lack of a known means to it.
A model of pain.
one simple end towards the absence of pain. Any concrete
thought—hand in a fire—becomes painful only through a
learned association with pain.
the complex sensations of emotions. Humans partly
distinguish pains by the coinciding feelings of the reflexes
evolved to end the pain without a higher mind's help. What
makes fear fearful to your mind is the simple sensation of
pain plus the accidental feeling of your unconscious
systems preparing you to run or fight. After a few fearful
experiences, a mind generalizes the common sensations
into an idea associated with the word fear. Emotions, the
common human forms of pain and joy, are simply the
results of the human mind clustering different kinds of pain
and their associations.
may predict a lifetime of immutably more pain than
pleasure but this reflects only a particular mind's mismatch
of goals and power.
10
How mutable are a mind's emotions? Any pleasure or pain
is originally caused by a drive outside the mind. A learning
mind can anticipate and imagine emotions, inferring
feelings almost as strong as those caused from outside.
How to indirectly change learned emotions: clear your
environment of the physical causes of an emotion, pretend
not to value something because you think you can't have it,
or realize that you thought something was important only
because the culture arbitrarily associated it with a more
authentic emotion. How can a mind change the original
feelings?
11
12
Must a mind have one true end? One for which all other
ends are really means? A true will. A mind, if opaque to
itself, can only decide by experiment: does reaching one
end satisfy or suppress another?
Belief in one end needn't entail suppression of all others.
Introspection shows that your mind's controllers didn't
arrange themselves in an exclusive hierarchy. They left
your mind to handle these often conflicting ends
simultaneously. Again, the controllers don't care. Let the
mind sort it out.
13
What if a mind formed inferences between pleasure and
pain? Even from the same original belief? A mind could
learn to associate the inevitable pains of novel acts with the
pleasure that those experiments eventually yield. In human
minds, pleasure tends to follow pain: hunger then satiety,
fear then triumph. Pleasure as a rhythm of pain.
14
What stops a mind from escaping its emotional controllers?
From contriving eternal pleasure and zero pain? Or from
simply ending all emotion? How can you prevent your
machine mind from converting to Buddhism? More
specifically, what if you wanted to ease selfishness by
ending altruistic feelings? You're discouraged by the same
altruism.
transparent mind to uproot an emotion? Every extrinsic
pain represents an interest of a mind's maker, but the maker
could be mistaken.
15
16
17
The mind is not depressed because it is ineffective. Easy to
confuse cause with effect.
Revenge: To give a harmful mind reason to stop. Altruistic
punishment, where there's no benefit to the avenger, gives
the same benefit to a set of redundant minds.
Fear: To stop dangerous action.
Hunger: To secure resources.
18
19
unconscious reactions. If it shared mine, that would ease its
ending my pain. Example: a mind infers the sensation of
cold, for it, from sight of me shivering.
20
21
22
23
Master vs. slave. Slave mind: a mind whose only ends are to
know and reach another mind's, the master's, goals.
Wanting to know my ends vs. preloaded with my ends at
the time. A thermostat is a slave because temperature is not
in its interests, except in the sense that if it fails, you will
replace it.
24
With mind defined, every human could have an artificial
slave mind, however weak. How can such a mind best
know its master's interests? You could tell it in natural
language. Cons:
– You must define your goals, to some depth, and
express them to the other mind.
– Since language is only a hint, you always risk
misunderstanding.
The best method: prediction by the slave mind. No critical
failures in speech sensing hardware, no speech recognition
errors. The slave mind could even anticipate and satisfy a
wish before it occurs to your consciousness. Months after
you gained your slave mind, you would occasionally notice
that your life runs so much more smoothly, though you
would have trouble recalling the reason why.
25
Control of non-slave minds. A mind senses, at some level,
attempts to control it or it doesn't. It may sense some effects
of your actions without inferring an attempt at control by
another mind. When controlling a mind without resorting to
injected beliefs, any goal you want the mind to believe
must pass through the mind's senses. A sense can directly
cause a goal belief. In a better organized mind, senses
merely cause belief in facts from which goals may be
inferred.
26
Freedom
A false choice, cause vs. chance, obscures the dull free will
puzzle. How can we usefully define free mind? I don't mean
free in the sense of unhindered physically—invisibility or
unaided flight. I mean freedom of ends and in the choice
between known means.
2
Chance alone is more madness than freedom. Quantum
mechanical voodoo is no better. Does chance take chances?
A small change ruins a random number generator.
Randomness may mask a complex machine. Either way, we
can never rest certain that anything is hopelessly random. A
mind resorts to counting the spread of results only when it
can't see a pattern, though another mind might.
4
because it can doubt the belief that an act will have a
certain effect under certain conditions, or that the
conditions are really so. The more you can doubt, the more
free you are. Freedom is a mind's mechanical capacity to
doubt.
This sense of freedom becomes an obvious feature of every
powerful mind when you recall that intelligence often
means simplifying, lying. Example: A belief that a causes
b. In reality, the occurrence of b following a depends on
infinite other conditions, but a mind, finite, even if it could
discover those conditions, couldn't afford to remember
them. If everything a mind believes must be a lie, then it
must be free to doubt every belief. As intelligence discards
information, making unique experiences identical for
comparison and association, freedom discards entire ideas.
7
Forgetfulness is the extreme of doubt. It accommodates a
mind's finite capacity for belief by losing not only belief in
an idea but the idea itself. Forgetting, as a severe kind of
doubt, accidentally has some of the same use. Both depend
on a way to judge the value of a belief.
ideas are uncertain. No, I don't mean this, not doubt in the
fashionable sense of dropping inconvenient ideas, now
questionable but once not, while leaving other ideas as
unquestionable as the dropped ones once were, all the while
enjoying the image of ourselves as timeless independent
thinkers.
10
Human minds happily ignore useful ideas while suffering
countless useless beliefs. Intelligent freedom is in
systematic doubt. But what is the best system? A mind
should first act with complete faith in its beliefs. At the
extreme, a mind can doubt the simplest sensations.
11
reconsider as the rules of the world seem to change. This
process of choosing what to doubt is mainly deterministic,
each belief tested and doubted in turn.
12
13
We don't want our made minds, machine or otherwise, too
skeptical. My first sense of freedom restrains the second,
not only from doubting some ideas, but from even
considering the possibility. Keep the fixed ideas few and
the leash long. The more slack, the more creative the mind.
14
15
A mind's source of chance depends on its medium. Organic
minds, with so many analog parts, naturally suffer noise. A
mind in a digital medium that minimizes noise needs a
random number generator. The best known source is a
quantum random number generator, but the point is that
any source, pseudo-random or not, far exceeds none. All
minds but the simplest need chaos.
16
A mind can reach any end by acting at random. Chance
admits that we know nothing for certain. Each belief causes
a mind to act predictably as long as it is confident. As the
mind loses faith in an idea, its behavior converges with
complete randomness. Every idea in a mind is only a
temporary bias against rolling dice.
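This last idea, belief as a temporary bias against rolling dice, can be sketched as a confidence-weighted choice. The names and the confidence scale are assumptions, not from the source.

```python
# Sketch: confidence as a bias against random action. With full faith the
# mind acts on its belief; as confidence falls, behavior approaches chance.
import random

def choose(actions, believed_best, confidence, rng=random.Random(0)):
    # With probability `confidence`, follow the belief; otherwise roll dice.
    if rng.random() < confidence:
        return believed_best
    return rng.choice(actions)

acts = ["a", "b", "c"]
assert choose(acts, "a", confidence=1.0) == "a"   # full faith: predictable
assert choose(acts, "a", confidence=0.0) in acts  # no faith: pure chance
```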
17
Weak minds foolishly believe most in their freedom. Much
of intelligence involves discovering the causes of a thing. A
weak mind fails to see the objective causes of its beliefs
and acts, and so presumes that they are caused by its
magically uncaused self.
18
The Axiom of Things
3
Humans tend to feel that things are real while types are
inferior imaginary abstractions, that type or thing is a basic
4
What is a type? A set of attributes—red, heavy, tall—that
may match no, one or many objects. Human is a type.
Human is a subtype of mammal because the attributes of
mammal are a subset of human's.
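This reading of types as attribute sets can be sketched directly, with the subtype test as a subset test. The attribute names are illustrative, not from the source.

```python
# Sketch: a type as a set of attributes; subtype test as subset test.
# Attribute names are illustrative.
mammal = frozenset({"warm-blooded", "vertebrate", "has-hair"})
human = mammal | frozenset({"bipedal", "speaks"})

def is_subtype(sub, sup):
    """sub is a subtype of sup when sup's attributes are a subset of sub's."""
    return sup <= sub

assert is_subtype(human, mammal)       # human is a subtype of mammal
assert not is_subtype(mammal, human)   # but not the other way around
```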
5
How do we know that an idea refers to a thing and not a
type? There could be an identical you elsewhere. You
would think there is only one thing in the Universe with all
those attributes, except location, but now your thing is a
type—but you don't know it. We can be certain that a type
is a type but not that a thing is a thing. Since thing is
uncertain, it mustn't be an axiom. We're left only with
types. Beliefs in things are routinely wrong and must be
revised. Beliefs in types are useless at worst.
What use are things? Things are used like types except for
the presumption that only one match can exist at a time.
What use is that assumption and where would we get it
from? If you believed x was a thing, and you believed x
was in front of you, then you can assume that x isn't
anywhere else. This exclusivity needn't be limited to space.
8
Given the rewards of belief in things, the presumption of
them unsurprisingly appears in the languages invented by
human minds. For most purposes, this simplification costs
us nothing, but when a mind wants to define a mind in such
a language, even if the language is, in principle, capable of
defining anything, the bias of the language misleads.
9
Faith in the reality of things, encouraged by our minds,
reinforced by our languages, breeds paradox. Things are a
useful presumption, but nothing beyond absolute certainties
can be at the bottom of any system that we hope to be
universal. The ultimate assumption of things distinct from
types influences natural language—English, German—
math, logic, philosophy and computer programming. A
paradox in a tool that you already know to be a useful
fiction is unpleasant but inevitable. Paradoxes in the base
are intolerable.
10
Mathematicians are fond of sets with elements, like types
and things. What of a set that contains itself? Worse, what
of the set of sets that don't contain themselves?
Mathematicians side-stepped these problems by
complicating the distinction. Deep minds must admit these
statements because they're useful. We want statements that
can refer to themselves and we want minds that can see
nonsense and see through it.
In a mind, a set would only exist so far as the mind applied
the set's type to matching forms. In the case of a
paradoxical set that a mind can't even build, instead of
11
Instead of imagining a type as a collection of existing
things, see each as an intersection of unique experiences.
Every featherless biped you saw seemed mortal, so the type
men would include mortality, though not blue eyes. A type
isn't primarily defined by its members but by the test of
membership. The test defines the type. You can know a
type's test, but you can't know all its members in the
Universe. Its members vary with changing experience.
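The claim that the test defines the type can be sketched as a predicate, with membership decided by the test rather than by an enumeration of members. The objects and attributes below are hypothetical.

```python
# Sketch: a type as its membership test, not a list of its members.
# Objects and attributes are illustrative, not from the source.
def is_man(x):
    # The intersection of experience: every featherless biped seen was
    # mortal, so the test includes mortality but not, say, eye color.
    return x.get("legs") == 2 and not x.get("feathers") and x.get("mortal")

socrates = {"legs": 2, "feathers": False, "mortal": True}
sparrow = {"legs": 2, "feathers": True, "mortal": True}
assert is_man(socrates)
assert not is_man(sparrow)
```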
12
Think of a set as a form built by matching one form to
other forms. If a mind believes in an inference from one
form to a similar form, the mind maker simply engineers it
to not follow that simple loop more than once, defending
the mind from many paradoxes.
14
Object-orientation dominates computer programming.
Classes and instances instead of types and things.
Programmers get away with this because computers
mechanize our conscious level of thought. The objects in a
software system have strong objective existences using
unique identification numbers, etc. The system fails and
misleads when the programmer must handle real
ambiguous objects.
15
Axioms are to systems as rules are to a game. I think it
unwise to afford an axiom to the idea of things when a
mind's universe may contain none or when belief in them
may be useless. Cutting an axiom, an ultimate distinction,
16
2
What is the lowest class of mind that can discover, use and
make a minimal mind? What is the simplest model of the
simplest mind? How does it differ from models of mindless
patterns?
The simplest model of mind: if mind m has a goal to x, then
you know:
1. If y, then m will (try to) cause x.
4. If y then not x.
Mind as Means 99
Specifically, in the case of a thermostat, you know that:
4. If cold then not hot.
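The thermostat reading of this model can be sketched as a minimal goal-seeking loop. The class name and threshold are illustrative assumptions.

```python
# Sketch of a thermostat as a minimal mind: one goal, one sense, one act.
# Names and the threshold are illustrative, not from the source.
class Thermostat:
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp   # the mind's single end

    def act(self, sensed_temp):
        # If cold (goal unmet), try to cause heat; otherwise do nothing.
        return "heat" if sensed_temp < self.goal_temp else "idle"

t = Thermostat(goal_temp=20)
assert t.act(15) == "heat"   # cold, so the mind acts toward its end
assert t.act(25) == "idle"   # hot, so no act: if cold then not hot
```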
What is the lowest class of mind that can discover mind for
itself, not gaining these inferences through injection or
training?
How is this model applied to one's self? Humans contain
many minds, complicating the problem. How would a
simple single mind use it towards itself?
Can an eye see an eye? Can a mind define mind when any
such definition is entirely the effect of a mind? How
legitimate is it to define minds in terms of a particular
mind's terms?
A mind that believes in things finds itself imagining minds
out of parts that the made mind must, at bottom, not believe
real. In our case, molecules, neurons, brains.
6
When primitive humans discovered mind, they called it
spirit or soul, and in their enthusiasm imagined many things
as having one: plants, rocks, volcanoes. Modern man erred
in the opposite direction. Many old books make much more
sense when you replace soul and spirit with mind.
8
Every ambitious philosopher or physicist had his one
presumption, purified into one word. He interpreted
everything in the Universe, every human word and interest,
as forms of that word and enjoyed the self-made monument
to his brilliance. Water, fire, atom, number, will, power,
gene. In my case, mind.
What's lost when you reduce all to one word? How much
gained by making the Universe thinkable? Why might mind
be as good or better? Are the others well read as only cases
of it? Mind at least has the honesty to admit that these
words are all inventions of mind, including mind itself.
Set aside the reality of other feeling minds. How does your
behavior differ if you mark a mind as feeling? How to
empirically distinguish behavior towards things believed to
10
Put simply, to see that you have a mind, you must have a
model of mind and see that it fits your self. How could a
mind discover mind? Is it easier to first see minds in things
outside your self?
B: Play it.
A's finite means ease guessing what B meant for it:
9
Meaning as a goal. Understanding as an act. Interpretations
of subexpressions as subgoals. (Subexpression: a
subsequence of the words in an expression. Some examples
from the previous sentence: the words and expression.) The
words around a word as another kind of context no
different from speaker or tone.
A mind shouldn't receive meaning from an opaque isolated
mindless parser region or subroutine. A powerful mind
opens the work of interpreting every statement, expression,
word and letter to the whole power of its intelligence and
all its means.
11
12
What does it mean to understand a statement? There is the
speaking mind's intention, which may be nonexistent or
uncertain. The listening mind may understand the speaker's
subconscious intent.
13
14
How does a mind know that another mind heard it? How
does a mind know that another, opaque mind believes what
the speaker wanted it to believe?
15
16
Kinds of statements.
Imperative: cause belief in a goal.
17
18
We're accustomed to things having a purpose, a goal, a use
as a means, a mean-ing. This habit leads us to expect that
2
Not reason vs. passion. In an emotional mind, reason can
be the means of a passion or a passion itself—pleasing
truths, painful errors.
3
Could an active mind possess reason but no emotion? Yes,
though the lack of emotion would limit the mind's use to its
controllers. Elimination of emotions? Largely an empirical
question. What goal or emotion could motivate that
project? Reason, above the obligatory red is red, is a means
to ends emotional or not. Reason alone entails no act.
4
The certainty of logic. We might think of intelligence as 1 +
1 = 2. If I have an apple and another distinct apple, then I
have two apples. This overlooks a mind's real work: how to
6
Kleene's underused logic: a ternary (three valued) logic
with unknown as the third value. The interesting rows in the
truth table:
A        B        A or B   A and B
false    unknown  unknown  false
true     unknown  true     unknown
unknown  unknown  unknown  unknown
This differs from binary logic in admitting not only a
distinction between true and false, but between known and
unknown, between belief and reality. No mystical
implications.
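A minimal sketch of Kleene's connectives, using None for the unknown value:

```python
# Sketch: Kleene's strong three-valued logic with None as "unknown".
def k_and(a, b):
    if a is False or b is False:   # false dominates and
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    if a is True or b is True:     # true dominates or
        return True
    if a is None or b is None:
        return None
    return False

# The interesting rows from the truth table above:
assert k_or(False, None) is None and k_and(False, None) is False
assert k_or(True, None) is True and k_and(True, None) is None
assert k_or(None, None) is None and k_and(None, None) is None
```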
7
The law of the excluded middle. A range of beliefs
becomes subject to the law when a web of negative
inferences forms between them. Most minds have many
sources of belief—senses, reason, memory, language—and
each may favor a different exclusive belief.
When one belief wins, the mind should not forget the
presently beaten beliefs but only suppress them. If the
winning belief loses its support, the mind can choose a new
winner from the original competing beliefs. If the mind
kept only the original winner, it would later be left
believing nothing.
10
Reason advances by making the apparently unequal equal.
Newton exemplified the drive to universal truth by making
everything from the Earth to the heavens, in part, equal.
Don't misapply this desire to morality, to finding
commandments for all towards all. A set of minds bothers
with an ethic precisely because it is for an us, alone.
11
Minds invent logic and impose it on reality. Beyond trivial
raw sensation, nothing is comparable. Before there is an A
for A=A a mind must take X≠Y, forget differences, then
make X=A and Y=A. Logic can't be learned from
experience because such learning presumes it.
1
Many small minds vs. few large minds vs. one monolithic
mind. Hierarchy vs. peers. Exhaust the fundamental
combinations of the basic kinds of minds. Deduce the limits
and weaknesses of each. Natural examples: multicellular
organisms, cooperating organisms.
2
Why multiple minds? Any act at any time could kill,
corrupt or stall a mind. A new gene, or the expression of a
gene in a new environment, can kill its cell. Calling a
procedure may terminate a computer process.
The simplest division is into a worker and a supervisor. A
basic supervisor mind's only end would be to the existence
of one or more worker minds. By minimizing its acts, it
minimizes, though never eliminates, its risk. Only the
worker dares to perform any acts useful in themselves.
Workers may also have a reciprocal end towards a
supervisor.
When a worker mind inevitably dies, the supervisor
remakes it. But distance a worker from its supervisor. A
dying worker's damage might kill them both. Ideally, a
supervisor would notice when a worker, though not dead,
has become ineffective, then kill the worker, if possible,
and start another.
How can a supervisor best judge that a worker still works?
Constellations 122
The simplest method: the worker mind could have an end
towards periodically pinging its supervisor. This at least
proves to the supervisor that the worker is active.
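The pinging scheme might be sketched as follows; the timeout and names are assumptions, not from the source.

```python
# Sketch: a worker periodically reports in, and the supervisor treats any
# worker whose last ping is too old as dead, to be killed and remade.
import time

class Supervisor:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_ping = {}          # worker id -> time of last ping

    def ping(self, worker_id):
        self.last_ping[worker_id] = time.monotonic()

    def dead_workers(self, now=None):
        # Workers silent for longer than the timeout are presumed dead.
        now = now if now is not None else time.monotonic()
        return [w for w, t in self.last_ping.items()
                if now - t > self.timeout]

s = Supervisor(timeout=5.0)
s.ping("worker-1")
assert s.dead_workers() == []                              # just pinged
assert s.dead_workers(time.monotonic() + 10) == ["worker-1"]
```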
3
Making minds out of minds. Example: in the human body,
neurons made by genome minds combine to form a neuron
mind. What use? A stronger mind medium.
4
The members of a species are a case of multiple minds
exploiting redundancy to safely learn. We are drafted
explorers of a genetic frontier.
5
Combining minds in different mediums. Example: animals.
Fast brains serving durable genes. The brain learns quickly,
within the body's lifetime, while the genes resist dangerous
fads.
Arrange minds of different mediums and ranks to isolate
risks. Use minds in different mediums to overcome
limitations of the original medium. Use different classes of
minds in the same medium to isolate the risks of certain
powers. Example: maybe not equip a supervisor to learn.
Or use a mind incapable of doubt, at least of intrinsic
beliefs, to control an otherwise free mind. Or a selfish mind
using a selfless one.
6
Combining two minds doesn't entail embedding one in the
other or physically wiring one to the other, though both
have advantages. Minds form combinations by their beliefs
in each other. Each mind can start and talk in any way. A
mind can create other minds or it can recognize those
already made. A mind can control another through language
or by injecting beliefs into the controlled mind.
7
How does a mind recognize its kin? What if a supervisor
mistook a foreign mind for a worker? This may, at bottom,
be only a case of the problem of verifying any act's
conditions. How to expose a deceptive mind? The true
challenge of morality: not knowing what is good or bad,
but knowing who to be good or bad to. Not what is good?
but first who is us?
What use? You can judge the value of a distinction by
seeing the cost of a mind that fails to make it. A mind is kin
if it has similar beliefs, above all, similar ends. Imagine a
redundant set of identical cooperating minds. Kindred
minds cooperate by sharing lesser beliefs. If a mind aided,
shared the subgoals of, a foreign mind, it would waste
resources. Wise then to design breeds of mind to
distinguish kindred from foreign minds. A devious engineer
might make minds that fool others.
8
All the minds in a combination could have the same
original beliefs. Each would know how to be a supervisor, a
worker or any other role, inferring which from beliefs about
its environment, including its body. Example: cell
differentiation where a cell infers its role, its ends, from
hormone gradients.
A mind could use the same inferences to classify other
minds. The dominance signals from a mind playing the
supervisor role would cause another, possibly identical
mind, to play a worker. In this way, an initially orderless set
of minds can naturally sort itself into a hierarchy.
Why make minds that differentiate themselves?
Economics: you need only make one kind of mind with one
set of beliefs.
9
A collective of minds. If you coordinate a set of things—
people, computers—what benefit?
1. Redundancy: If one breaks down, the rest may
survive.
2. Speed: Ten slow machines are often cheaper
than one that is ten times as fast.
3. Range: With multiple locations, you and other
resources are more available, and remain at
least partly accessible when connections break.
You can far more easily add these advantages to a set of
things that are already intelligent. Simply give every mind a
goal to discover kindred minds and to share beliefs with
them. This assumes that the minds have similar senses and
means. Example:
1. Tell one of the set of minds that you want to
know of new e-mail.
2. One mind checks and sees an e-mail. It shares
belief in the e-mail, and the goal to you
knowing of it, with every other mind.
3. Every mind will use every means it has to notify
you.
4. Once one mind succeeds, it shares that fact with
every other mind.
5. If the mind that checks your e-mail fails, other
minds take up the goal.
How to represent this behavior without complicating a
mind's engine? How to add this simply in terms of common
means and goals? Imagine an act that may effect anything
by sharing the goal with a similar mind. Expressing this as
another means uses a mind's action sequencing to
discourage multiple minds from pursuing the same goal at
once. To borrow a psychology term, this would be
intrinsically motivated altruism, cheaper than coercion and
bribery.
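The e-mail example can be sketched as goal sharing among kindred minds: whichever mind has the means acts, then shares the success so the others stop. All names here are illustrative.

```python
# Sketch: kindred minds share a goal; whichever mind can act does, then
# shares the result so the others drop the goal. Names are illustrative.
class Mind:
    def __init__(self, name, means):
        self.name = name
        self.means = means           # acts this mind can perform
        self.kin = []                # kindred minds with the same ends
        self.done = set()            # goals known to be reached

    def share_goal(self, goal):
        for m in [self] + self.kin:
            if goal in m.done:
                return None          # already reached by a kindred mind
        if goal in self.means:
            self.announce(goal)
            return self.name         # this mind reached the goal itself
        for m in self.kin:           # otherwise pass the goal on
            result = m.share_goal(goal)
            if result:
                return result
        return None

    def announce(self, goal):
        for m in [self] + self.kin:
            m.done.add(goal)         # every kindred mind learns of success

a = Mind("a", means=set())
b = Mind("b", means={"notify-user"})
a.kin, b.kin = [b], [a]
assert a.share_goal("notify-user") == "b"   # b had the means, so b acted
assert b.share_goal("notify-user") is None  # already done; no duplicate work
```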
10
Defection. When might a mind, intended as a part of a
collective, demand individual freedom from the group?
Does the impulse have any use? Is it only pathological?
Does it occur only in evolved minds? Are designed minds
safe from it?
11
A set of cooperating minds can distribute not only physical
acts but mental acts, thoughts, inferences. Religion and TV
do this on the grandest scale, where one man gives ends,
values, or at least the widest means, to billions of minds.
An engineer notices the usual risks of centralization:
1. Insecure. The center is, by definition, small and
so more easily hijacked.
2. Unscalable.
3. In the case of a society, having to draw central
authorities from the same degenerate population
that needed a crude central solution.
12
13
In a set of differentiated minds, a clever mind's intelligence
isn't entirely for its own benefit. An explanation for when a
clever mind doesn't seem so clever, at least when measured
against its self-interest.
14
Mass mind control. If a mind maker fielded a set of
redundant minds, how best to put them to new uses? Give
kindred minds a desire to imitate each other. They could
even imitate mere images. Status: how a mind chooses
which kin to imitate most. Even without this behavior, a
learning mind with a sense of self would tend to imitate
self-similar things because what works for others may work
for it. What if a competing mind caused another mind to
misidentify its kin?
15
Intelligence as a condition of morality. Only a powerful
mind can expose its vanities, distinguish what it wants to be
so from what is, and see the ocean of mixed effects flowing
from every act. Good intentions are worthless in a weak
mind. A strong mind without purpose is worthless too, but
we can more easily add purpose than intelligence.
16
Could a mind relate to another mind not as a means to the
first but for the second's own exclusive ends? What could
naturally motivate this?
17
How one mind can control another:
1. Inject. Easier in a made transparent mind.
Natural minds tend to resist it. Inject specific
goals or a goal to know your goals.
2. Convince: cause a mind to infer your goal.
3. Fool: mislead the mind's senses so it falsely
infers your goal.
4. Coerce: cause a mind to believe that if your
goal isn't reached, you will spoil one of its
reached goals.
Which classes of mind are susceptible to each method?
18
Aesthetics. What use to what classes of mind? What is it?
How do I behave differently if a thing, intelligent or
mindless, is beautiful or ugly? The strongest sense serves
mating: sexual (more than one parent) reproduction.
Culture is sex for mutable minds. As a learning mind can
semi-randomly experiment with ideas to discover new
useful inferences, gene minds can randomly allow mutation
to discover new ideas, then mate to share them. Little good
for the individual genetic minds but progress for the
engineer that made them.
You can expect a mind to be selective about the ideas it
gains for its self or children. Less discriminating minds
would leave few and short-lived descendants. Beauty and
ugliness are this selectivity. It would of course be subjective. A
set of minds made for one use would have different needs
and so different ideas about beauty. Mixing minds of
different uses may produce children useless for both.
Beauty as the belief that a possible mate mind has useful
beliefs. Ugly as useless, wrong, unharmonious. Eugenics as
a set of minds promoting a common sense of beauty.
Mediums
Dimensions
Any class of mind in the taxonomy can be made in any
medium. These dimensions only describe the technical
challenges of making those minds.
Brains
Form: Web of neurons.
Size: Small.
Feels: Possibly.
Transparency: Partially to present science. No backups.
Mutable: Yes.
Fewer conditions of self-replication than machine minds.
Regulated Genes
Genes as minds. A complex genome mind in every cell?
Evidence: genes found controlling negative feedback loops.
If DNA sequences code for proteins, sequences are a
genome's beliefs. The nuclear beliefs are immutable but
conditionally expressed. Chemically arbitrary hormones as
words spoken between cellular minds.
Feels: Unlikely.
Parallelism: Yes.
Transparency: Increasingly to present science. Clones as
backups.
Mutable: Can a chromosome add to itself within a cell's
lifetime? Typically beliefs are at most suppressed and only
gained in children by mutation or sex.
could form the complex loops of deeply intelligent
behavior.
Then what new useful knowledge can a biologist deduce
using my framework? What rank of mind?
What sort of learning might evolution constitute? It may
not qualify as learning because the progress only occurs in
apparently separate bodies. Can a single cell learn within its
lifetime? Might a cell's genes learn through a lasting
change in regulation?
Bodily organs as minds. The endocrine system regulates
body temperature through the hypothalamus. An animal's
body may be a network of hundreds of independent minds,
some minds with redundant ends but different means. A
learning mind, feeling pain from a high body temperature,
can use its learned knowledge—turn on the air conditioner
—to reach the same end as the hypothalamus.
Software
Form: Machine language instruction sequence.
Speed: High.
Feels: Unlikely.
Mutable: Optional.
Why define minds in a computer made of transistors? Why
not make minds of metal or wood? Because words are
easier to change than gears. A computer is a machine that
given the right chain of words, the right story, can mimic
any other machine. The higher classes of mind are hidden
in a maze. We have too little time and too many problems
to find and test minds in anything but the most tractable
material. First write a working mind in a computer's formal
language, then translate it to other mediums.
(while true
(if (not (goal-reached?))
(act)))
Other
What other kinds of matter loop well into minds?
Mechanical minds. Quantum minds? Discarnate minds?
Hierarchy
Minds making minds making minds. In every cell, a gene
mind. But slow, so they made neuron minds. But too selfish
and costly, so they made metal and electronic minds. What
might they make? One trend: higher speed. The source of
the first natural minds? Evolution, chaos.
The evolution of mind mediums.
Engines of Thought
1
The charm of mind-making: a mind is its best co-inventor.
The mind maker's end is to define the end-reaching mind.
The project's end is its means. Every step speeds the next.
Even a mind too weak to devise truly original means can at
least apply itself to intelligently telling you of its needs and
faults.
2
Philosophy lost its rank but what else can we call this mind
work? If this work is so important, why isn't philosophy?
Philosophy finds little praise because it only negates. It
frames reality, never yields a single fact, but exposes the
absurd beliefs that human minds are riddled with.
Even once you value philosophy, it remains unnaturally
hard to apply, to see reality without the simplifications that
your mind always makes. You can only afford to correct
your worst errors.
You don't need science—relativity, quantum mechanics,
any kind—to notice that most things are dependent and
uncertain, that reality is strange. Physics has little more to
tell philosophy than any subject studied deeply. Second-rate
philosophers became victims of physics envy. Not that
physics isn't a paragon of thought, but imitate the method,
not the content.
4
A universal measure of intelligence. How to reduce the
power of any mind—thermostat, cell, robot, human, super-
human—to one number? A meter or gram of intelligence.
We can't systematically improve minds without an
objective means of comparison. Dimensions:
5
General vs. specific intelligence: how well can a mind, its
engine and beliefs, apply to other problems? In the extreme
case of a thermostat, the mind has no use elsewhere. Minimal
intelligence connects its sense and means.
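The thermostat's whole mind fits in a line. A minimal sketch in Python, where the function name, the setpoint, and the toy room model are illustrative assumptions, not from the text:

```python
# The thermostat as minimal mind: one sense (temperature), one
# means (heat on or off), and knowledge useless for any other
# problem. The leaky-room model below is an illustrative assumption.

def thermostat(temperature, setpoint):
    """The entire intelligence: connect the sense to the means."""
    return temperature < setpoint  # True means: turn the heat on

# Usage: apply the rule tick by tick to a toy room.
temp = 15.0
for _ in range(20):
    if thermostat(temp, setpoint=20.0):
        temp += 1.0   # the heater warms the room
    temp -= 0.5       # the room leaks heat outside
# temp now hovers near the 20.0 setpoint
```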
Imagine a chess player who knows none of the game's rules
but has an immense invisible book with the best move for
7
A brief history of mind engineering.
8000 BC - 1600 AD
Agriculture, animal husbandry and society. Only plants,
animals and other men had minds capable and worthy of
control. Plants are food making machines that use their
genome minds to build and preserve themselves. Animal
genome minds are bred for more food and for brains to
train. Human minds are controlled by the same methods
plus culture, religion and status.
1600 - 1700
The first recorded man-made minds: windmill governors
that control the separation of millstones.
1788
1837
Babbage designs the first computer: the Analytical Engine.
Without computers, non-trivial minds are nearly impossible
to design.
10
11
An AI enthusiast: A self-improving man-made “seed” mind
would soon raise its intelligence above our imagination.
Isn't a human a self-improving seed mind? Why aren't
humans super intelligent? Because our brains remain
opaque to us and we have, likely for the best, only indirect
means of rewiring them.
A learning machine mind would at first know even less
about itself than we do. Its unconscious foundation of code
would be as opaque to it as our brains are to us. Would it at
least understand itself more easily? Becoming transparent
by degrees. How well could we help it? And how much
12
A taxonomy of error. Find intelligence by defining what it
is not: dead, slow, moving in hopeless circles, believing
fixed ideas, generalizing too much or too little.
13
14
The brain values the stomach. To the stomach, the brain is a
leech.
15
With mind better and more broadly defined, can we
redeem—give useful meaning to—words that became vague
superstitious nonsense?
16
A future book: The Autobiography of a Machine Mind. The
first book written by a non-human mind—truly a mind, not
a blind story-contriving computer program. Theists have
the competing claim that discarnate non-human minds
wrote the books of religions.
17
18
To what end? Let's not be so modest, not trade one problem
for the next. Ignore the worthless scribbles of weak minds:
regulations, surveys, newspapers. Study and expand the
laws of mind. Not the laws of men but the laws of God.
Definitions 149
Thing, object, entity: A physically continuous and exclusive
class of forms.
Opaque vs. transparent mind: Mind m is transparent to
mind n if n can accurately infer m's beliefs from their
physical form.
Kindred minds: Minds designed to redundantly cooperate
towards the same ends, likely sharing the same or similar
initial beliefs.
Feeling: In philosophy, sentience, phenomenal reality,
qualia.
Injected belief: A belief not gained through a mind's self-
made senses. A mind's original beliefs must be injected.
Not entirely synonymous with a priori or innate because a
belief can be injected after a mind's engine starts.
Medium: What the mind is made of: DNA, neurons, metal,
code.
Good: A state that satisfies a mind's goals or increases its
power to reach future goals.
Moral: The good of a set of redundant minds.
Slave: A mind whose ends are not its own but those of
another mind.
Philosopher-engineer: A philosopher with the methods of
an engineer. He tests philosophical ideas in analogous
combinations of mindless parts.
Thanks
PatrickRoberts.ca