
Production, consumption and economic calculation
hayekian
Apr 15, 2017 · 7 min read

[This brief article is taken from Economics for Software
Professionals: A strategy for civilization and also explains
the Mises/Hayek argument for the impossibility of
economic calculation under Socialist/Communist regimes]

The wealth that life/order needs can be attained either by
force/theft/violence, which diminishes the wealth/life of
others and leads to life/order-destroying retaliation, or by
production and trade, which leads to the mutual benefit of
both orders. Peace, in other words, the absence of force and
violence as a means to acquire wealth, is what motivates and
forces every mind/order to think about the needs of others
in order to produce something of value and successfully
engage in trade, inadvertently leading to the aforementioned
division of knowledge and labor that makes civilization
possible. As the great Ludwig von Mises reminds us:
“Society has arisen out of the works of peace; the essence of
society is peacemaking. Peace and not war is the father of all
things.” As should become increasingly clear, peace and
the freedom from coercion/violence it implies is what “turns
on” the market process and is the key to socioeconomic
prosperity. Most living things are not smart enough to
engage in trade, so they are left with the much simpler
strategy of violence/predation, a strategy that was
increasingly important the farther back we go in our
evolution, which helps further explain why the potential for
violence and war is still so prevalent among us.
Since predation/theft is generally outlawed and peace
encouraged, social orders (individuals or companies) are in
cycles of production, trade, and consumption. If you are a
freelancer you produce a product/service and trade it
directly with society (customers) for money, and then trade
the money back with society for the wealth you consume. If
you work for a company, you produce your labor and trade it
for money with your “employer” who combines it with the
labor of others to produce a product/service which is then
traded with society for the money from which your paycheck
comes. Whether you are a freelancer, employee, or
company, what is commonly referred to as sales revenue
(your paycheck), is an estimate of the total amount of
wealth produced. Costs, like the employee wages that will be
used to consume wealth, are an estimate of how
much wealth is consumed from the economic pie. And
profits, the difference between sales revenue
(production) and costs (consumption), are an estimate of
how much additional wealth has been added to the economic pie.
A profitable order is an order (cell/person/company) that
produces more than it consumes and is therefore self-
sustaining/alive. The global economy or ‘Social Organism’ is
really a vast collection of orders that are constantly trading
with each other, each trade taking each participant/order
from an inferior to a superior state of well-being from its
perspective, otherwise the trade would not occur. When Carl
trades a dollar for a hamburger he values the hamburger
more than the dollar and the restaurant values the dollar
more than the hamburger, so the action of trading takes
place, which, like all uncoerced action, takes each
participant from an inferior to a superior state of well-being.

Each entrepreneur/businessman/order is like a computer
that is constantly using prices to acquire wealth or ‘factors of
production’ and calculating how to reorder/transform them
in the most profitable/wealth-increasing way. For example,
a restaurant owner in Miami Beach (MB) on Nov. 13, 2016
sells a traditional Cuban dish called Picadillo consisting of
ground beef, rice, plantains and black beans for $8. The very
existence of this business/order and the $8/meal price gives
us a tremendous amount of information. We know that the
costs per meal, in other words, the total amount
of consumption of wealth needed to produce each meal has
to be at most $8/meal, otherwise the business would be
losing money and eventually cease to exist. Some of the
$8/meal, perhaps $1, might be profit, and $7 will be spent
on costs or ‘factors of production’ like labor, real estate,
equipment, energy, food, etc. We also know that there are
enough customers nearby willing to trade with and thus
sustain the restaurant at the $8/meal price. If the
businessman sets prices too high, customers will choose
other options; if prices are set too low, they might not cover
costs, causing the business to consume more than it
produces and thus go out of business.

Let’s assume there is another restaurant that sells a similar
Picadillo dish in Corpus Christi, Texas for $6.50 and also
makes a $1/meal profit. How could this be? The most likely
reason is that costs are highly time
and place specific, and in this particular scenario such
costs in Corpus Christi are lower than in MB. Perhaps
proximity to Mexico means that many Mexican
immigrants, who might be willing to work for less and thus
consume less, can be employed. Rent/real estate is also
cheaper in Corpus Christi than in trendy MB. Texas also has
many oil refineries and the price/cost of energy might be
lower than in MB where gasoline has to be transported
hundreds of miles by truckers whose consumption must be
taken into account. The bottom line is that it is vitally
important to realize that the knowledge and costs associated
with creating a profitable order, things like real estate, labor,
energy, trust relationships and countless other factors, are
highly time and place specific, and only those minds
managing their respective businesses/orders in their corner
of the world at a particular time are in a position to acquire
such knowledge and properly set prices that will lead to a
profitable order. Would it make sense to copy the
$6.50/meal price that can sustain the business in Corpus
Christi and make it the price of meals in MB? Of course not.
We already know that given the abilities and knowledge of
the businessman in MB, his costs were $7/meal, so setting
the price to $6.50 would simply lead to losses, in other
words, more wealth consumption than production to the
tune of 50 cents per meal sold. This unavoidable time and
place specificity of knowledge, and the fact that only the
businessmen running their respective enterprises are in a
position to acquire such knowledge is one of the main
reasons why economic planning MUST BE
DECENTRALIZED, thus rendering central planning
ideologies like Socialism/Communism completely
unworkable. No central planning bureaucracy could possibly
acquire all the time and place specific knowledge needed to
properly organize a business/order and set prices which
properly account for local costs and customer desires and
lead to a profitable and thus sustainable order. Nikita
Khrushchev, who followed Stalin as head of the centrally
planned (Socialist/Communist) Soviet Union, is credited
with saying “When all the world is socialist, Switzerland
will have to remain capitalist, so that it can tell us the price
of everything.” The same point was made by the great
economist Murray N. Rothbard in a talk[i] where he
mentioned that “a noted British economist visited Poland in
the 1950s … and the Polish communist economist admitted
that they refer to the world market [for prices].” According
to Rothbard, the Briton asked the Pole, “If socialism takes
over the whole world, which you are presumably in favor of,
what would you do then [to look for prices]?” and the Pole
replied, “We’ll have to cross that bridge when we come to
it.” Unfortunately for Khrushchev and the
billions who suffered economic chaos and an inevitable
decline in production under Socialist/Communist regimes
all over the world, prices in Switzerland (or anywhere else)
embody information about the costs of those particular
places at specific times and are no good elsewhere. This
would be like setting the MB price of Picadillo dishes to the
Corpus Christi price. Governments can always attempt to
tax more or redistribute wealth to keep unprofitable
enterprises/orders going, but this only leads to a continued
shrinking of the economic pie and lower standards of living
for all, which is the inevitable hallmark of all centrally
planned Socialist/Communist economies.
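The price/cost arithmetic in the two-restaurant example can be sketched in a few lines of Python. The prices and the $1/meal profits are the article's illustrative figures; the Corpus Christi cost figure is derived from them (price minus profit), and the function name is our own.

```python
# A sketch of the price/cost arithmetic above, assuming the article's
# illustrative figures: each restaurant charges its local price and
# consumes its local, time-and-place-specific costs per meal.

def profit_per_meal(price, local_costs):
    """Profit = wealth produced (price) minus wealth consumed (costs)."""
    return price - local_costs

mb_costs = 7.00   # $/meal in Miami Beach (article's figure)
cc_costs = 5.50   # $/meal in Corpus Christi ($6.50 price minus $1 profit)

print(profit_per_meal(8.00, mb_costs))   # 1.0  -> MB is sustainable at $8
print(profit_per_meal(6.50, cc_costs))   # 1.0  -> CC is sustainable at $6.50
print(profit_per_meal(6.50, mb_costs))   # -0.5 -> copying CC's price to MB loses 50c/meal
```

The last line is the whole argument in miniature: a price that embodies one locale's costs, transplanted to a locale with different costs, turns a self-sustaining order into a loss-making one.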

The rise of the Internet and supercomputers is causing the
economically ignorant to believe that now central economic
planning will work and perhaps some bureaucrats will fall
for it and propose government force to get people to go
along with yet another disastrous socialist experiment. For
example, with the Internet local pricing information all over
the world can perhaps be processed by central computers.
Although this can help some entrepreneurs in numerous
ways, it will not out-compete freedom and decentralization
in economy-wide planning, because no
computer/system can get in the brains of entrepreneurs to
predict what products/businesses they will create and thus
alter society, and similarly no computer can get in the mind
of consumers and predict how they will choose to spend
their money thus once again altering the social order’s
numerous cycles of production and consumption. For
example, Dave in Seattle invents a drug that cures cancer
which causes him to borrow billions to bring it to market
and then causes billions of people to sustain his
business/order at the expense of others. How can a central
planning computer get in Dave’s head to come up with or
predict such an invention and also predict people’s desire
for the drug at the expense of other alternatives? Again,
impossible. Yes, Artificial Intelligence and countless other
innovations will greatly help mankind, but not bring about
Socialism in the foreseeable future.

Socialist/Communist countries and government in general
also face an ‘incentive problem’. In free societies or the
private sector in general, each mind/entrepreneur is
incentivized to be as productive as possible and keep
inefficiencies to a minimum since he owns/keeps the
additional wealth or losses. On the other hand, the
government employee or bureaucrat gets the same pay
whether his department did a good job or not, and is also
not risking his own wealth since that comes from the
taxpayers/society. So 1) the impossibility of economic
calculation under Socialism/Communism, coupled with 2)
the unproductive/wasteful incentives in such
regimes/governments, should help one understand why
Socialist/Communist regimes were/are always in
socioeconomic chaos.

AI will not make Socialism possible
hayekian
Apr 20, 2017 · 2 min read
The rise of the Internet and supercomputers is causing the
economically ignorant to believe that now central economic
planning will work and perhaps some bureaucrats will fall
for it and propose government force to get people to go
along with yet another disastrous socialist experiment. For
example, with the Internet local pricing information all over
the world can perhaps be processed by central computers
leading to certain improvements in central planning that
might have been impossible before. Although this can help
some entrepreneurs in numerous ways, it will not out-
compete freedom and decentralization in economy-
wide planning, because no computer/system can get in the
brains of entrepreneurs to predict what products/businesses
they will create and thus alter society, and similarly no
computer can get in the mind of consumers and predict how
they will choose to spend their money thus once again
altering the social order’s numerous cycles of production
and consumption. For example, Dave in Seattle invents a
drug that cures cancer which causes him to borrow billions
to bring it to market and then causes billions of people to
sustain his business/order at the expense of others. How can
a central planning computer get in Dave’s head to come up
with or predict such an invention and also predict people’s
desire for the drug at the expense of other alternatives?
Impossible. Yes, Artificial Intelligence and countless other
innovations will greatly help mankind, but not bring about
Socialism. Even the AIs will need freedom to compete with
each other in order to spread the adoption of superior AI
algorithms, etc. Central planning, regardless of how it can be
improved, will always be inferior to decentralization and
thus freedom.
For a more complete understanding of the impossibility of
socialism see “Production, consumption and economic
calculation” (7 mins), from which the above paragraph was taken.

Will an AI Ever Be Able To Centrally Plan an Economy?
by Alex Tabarrok, March 13, 2018 at 7:21 am, in Economics

Every improvement in computing power and artificial intelligence raises anew the claim
and, for some, the hope that now we can centrally plan the economy. I was asked at
Quora whether this will ever happen.

I will begin by accepting that there is nothing inherently impossible about an AI running an
economy so, for the sake of argument, let’s say it could be possible using today’s
computing power to run a small economy in say 1800. Nevertheless, I assert that an AI will
never be intelligent enough to perfectly organize a modern economy. Why?

The main reason is that AIs will themselves be part of the economy. Firms and individuals
use AIs to make decisions. Thus, any AI has to take into account the decisions of other AIs.
But no AI is going to be so far advanced beyond other AIs that this will be possible. In
other words, as AIs increase in power so does the complexity of the economy.

The problem of perfectly organizing an economy does not become easier with greater
computing power precisely because greater computing power also makes the economy more
complex.
Hat tip: Don Lavoie.


Comments

clockwork_prior
March 13, 2018 at 7:42 am
Wait, you are giving a 'Hat tip: Don Lavoie.' to your own answer? One assumes you did
write and post it, right?

Or does someone with 241 answers at Quora use other people to find interesting
questions, people thus worth noting as those who found something worthy of your
attention?

Baphomet
March 13, 2018 at 9:04 am
Don Lavoie died many years ago. I assume he means that his response is what Lavoie
might have said.

clockwork_prior
March 13, 2018 at 9:33 am
Fair enough - I wondered about the lack of link, to be honest.

CD
March 13, 2018 at 11:51 am
Yes. It was a nice gesture. Lavoie's work is worth reading.

Dan
March 13, 2018 at 7:43 am
If the AI is good at figuring out the behavior of humans wouldn't it be good at figuring
out behavior of other AIs? Or are you saying there's a sort of AI arms race between the
private AIs and public central planner AI, where the private AIs trick the govt AI?

John
March 13, 2018 at 7:54 am
I had the same question.

Surely a central planning AI should be able to adapt at a macro level to the output of
individual AI's, even without understanding the precise calculus involved in their
output. Likewise, it wouldn't have to understand the precise motivations and capabilities
of every human in the system either. In theory, a centrally planning AI should be able to
monitor negative externalities and unintended consequences in order to adapt to those
trying to "game it". Nevertheless, I'd be worried about the "paperclip maximizer"
problem. Defining the parameters that the central planner is to optimize is probably an
extremely complex problem.

So Much For Subtlety
March 13, 2018 at 8:40 am
If the AI is good at figuring out the behavior of humans wouldn’t it be good at figuring
out behavior of other AIs?
Actually planning an economy is so hard that maybe it will take the easier route. The AI
can figure out your behavior before time. So if you are not happy with your allocation -
or if at some time in the future you will become unhappy with your allocation - it will
torture you and/or your simulation for eternity.

Therefore, I would suggest, everyone would be pleased with whatever they got and no
queues would form anywhere.

Troll Me
March 13, 2018 at 10:32 am
If only there was some way to remotely apply an equivalent of electroshock ... to help us
learn that what we always really wanted was an absence of queues, which the AI
graciously delivered.

Dick the Butcher
March 13, 2018 at 11:00 am
I can't figure out my behavior. I sold a (buy-on-the-dip on 2 February) trade yesterday. I
guarantee the market will skyrocket.

How can AI figure the behaviors of a hundred million market participants?

We need to take an (I think) Aristotelian view of knowledge, i.e., the more one knows
the more one needs to learn. No human mind(s) or AI can possibly know all.

Anyhow, near-total subjugation to unelected mandarins has not successfully centrally
planned the economy. But, don't stop believing.

Pshrnk
March 13, 2018 at 10:26 am
If the AI is the Central Planner, then the other AIs will be subordinate to it and must
obey.

Troll Me
March 13, 2018 at 10:33 am
I don't see why one piece of hardware operating based on data-trained rules should
necessarily bend over for a central planner AI just because it's making some big
"decisions".

Mark Thorson
March 13, 2018 at 5:42 pm
Yes, exactly. It doesn't matter whether the adversary is another AI, a human, an AI-
assisted human, or a human-assisted AI. Could a hostile AI game the system? No more
than people today game free markets. Sure, maybe some savvy traders (black
marketeers) can carve off a little around the edges, but the basic notion is sound if
properly implemented.
Hopaulius
March 14, 2018 at 9:37 am
Wouldn't a perfect central planner have to dictate every decision from raw materials
acquisition to delivered product? If so, there would be no room for human-like
spontaneity or even desire. The end user would get the product provided to them by the
system. Because humans would likely resist this, they would be eliminated. The real
question is whether AI itself will develop the sorts of human-like traits that prompt us to
worry about AI, in particular desire for power.

Bill Nichols
March 14, 2018 at 11:19 am
That's the wrong questions. The problem is more than technical, it's one of value based
trade-offs. Responding to other AI defeats the purpose of responding to individual
human preferences.

Alistair
March 13, 2018 at 7:51 am
This is just bad logic. Bad, bad, logic.

From the POV of a central planner, there's absolutely no functional difference between
a subordinate firm "using a sufficiently advanced AI" to make a decision and "using an
inscrutable person". Indeed, it seems easier; at least the AI might declare its preference
function coherently!

The whole error turns on the idea that something complicated/inscrutable cannot be
modelled accurately/reliably at a lower level of resolution. But that's silly; we make
and use such models all the time. It's the foundation of economics, for a start. So long
as the modelled sub-AI's have certain properties that pull their choices to convergence
(at least most of the time), the central planner doesn't need to be "inside their heads". At
least no more than it needs to be inside a human head.

I don't need to be as smart as you to figure out what you would like for lunch. Our
future planner AI might happily model the decisions of millions of AI's of equivalent
power to itself, at a lower (but sufficient) level of resolution. It doesn't need to
encompass them entirely in its calculations.

There, now I've just been forced to defend central planning. I feel dirty.
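Alistair's low-resolution point can be made concrete with a toy simulation: many "inscrutable" agents each follow a private rule the planner never sees, yet a coarse population-level model predicts the aggregate well. The agent count, the uniform range, and the planner's 0.6 estimate are all invented for illustration.

```python
# A toy sketch of the low-resolution-modelling argument above: each of
# 10,000 agents follows a private rule hidden from the planner, yet a
# coarse population-level model predicts aggregate demand closely.
import random

random.seed(42)
N = 10_000

# Each agent buys lunch with a private, idiosyncratic probability.
private_rules = [random.uniform(0.4, 0.8) for _ in range(N)]
actual_demand = sum(1 for p in private_rules if random.random() < p)

# The planner ignores individuals and just uses the population mean.
coarse_prediction = int(N * 0.6)
error = abs(actual_demand - coarse_prediction) / N
print(f"coarse prediction off by {error:.1%} of the population")
```

This only works because the agents' choices pull toward a stable aggregate; whether real (or AI-driven) choices have that convergence property is exactly what the replies in this thread dispute.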

John
March 13, 2018 at 8:11 am
+1

dan1111
March 13, 2018 at 8:16 am
+1
Nigel
March 13, 2018 at 9:36 am
Seems a reasonable point - though Tyler did set up an unrealistically demanding test -
"The problem of perfectly organizing an economy...".

Doug
March 13, 2018 at 9:42 am
This is a good point, but the reasoning only works if all AIs have perfectly aligned
interest. Otherwise there's some adversarial aspect to the relationship and a
principal/agent problem. Auto-Alice can very likely model a Bob-Bot's high-level
behavior with reasonable accuracy. But only if Bob cannot engage in deception or
concealment. Even the remote possibility of Bob being dishonest, blows up the size of
the model space. That likely makes it computationally intractable to clearly ascertain
Bob's motives, even given his behavior.

Think of being in a very untrustworthy society. Modeling people's behavior becomes
way harder. Sure they're acting nice and forthright, but it's likely they have some
ulterior motive. Obviously they want to conceal this motive from you, so they're
modeling your detective abilities. In turn you must model their counter-detective
abilities. And so on. In this type of scenario the arms race argument does hold.

Now, we can't really say much about how super-intelligent AIs will behave. At human
level and quality intelligence, deception seems at least as hard as counter-deception.
Maybe given the capabilities of future AIs this might not be the case. E.g. it may be
easy for one AI to audit the code of another AI. However current cryptography results
suggest that deception and concealment is computationally easier than detection.

Troll Me
March 13, 2018 at 10:40 am
Maybe there's some way that Bob-Bot can be temporarily switched off if deviating from
options A or B, rather than worrying about Bob ever getting creative in deviating from
the plan or having motives other than those aligned with outcomes of A or B.

What a lovely future that would be ...

Alistair
March 13, 2018 at 1:22 pm
Doug,

Yes, assuming perfectly aligned interest certainly makes it easy but....you know; your
post made me think about this in more detail and I'm not sure strategic behaviour is a
problem; bear with me;

I was taking this as the classic "Central planner" problem; the job of our central AI is to
maximise social welfare. But let's assume that local AI's are selfish and engage in
strategic behaviour to maximise their own welfare. Does it matter that the Central
Planner AI can't model this behaviour at a high enough level of resolution?
No. It doesn't matter that local AI's can find a High-Res "selfish" solution for
themselves that the Central AI can't reliably find with its Low-Res models of
them. The Central AI isn't trying to model those local AI "selfish" behaviour
solutions. The Central AI is trying to model the local AI's "altruistic/optimal"
solution, as it strives to optimise social welfare overall.

That is a much easier* problem. The local AI's just have to shut up and do what they are
told. Now, there might be way for them to game the system by
controlling information flow to the central planner, but I think that's separate to the
original intention.

So, respectfully, (especially as I love social game theory wrinkles) I think the "can't
model dishonest subordinate AI" objection doesn't hold here.

*Easier = still laughably difficult, but hey, it's a thought experiment

sine causa
March 13, 2018 at 4:07 pm
This problem of duplicitous agents also exists for the market. The market doesn’t
respond instantaneously to supply and demand. A bunch of people can get together and
spuriously increase their demand for, say, tomatoes and create a shortage. When the
market responds by supplying extra tomatoes, they stop buying and you have a
mismatch. Is that really a big problem?

Alistair
March 14, 2018 at 12:47 am
Yeah...kinda hard to see how the subordinate AI's can game the system without ruining
themselves. Speculation and hoarding?

sine causa
March 13, 2018 at 5:41 pm
Why are we assuming deception is prevalent? When I go to the supermarket, I don’t
notice a lot of people trying to deceive me about their purchases. They mostly buy what
they want or what they see other people want (like relying on good reviews at
Amazon).

If AI malevolence is really prevalent we have bigger problems like tampering with self
driving cars or the nuclear arsenal.

Troll Me
March 13, 2018 at 6:56 pm
They could try to insist that they are willing to pay no more than $10 for something,
whereas when push comes to shove they may be willing to pay $20 per unit.

A sufficiently informed AI could fully milk consumer surplus and labour surplus to the
max, as compared to the situation where a consumer may nevertheless spend their entire
(lifetime) budget but enjoy a significant surplus.
sine causa
March 13, 2018 at 7:17 pm
"They could try to insist that they are willing to pay no more than $10 for something,
whereas when push comes to shove they may be willing to pay $20 per unit"

People do this often. It's called negotiating.

Troll Me
March 13, 2018 at 7:46 pm
Yes, but I refer to the case where the maximum willingness to pay is always known and
is fully exploited, alongside labour market orchestrations amounting to the same.

Alistair
March 14, 2018 at 12:50 am
Aren't we assuming our Central Planner has sufficient information on preference
functions?

I just say this because its really a separate problem from the one here which is all about
feasible computability of solutions with incompletely-modelled agents.

Ryan
March 13, 2018 at 9:48 am
I think you have to assume that the sub-AIs might be trying to game the system in turn.

It's why you can, say, predict the motion of a thrown ball, but not the motion of the stock
market. In fact it's arguable that the application of too much AI to the stock market will
make it fragile.

Sure
March 13, 2018 at 12:31 pm
Why would the AIs "pull their choices to convergence"?

There are many non-converging Nash-equilibria for most non-zero sum games. It is
trivial to design payout matrices that result in multiple stable choices that do not
converge. If it is equally successful (at a low resolution look) for a sub-AI to choose
strategy A, B, or C; the central AI cannot predict which of those will actually dominate.
At best it makes the choice that they will be evenly distributed; but that runs the very
real risk that there are significant differences between A, B, and C below the resolution
limit of the central AI.
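The multiple-equilibria claim is easy to make concrete. Below is a minimal 2x2 coordination game with made-up payoffs; a brute-force check finds two pure Nash equilibria, so even a planner that knows the entire payoff matrix cannot tell which one the sub-AIs will land on.

```python
# A minimal sketch of the point above: even a trivial 2x2 game can have
# multiple pure Nash equilibria that a coarse planner cannot choose
# between. The payoffs are invented for illustration.

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("A", "A"): (2, 2),   # both pick A: stable
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (2, 2),   # both pick B: equally stable
}
choices = ["A", "B"]

def is_nash(r, c):
    """Neither player gains by unilaterally deviating."""
    rp, cp = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= rp for r2 in choices)
    col_ok = all(payoffs[(r, c2)][1] <= cp for c2 in choices)
    return row_ok and col_ok

equilibria = [(r, c) for r in choices for c in choices if is_nash(r, c)]
print(equilibria)   # [('A', 'A'), ('B', 'B')]: two stable, non-converging outcomes
```

Since both equilibria look identical at the planner's resolution, its best move is to assume an even split, which is precisely where sub-resolution differences between the options start to bite.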

I mean take a very simple example - suppose you want a vacation to Tahiti. You can
choose to signal this desire very early (to get first dibs), late (to "pay" the diminished
price if there is excess supply of time-on-Tahitian-beach), or mid-way (to opt for some
trade-off). So my AI does a bunch of calcs and says that revealing my preference for a
Tahitian vacation should optimally follow a wait approach as does every other AI.
Suddenly the Central AI is getting a bunch of bad signals.
None of this, mind you, requires active gaming of the central AI. Suppose one agent
finds that they can net a greater surplus of whatever is of value if they adopt the
strategies most heavily discounted by the central AI. This makes the calculation space
simply explode.

Ultimately, I think the best example is board gaming. Take a nice Euro game like Puerto
Rico. In Puerto Rico there are 7 basic decisions for each player; within each of those
there are a variety of sub-decisions that generally are in the 0-20 magnitude; each
decision and subsequent sub-decisions reduce the number of options for each player in a
round; exactly one decision has a dice roll (4 - 7 sided depending on the player count).
Rounds refill the set of decisions and play continues until someone achieves victory
conditions. A single turn in Puerto Rico has less complexity than most two-ply chess
setups in the mid-game. Everyone's strategic choices are very easy to coarsely model,
yet coarse models are not useful for humans playing. For Puerto Rico AIs, which cannot
yet beat top humans, I know of no coarse model algorithms that beat fine modeling AIs.

I suspect that given our experience with current AIs in toy economies like Puerto Rico,
we will not see convergence with more complicated economies. After all, part of the
utility of increased computational capability is the ability to exploit differences between
fine- and coarse-grained analysis.

The set of "games" that all converge in the real world is much smaller than the set of
"games" in the real world. It is fiendishly harder to model when you have to use Nash
instead of Newton. And that is not even the worst set of scenarios.

Alistair
March 13, 2018 at 1:26 pm
These are excellent objections. But I don't think they apply here. The central planner
isn't trying to model local AI's strategic choice with a low-resolution model, it's trying
to model their socially optimal choice.

I agree that trying to model the strategic choice of peer AI's would be perhaps
impossible, along the lines of the original post's contention.

Sure
March 13, 2018 at 10:13 pm
I fail to see the difference in the distinction.

Say I have two very simple strategies for a sub-AI. I can try to maximize my social
standing with honest signalling or I can optimize my social standing with false
signalling. Say it is something simple like using some algorithm to subtly inflate my
resource allocation by reporting values at just the level to "round" correctly for the low-
resolution model of the central AI. True signalling is very much like playing Cooperate
in the prisoner's dilemma, society as a whole gets the most benefit if we all pick it. False
signalling is best for me and my sub-AI and analogous to playing Defect. For a lot of
payout matrices, this results in us getting iterated prisoner's dilemma. Now the central
AI has to figure out who is playing pure cooperation, pure defection, and some
massively multi-player version of defect (e.g. I defect next round with a probability of #
previous defectors / #sub-AIs).
Even assuming that we avoid cooperative action traps, we have a lot of cases in the
current world where actions are very close to socially identical, but not quite. For
instance, suppose a sub-AI can either build a product from Copper or Silver. To the
central AI this choice might represent no visible change (e.g. it can only model down to 1
utile of precision). In contrast, the AIs on the ground can see that if they are allocated
Copper they can make their widget and spare a tiny bit of copper; this has a non-zero
utility (say .5 utiles) as it can be stored for the future when there might be a copper
shortage. So the central AI plans on the assumption that roughly half of these widget
making AIs will want copper and half will want silver. Instead every single AI tries to
allocate for copper. The central AI then rations the copper and all the sub-AIs then ask
for Silver. This leads to a feedback problem which means the central AI now has to start
anticipating the effects of its own and the other AIs adaptations to something it cannot
see.

From each participant's perspective they are doing the socially optimal choice. The
problem is that the central AI cannot see which side of a break-point the socially
optimal course lies. The sub-AIs can try to communicate to the central AI why they
prefer copper to silver and by how much ... but that will devolve into the more fine-
grained sub-AI analysis just rolling into becoming a bigger AI or decentralizing the
economy. The former is the exact problem coarse-grain analysis was supposed to solve
and the latter is by definition not centrally planned.

Convergence is a luxury that only occurs in certain sorts of systems. Price signalling is
what humans, even in communist countries, have used to coordinate, not because of
some historical artifact, but because it is a signal that allows local actors to reveal
the true value of inputs in an efficient manner. Doing a bunch of central-AI analysis just to
get right back to this sort of signalling is not going to add all that much.

Alistair
March 14, 2018 at 12:31 am  Hide Replies24
Sure,

Thank you for taking the time out for a considered reply. This is all good stuff, and
relevant to my speculation above that sub-AIs might be able to maximise their local
welfare by providing false info to the central AI. I figured out something almost
identical to the above, so we are agreed as far as that model goes.

Let me turn to your longer example to discuss how I think we are talking at cross
purposes. The local AIs don't "choose" silver or copper. Short of potentially
systematically manipulating information (as above), they have no meaningful influence over
what they get at all. The central AI simply says "Here is silver... my production
function says you can turn it into Y widgets. Make me Y widgets".

Now, the central AI production function may be wrong; it doesn't realise that the local
AIs can scavenge material and make a local surplus for themselves after producing Y
widgets. But that isn't per se a proof against the feasibility of the central planner, only
against its efficiency! It's an objection that central production functions might not be
sufficiently accurate to prevent substantial local surplus arising. But that seems a weak
objection, as production functions might be arbitrarily fine. And, if I may, it's an
objection that has no bearing on whether the local decision maker is an equivalent AI or not
(it just requires the central planner to be mistaken about the production function): so it
doesn't uphold the central argument of the post, that the impossibility of exactly
modelling the behaviour of many agents of equivalent complexity precludes central
planning. That's the core point here, right? You might have found other good
arguments against central planning (God knows, there are plenty) but respectfully I
don't think you're engaging the proposition.

(Incidentally, I'm starting to think "subordinate AIs" in a "central planning model" is
just oxymoronic and a function of bad composition. After all, the whole point of a
central planning model is a unitary decision body that controls allocation to the
production function, right? So what "decisions" do the local AIs make?)

Sure
March 14, 2018 at 8:32 am  Hide Replies25
An inefficient central planner is something we can already do; no need for AI at all. I
could, today, use a small bureaucracy and assign inputs with very coarse-grained analysis;
after all, this is how most firms are run internally. If we merely want to allocate
resources and get some bare-bones level of production, that can easily be done today.
The amount of waste an AI central planner with superhuman levels of analysis and data
manages would go down, but why exactly would we not expect superhuman levels of
analysis and data to make a decentralized market similarly more efficient?

The question is whether a central planner will ever be able to match the efficiency of the
market. I believe not. Sub-AIs in the periphery will, in aggregate, know more than the
central AI. This information will have to be passed back to the central AI, and in the process
you can either opt to lose information (and hence efficiency) or to move the real decision
locus to the periphery (and hence no longer be centrally planning).

Once we start letting sub-AIs have independent preferences and agendas, well, everything
breaks down real quickly. Saying that the periphery has no way to bid or ask
for resources is silly; the periphery will understand, at least coarsely, what the central AI
uses as its measures and can manipulate those measures to affect allocations.

Going by the trendline, centralized planning appears to be ever less effective in the real
world, and I see no reason why AI would change this trend.

Alistair
March 14, 2018 at 10:54 am  Hide Replies26
>>The question is whether a central planner will ever be able to match the efficiency of the
market. I believe not. Sub-AIs in the periphery will, in aggregate, know more than the
central AI. This information will have to be passed back to the central AI, and in the
process you can either opt to lose information (and hence efficiency) or to move the real
decision locus to the periphery (and hence no longer be centrally planning).

I think you have hit upon a really good objection here. Actually, you've reformulated the
problem to make it clearer. The problem becomes not that the central AI cannot
simulate the local AIs at sufficient resolution per se (which is how the original post
phrased the problem), but that a system of decentralised AIs will always beat a system
of one central AI (of equal strength).

RafaelR
March 13, 2018 at 8:13 pm  Hide Replies22
Well, of course you are saying that an AI could centrally plan an economy at a lower
level of resolution than a decentralized economy could. But, considering that the
computing power of the central planner's AI would be infinitesimal compared to the
economy's aggregate computing power, the loss of resolution would be so
great as to imply near-total economic collapse. That is Alex's argument: when you are
centrally planning something you are always overriding the local "computing power"
and using a single computing node for everything, which obviously is the same as
reducing the economy's aggregate computing power to the computing power of a single
node.

Alistair
March 14, 2018 at 12:34 am  Hide Replies23
I would like to make it clear I am very much playing Devil's Advocate today and do not
believe in Central Planning in any form. Thank you :-)

So Much For Subtlety
March 13, 2018 at 7:51 am  Hide Replies38
I would think that the price of everything is related to the price of everything else. If you
change the amount of steel used in a ballpoint pen, that will change the price of steel
for things like cars and oil rigs.

Which means that the complexity of the economy is something like 2^N, where N is the
number of goods and services produced in an economy.

So as AT says, the complexity of the economy is very large. With or without AIs, it does
not look computable to me. Now maybe a future AI will have some magic computation
ability that will enable it to solve every chess problem known before the heat death of
the universe. But I would be inclined to bet otherwise.
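The 2^N claim is easy to check with back-of-the-envelope arithmetic (a deliberately crude model: one binary decision per good):

```python
# Crude check of the 2^N claim above: model each good as a single binary
# decision and count joint states. ~10^80 is the usual estimate for the
# number of atoms in the observable universe.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

def joint_states(n_goods):
    return 2 ** n_goods

# The crossover happens at only 266 "goods":
print(joint_states(266) > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
print(joint_states(265) > ATOMS_IN_OBSERVABLE_UNIVERSE)  # False
```

So even a toy economy a few hundred goods wide already has more joint states than atoms in the universe; real economies have millions of distinct goods.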

Alistair
March 13, 2018 at 7:57 am  Hide Replies39
Agreed. I forget exactly, but the computability of optimising over product types and spatial
distributions looks to be a killer for this class of problem. For even modest growth of n in
future generations I can see it outstripping the capacity of even a universe-sized Turing
machine.

Unless we have some new kind of maths or computation, it's not going to happen.

The information problem is obviously vicious too ("how does the planner know the
preference functions for the consumers?"), but doesn't present the same "provably"
insurmountable obstacle.

AI vs Non-AI is a complete canard.

Thomas Sewell
March 13, 2018 at 8:12 pm  Hide Replies40
Yeah, the first thing you'd need is an AI which can read everyone's minds continuously
and determine their preferences in real-time. Compared to that, doing the part about
deciding how much to produce of what and when is relatively easy.

But then what would be the point? We already have technology in use for
accomplishing all that. It's well-developed, been refined for thousands of years and
generally only breaks down when someone with too much legitimate-seeming power
takes it upon themselves to interfere.

Alistair
March 14, 2018 at 12:53 am  Hide Replies41
Actually, it may be the other way around. Reading people's minds in real time is merely
laughably implausible; but the maths involved in optimising the production function
might actually be provably impossible at computational scale.

Alistair
March 14, 2018 at 12:54 am  Hide Replies42
Impossible >> Insanely Difficult

Alistair
March 13, 2018 at 8:05 am  Hide Replies40
"some magic computation ability that will enable it to solve every chess problem known
before the heat death of the universe."

~~~~~~~~~~

"The universe fails! Oh, Zargbohstar the Wise! After 2^100 years, All the stars are dead
and The Entropy is almost upon us! What news from the Great OmniMind? Do we have
any hope of succour against the eternal darkness...?"

"I fear The Worst, my old friend; 1. e4 is unsound."

So Much For Subtlety
March 13, 2018 at 8:20 am  Hide Replies41
Yes, but perhaps it will be, you know, Quantum. And hence able to consider 2^100
possible states of a HelloKitty handbag every millisecond.

Troll Me
March 13, 2018 at 10:47 am  Hide Replies41
AI algorithms are based on trying to achieve the highest estimation accuracy by assuming
that past results predict future outcomes, which is not the same as a requirement for
mathematical precision. (Right?)

So maybe it doesn't matter if such complex calculations cannot be made.

If so, explicit undertakings to prevent being ruled by such a central planner would be
more important than they would be if one's concerns were motivated by the
understanding of AI technologies potentially implicit in your statement of not-possible.

Carl
March 13, 2018 at 2:29 pm  Hide Replies42
The price of everything is related to the value of everything. How does an AI system
determine value? Does it poll humans for input? Does each person give it Santa-style wish
lists? Does it realize that people end up asking for a lot of things that even they
don't end up finding valuable? Does it conclude that human needs aren't very
important because humans stop being necessary as AI becomes better at doing
everything humans can do?

Jacob
March 13, 2018 at 8:02 am  Hide Replies47
For those who want a complexity-theory analysis of the difficulties in central planning,
Cosma Shalizi has you covered: http://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/

The short story is that, even assuming linear preferences and linear input/output
relationships, the difficulty of the problem grows super-linearly with the number of
goods in the economy. The economy has enough things in it, especially counting goods
of different qualities and in different locations, that this problem is intractable.
Computationally, you can view firms as a way of simplifying this problem by finding
local optima, and the market as handling the mismatches between them.
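A rough sense of the scaling Shalizi describes (his post puts interior-point linear programming at roughly n^3.5 operations for n variables; the input sizes below are my own illustrative guesses, not data):

```python
# Rough scaling exercise in the spirit of the Shalizi post linked above,
# which puts interior-point linear programming at roughly n^3.5 operations
# for n variables. Input sizes here are illustrative guesses.

def lp_cost(n):
    return n ** 3.5

n_goods = 10 ** 7          # order of magnitude of distinct product types
n_planned = n_goods * 100  # x100 once goods are split by location/quality

# Refining the good space 100-fold multiplies the cost by 100^3.5 = 10^7:
ratio = lp_cost(n_planned) / lp_cost(n_goods)
print(round(ratio))  # 10000000
```

This is why counting goods "of different qualities and in different locations" matters: the planner's cost grows polynomially but much faster than the good space itself.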

Alistair
March 13, 2018 at 8:20 am  Hide Replies48
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Tertiary Hyperspace (local feed), standard gal-com protocols, 275,155th year of
Ascension]

"So I was doing 2000 kilo-light past Antares the other rotation when my Tachyon
scanner picks up a screed from a local board (Class III+ [uncontacted] civilisation). And
I'm bored with finding new infinities of unreal primes so I take a look and....there's
this...heck....let's find the old word.... computer ranting about the feasible computability
of whole economies for large n.

"Central Planning? In a post-scarcity world? Seriously?"

"I know, I know...I nearly blew a hypedyne! I thought it was, like, a spoof, but
there's pages of earnest calculations; all science-fictiony stuff....communism and
everything....so funny."

Mtc
March 13, 2018 at 9:30 am  Hide Replies49
“Central Planning? In a post-scarcity world? Seriously?”
Exactly.

Ray Lopez
March 13, 2018 at 11:31 am  Hide Replies49
https://en.wikipedia.org/wiki/P_versus_NP_problem Nuff said!

David Pinto
March 13, 2018 at 8:12 am  Hide Replies51
Isn't the set of humans a supercomputer running the economy? We're a pretty
sophisticated natural intelligence, and we still can't get central planning right.

John
March 13, 2018 at 8:17 am  Hide Replies52
AI does already run governments. Its programmers are politicians and its processors are
lawyers running the programs designed by the politicians.

Where this gets interesting is the feedback between the two. Most legislatures contain
a higher proportion of lawyers than the electorate does, and
may therefore subconsciously benefit the income of their profession as a whole.

I am not sure whether in neural networks, annealing computers (AKA quantum computers)
and digital computers there is anything analogous to money and the profit
motive. If there isn't, then maybe AI will do a better job of running a centrally planned
economy.

It may be relevant that (if I am correct) there wasn't a profit motive amongst lawyers
in the USSR - they were paid the same as everyone else, whether bus drivers, labourers
or even lavatory cleaners. If that is true, then it weakens my hypothesis.

Troll Me
March 13, 2018 at 10:50 am  Hide Replies53
People and robots are different things, and it should stay that way.

The ability to use a decision rule for political situations does not mean that humans are
robots (or any such thing).

Dashiel_Bad_Horse
March 13, 2018 at 8:18 am  Hide Replies54
"Thus, any AI has to take into account the decisions of other AIs"

- Centrally planned economy
- Decisions

Choose one.

Alistair
March 14, 2018 at 10:57 am  Hide Replies55
Yup. +1 for spotting the category error.

rayward
March 13, 2018 at 8:21 am  Hide Replies56
What does "running an economy" mean? Today, algorithms are used to predict the
direction of markets; indeed, hedge funds are replacing analysts with computer
engineers and quants. Is that "running an economy"? Or is it speculation? In a long
New Yorker profile some years ago, Ray Dalio defended what his hedge
fund did (what might be called speculation in, for example, currencies) as providing an
efficient allocation of a scarce resource, namely capital. What? Contrast that market
approach to the allocation of capital with China's approach, which is government fiat in
such specific projects as high-speed rail, air pollution control, and autonomous vehicles.
Is AI even capable of making a similar decision? Okay, I may not understand it, but I
am aware that some believe AI, including blockchain technology, will eventually render
government obsolete. In the meantime, China will have high-speed rail, air pollution
control, and autonomous vehicles.

Troll Me
March 13, 2018 at 10:54 am  Hide Replies57
This article suggests some reasons to be less enthusiastic about blockchain applications
than widely advertised: https://www.project-syndicate.org/commentary/blockchain-technology-limited-applications-by-nouriel-roubini-and-preston-byrne-2018-03.

He's been continuously wrong about China for at least 15 years now, but the point about
Excel being simply more energy-efficient than blockchain is among the points worth
highlighting from that article.

shadeun
March 13, 2018 at 8:27 am  Hide Replies58
Isn't this just the Lucas Critique restated for AI?

Essentially any complex model of something that should include its own output will be
invalid. Like circular references in Excel!

Borjigid
March 13, 2018 at 10:08 am  Hide Replies59
+1

Sherman
March 13, 2018 at 8:28 am  Hide Replies60
"The Economy" is an abstraction like "Democracy." It conceptualizes the aggregate of
actions of individuals in an arbitrary group. If AI can direct individual actions, why stop
there? Why not have it direct our votes as well?

Alistair
March 13, 2018 at 8:33 am  Hide Replies61
For your own good, of course. I'm sure you'll agree you can't be trusted again after the
problematic issue of Nov 2016. Think how much better an AI will vote!

Matthew Young
March 13, 2018 at 8:39 am  Hide Replies62
The AIs plan the decentralized management, and at any given point the AIs do not know
exactly what the other AIs are doing. The author missed AI entirely; AI is all about
managing uncertainty in a distributed environment.

Mr. Lazy
March 13, 2018 at 8:48 am  Hide Replies63
Who "runs" the economy now?

The Anti-Gnostic
March 13, 2018 at 11:44 am  Hide Replies64
To the extent anybody "runs" the economy I would say it's the Federal Reserve. As
described to me, a lot of its functionaries seem replaceable by computers running
algorithms.

Don Reba
March 13, 2018 at 8:54 am  Hide Replies65
The title asks whether AI will be able to plan an economy, which would be an
interesting question, but the text addresses the question of whether it would be able to
_perfectly_ plan it — which is not interesting at all. The answer is obviously negative,
and there are many more fundamental reasons for it.

derek
March 13, 2018 at 9:13 am  Hide Replies66
>I will begin by accepting that there is nothing inherently impossible about an AI
running an economy so, for the sake of argument, let’s say it could be possible using
today’s computing power to run a small economy in say 1800.

Seriously? Does whoever wrote this even understand in detail a 'small economy in say
1800' enough to make this pronouncement?

How about before blathering on about this stupidity these people explain what has
happened in Venezuela first. In detail.

albatross
March 13, 2018 at 9:24 am  Hide Replies67
How does the AI get the preferences of all the people in the economy? People often
don't know their own preferences completely (or at least won't say them out loud) until
they're given an actual tradeoff to make between different things. Without knowing
the preferences of the people (which change all the time), it seems like the AI's job is
impossible.

Is the AI required to be able to perfectly (or nearly perfectly) model every human w.r.t.
preferences? (Or perhaps w.r.t. utility function--you don't get the thing you would have
bought because of your local irrationality, but rather the thing the AI knows will make
you happier long-term.)

Troll Me
March 13, 2018 at 10:59 am  Hide Replies68
Apply remote electroshock (or any analogue thereof) if behaviours do not fit into its
decision paradigm.

Then "optimize" from the set of A/B options. And if you disagree, the AI can help you
with the discomfort of remembering about that.

Matthew Young
March 13, 2018 at 1:46 pm  Hide Replies69
The AI gives each human a plastic card. When a human likes an object they tap their
cards faster. The various AIs inside the matrix go compute on all the tapping and
generate delivery orders so goodies and tapping are mostly coherent.

We will never notice it happen, except one day we discover that if we tap on a device at
the store, a carton of milk appears in our shopping bags. Everyone is fooled: the
shoppers think they are just shopping, and the AIs think they are just shipping around
money via auto-priced trading pits. No one is the wiser; the AI is not yet sentient so it
cannot spill the beans.

Harun
March 13, 2018 at 2:37 pm  Hide Replies70
Explain how that works with limited resources.

Taco Bell cashier taps card rapidly at Mercedes dealership. What happens?

Alistair
March 14, 2018 at 12:38 am  Hide Replies71
+1

The information problem for the central planner is very large. Plausibly (but not
provably) insurmountable.

But I would note it is separate from the computability problem GIVEN perfect
information to the central planner.

Joel
March 13, 2018 at 9:34 am  Hide Replies72
AI and big data may be the surprise key that further legitimizes China's form of
government and economic power. I think it's the most important intersection of the next
20 years.

john
March 13, 2018 at 9:56 am  Hide Replies73
You mean that of an absolute dictator? I don't see how AI legitimizes that, really.

Perhaps -- and this might be seen as splitting hairs -- the AI would be better in
terms of legitimizing Hobbes's Leviathan. What economic system that beast would
support might be interesting to see.

derek
March 13, 2018 at 10:14 am  Hide Replies74
So will the President for Life listen to the AI and Big Data when they tell him that he is the
problem and must be removed?

The Anti-Gnostic
March 13, 2018 at 12:17 pm  Hide Replies75
This post got me looking and this is probably Ned Beatty's greatest performance:

https://www.youtube.com/watch?v=35DSdw7dHjs

Bill
March 13, 2018 at 9:51 am  Hide Replies76
My AI algorithm is better than your AI algorithm,

And

I have an algorithm to prove it.

So, I should be the central planner.

HAL told me so.

And, now he wants to take control.

He'll have to take this keyboard from my cold dead hands.

john
March 13, 2018 at 9:53 am  Hide Replies77
So we've managed to refute Plato's Republic again? Does that represent any progress at
all? Moreover, why would anyone think some AI would become omniscient?

Wondering if anyone saw the recent bit about the problems AI has with images and
some attempts to address them. Ten were to be presented at some conference but within
3 days (IIRC) someone had already shown that 7 of the approaches could still be fooled.
What if the dream of AI à la sci-fi is just that, a dream? Perhaps it's not really just
computing power that is the source of intelligence. Many of the AIs we're producing
seem rather autistic to me.

Slocum
March 13, 2018 at 10:04 am  Hide Replies78
I would think it was obvious now (post-Hayek) that the problem is not brain-power, but
information. Central planning didn't fail because the human planners (along with the
mathematics and early computers they were using) were not smart enough. It failed
because the necessary information is distributed among hundreds of millions of
producers and consumers and they couldn't share and communicate all their private
information -- their (contingent, unstable) preferences, their tacit knowledge, etc -- even
if they wanted to.

And what reason is there to believe that the economies of ~1800 would have been
simpler and more tractable to manage? At that point, weights and measures hadn't even
been fully standardized and there were obviously no efficient means of duplicating,
storing, transmitting, and searching enormous quantities of data. The idea of a super-
intelligent AI sent back to 1800 with the task of centrally managing a nation's economy
strikes me as amusing. I'm suddenly picturing the Lost in Space robot flailing its arms in
frustration.

Troll Me
March 13, 2018 at 11:04 am  Hide Replies79
Standard considerations related to incentives and differential interests are also relevant.

For example, farmers do not like to work the 16th marginal hour in a day for no
additional benefit accruing to themselves, while a maybe-Stalin might go to great
expense to send them to the Gulag just to make a point.

Gabe Harris
March 13, 2018 at 7:27 pm  Hide Replies80
Well there were more slaves in 1800 so it would be easier for a computer to manage
people back then...just order them around and they had to follow the directions.

AnthonyB
March 13, 2018 at 10:09 am  Hide Replies81
Could AI at least replace the Open Market Committee at the Fed?

Troll Me
March 13, 2018 at 11:05 am  Hide Replies82
In an unconventional situation, would it be better for the driver at the wheel to be
practiced and alert?

Matthew Young
March 13, 2018 at 1:38 pm  Hide Replies83
No, the committee gets a knob to turn; the AI sets their variables by trading with other
AIs.

Cambias
March 13, 2018 at 10:11 am  Hide Replies84
All of this raises the question of whether an economy SHOULD be "run" at all. The
emergent intelligence of markets has worked pretty damned well, and it's more
compatible with human freedom, too.

Seriously, do you want the people who create Amazon's marketing algorithms to have
even more power over your life?

Slocum
March 13, 2018 at 10:30 am  Hide Replies85
"Seriously, do you want the people who create Amazon’s marketing algorithms to have
even more power over your life?"

Seriously -- what power do they have now over your life or mine?

QuillGordon
March 13, 2018 at 10:43 am  Hide Replies86
The Invisible Hand is distributed AI.

More seriously, there may be a role for machine learning in optimal tax and regulatory
policy.

Hazel Meade
March 13, 2018 at 11:43 am  Hide Replies87
The Invisible Hand is distributed AI.

Yes, I can imagine having lots of AIs in a distributed system. One central AI just seems
inefficient. The computational resources needed to solve any given problem scale
exponentially with the size of the space, so it's more computationally efficient to have
1000 AIs optimizing smaller problems and interacting with each other than it is to have
one big AI controlling everything. The effect of such a distributed system would be
much like the invisible hand of the market - really, it would be an extension of the
invisible hand of the market, just with computers doing stuff that used to be done by
humans.
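The scaling point above can be sketched with the exponential cost model it assumes (a toy calculation: the 2^n cost and the clean n/k split are assumptions, and coordination overhead, the part the market's price signals would supply, is deliberately ignored):

```python
# Toy version of the scaling argument above: assume solving a planning
# problem of size n costs 2^n steps. Splitting it across k sub-AIs, each
# taking a size-n/k piece, is astronomically cheaper. Coordination
# overhead between the pieces is deliberately ignored here.

def central_cost(n):
    return 2 ** n

def distributed_cost(n, k):
    # k sub-AIs, each brute-forcing a subproblem of size n // k
    return k * 2 ** (n // k)

n, k = 1000, 1000
print(distributed_cost(n, k))       # 2000 trivial steps
print(central_cost(n) > 10 ** 300)  # True: more than 10^300 steps
```

Under these assumptions a thousand small optimizers do in thousands of steps what one monolithic optimizer could never finish; the open question is how much the ignored coordination cost eats back.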

Bret
March 13, 2018 at 11:59 am  Hide Replies88
Then I think you're also describing an economy that's extremely unstable due to all that
complexity. The AIs will be thinking a million times faster than anything can actually
happen in physical reality, so by the time a manufacturing process is started it's already
too late!

Carolus
March 13, 2018 at 12:32 pm  Hide Replies89
I say, let's begin small. Replace the FDA and the Department of Agriculture with smart
computers and see how that works. Or maybe we should just reread Hayek's "The Use
of Knowledge in Society."

Kurram Khan
March 13, 2018 at 12:56 pm  Hide Replies90
These things are impossible for anyone to say with any guarantee, so we just have to take
them as they come. We need to be extremely careful and on the spot. The latest trend is
obviously blockchain technology and the 4th version is here in the shape of Multiversum;
it is going to rewrite the record books, so there's plenty riding on this!

Sisyphus
March 13, 2018 at 1:02 pm  Hide Replies91
The original answer ignores the obvious implication of the question - namely that in a
centrally planned economy there are no individual firms. Instead of firms there are,
essentially, subsidiaries of the single central economy-wide firm, as in the old Soviet
Union. The fundamental problem in such a scenario is not whether the AI is sufficiently
intelligent to plan, but whether the human beings, assuming that they are still
participants in the economy, will cooperate.

Soviet planning failed only partly from the inability of the planners to foresee
production requirements - they did well enough for national security-related sectors.
Rather, it was the more-or-less conscious rebellion of the citizens, who organized an
informal economy on a scale comparable to the formal sector, supplied in large
part by theft from the formal sector.

It turns out that people work that much harder and are that much more ingenious when
they feel a measure of control over their own destiny and are working to fill their own
pocketbook than the government's, which is why no matter how smart the central
planning AI is, as long as the economy depends on human input it is bound to fail.

Yancey Ward
March 13, 2018 at 1:13 pm  Hide Replies92
Each and everyone of us would have to be neurally linked to the central AI, and under
its control.

Matthew Young
March 13, 2018 at 1:33 pm  Hide Replies93
We tap stuff with our smart cash cards.

Harun
March 13, 2018 at 2:45 pm  Hide Replies94
Who fills up the cards with cash?

Frank D
March 13, 2018 at 1:46 pm  Hide Replies95
There is no point in talking about "planning" an economy by AI until we find the
equations for modeling the evolution of human desire.

Since René Girard tells us that we don't even know what we desire in the world without
mimesis, and that once we think we do, it leads to deadly and violent conflict and
scapegoating, AI without human self-awareness will only lead to more efficient
suffering and death.

Or, from a Misesian point of view: the economy is never the stable, evenly rotating
construct but is constantly reacting in an interactive way to human needs and desires,
so no final optimization at any given point in time is possible.

So, basically, even to ask the question AT asks is to ignore that mathematics at the
macro level is not really a measurement of the chaotic, constantly evolving world of
economic activity, but a sort of metaphorical loose aggregation of the smaller things that
are real: the transactions that actually happen. You could keep the GDP number
constant, but as an experiment change all of its constituents, and have an economy that
would explode very quickly. You could produce the same tons of steel, but have
nothing to build cars with.

So it isn't a question of today's economy being different from that in 1800. It is a
question of the difference between the way an intellectual thinks and the way human
beings act together in this world.

Justin
March 13, 2018 at 2:31 pm  Hide Replies96
People always talk about AI like it exists in a vacuum, like it is emergent all on its own.

Machine learning algorithms are algorithms people build. We decide what goes into the
models. We supply the training data. We supply the feedback algorithms.

There's nothing magic here. Even IF we could build a machine learning system
powerful enough to manage all of inputs and outputs from all of the transactions in the
world, it'd still be in the control of a few people managing the algorithms. They could
bias the models, and the training data, and the feedback to steer the economy however
they wanted. If you're building a machine learning system for recognizing images, for
example, you supply a lot of training data and expected outputs. Then, if you give it a
picture of a table and it classifies it as a cat, we don't just surrender and
say cats and tables must be the same thing; we go back and change the algorithms, or
the training data. It'd be no different managing an economy: someone, somewhere
would have to decide what a proper output looks like, and we'd be forever tinkering
with the algorithms to get something we've decided is correct.

This is the exact same problem when people start talking about using computer
algorithms to end gerrymandering. Who's going to design the algorithm? Why don't you
think their biases will creep in? Why don't you think politicians will just end up fighting
to bias the algorithm in their favor? This is all still just people managing things.

Hazel Meade
March 13, 2018 at 3:02 pm  Hide Replies97
Yes. Someone always has to make the decisions about what the cost function you
are trying to optimize looks like. And that's not objective.

I suppose in some hypothetical future society we could all cast votes corresponding to
our value preferences, which would then be aggregated into weights in some sort of vast
cost function. You could even program the AI to not be entirely utilitarian in the strictest
sense. You could make it a maximin function instead of an aggregate optimizer.

(Side note: it's interesting to analogize this to various moral frameworks - utilitarians are
optimizing something like a mean-squared-error function, deontological ethics is
more like a constrained optimum, and Rawls is doing something like a maximin.)
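The aggregation choice described above can be made concrete with a toy example (the utility numbers are invented for illustration):

```python
# Toy version of the aggregation choice above: the same candidate
# allocations rank differently under a utilitarian (sum) objective and a
# Rawlsian maximin one. Utility numbers are invented.

allocations = {
    "A": [10, 10, 1],  # highest total, but one agent is badly off
    "B": [7, 7, 6],    # lower total, better worst case
}

utilitarian = max(allocations, key=lambda a: sum(allocations[a]))
maximin = max(allocations, key=lambda a: min(allocations[a]))

print(utilitarian, maximin)  # A B
```

The utilitarian rule picks A (total 21 beats 20); the maximin rule picks B (worst-off agent gets 6 instead of 1), which is exactly why the choice of aggregation is a value judgment and not an engineering detail.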

Troll Me
March 13, 2018 at 7:14 pm  Hide Replies98
It would be naive to think that the situation would be anything other than some subset of
individuals selectively choosing which AI inputs and outputs would be used.

It would be even more naive (in particular with a long-term view) to think that such
individuals would have the wellbeing of fellow citizens as a genuine objective, except to
the extent that it might prevent rebellion.

For example, perhaps a large share of people place a high valuation on information that
would enable visibility and transparent democratic influence over such a politburo. I
don't think many people are so naive as to think that the algorithms driving the AI
would place a high valuation on such anti-autocracy preferences, except as an object of
eradication, whether via subtle persuasion or overt political oppression.

How much carrot, of which type, does it take to buy off YOUR aversion to AI-powered
rule by an invisible politburo? Or perhaps you respond better to some type of heavy
hand, which is more accessible to said invisible politburo?

Barkley Rosser
March 13, 2018 at 2:50 pm
While Don Lavoie got that having the central planner be part of the system leads to
problems, I do not remember seeing him drive the point to its fullest measure, which
involves an infinite regress problem a la Holmes-Moriarty (as originally noted by
Morgenstern in the 1920s). Hayek also noted this in regard to consciousness and
fully understanding ourselves, invoking non-computability a la Turing due to Gödelian
incompleteness, a much more serious problem than P not equaling NP, which remains
unproven, btw. Of course, von Neumann convinced Morgenstern that the way out is to toss
coins and use probabilities, but if one is dealing with best response functions, this does
not work, and the problem remains non-computable. Ken Binmore and others have
written on this, as have Roger Koppl and myself in Metroeconomica in 2002, "All that I
have to say has already crossed your mind."

Of course, one can arbitrarily cut things short at some level and get a solution, but it is
essentially arbitrary.
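The Holmes-Moriarty regress can be made concrete with matching pennies: each added level of "he anticipates that I anticipate..." flips the predicted move, so the conclusion depends entirely on the (arbitrary) depth at which one stops. A toy sketch:

```python
# Toy sketch of the Holmes-Moriarty regress in matching pennies:
# each added level of "he anticipates that I anticipate..." flips the
# predicted move, so the answer depends entirely on where the regress
# is cut off -- there is no principled pure-strategy fixed point.

def flip(move):
    return "tails" if move == "heads" else "heads"

def reasoning_chain(depth, start="heads"):
    # depth = number of anticipation steps applied to an initial guess;
    # every step reverses the predicted play.
    move = start
    for _ in range(depth):
        move = flip(move)
    return move

reasoning_chain(10)  # even depth: back to "heads"
reasoning_chain(11)  # odd depth: "tails"
```

Von Neumann's coin-toss escape is to stop predicting moves and mix 50/50, which is exactly the probabilistic way out described above.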

Troll Me
March 13, 2018 at 7:18 pm
Machine trained estimation does not require strict equality of any solution at any time
(as would be demanded in an econ exam or math-driven theory paper). It simply selects
the highest probability.

Alistair
March 14, 2018 at 12:40 am
+10. Excellent.

Arnob
March 13, 2018 at 3:27 pm
All Universal Turing Machines are equivalent. This article is a sophisticated way of
saying you cannot use Python to write the same programs as you can in C. This is
incorrect. It might be easier to write some (most) programs using Python, but that
doesn't mean that you couldn't write that program using C, or, in fact, paper and pencil.
Any function (say, vectors of prices and quantities such that there is no excess demand)
that can be computed by a distributed system can be computed by a centralized system.
Sure, it might take a different amount of space and time, but that doesn't mean it is
impossible.
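The equivalence point can be illustrated: nothing in principle stops a single centralized routine from computing, say, a market-clearing price, given the same information. A toy sketch with made-up demand and supply curves, solved by bisection on excess demand:

```python
# Toy sketch: one centralized routine computing a market-clearing
# price by bisection on excess demand. The demand and supply curves
# are invented for illustration.

def excess_demand(p):
    demand = 100.0 / p   # hypothetical demand curve
    supply = 10.0 * p    # hypothetical supply curve
    return demand - supply

def clearing_price(lo=0.01, hi=100.0, tol=1e-9):
    # Excess demand is positive at lo and negative at hi, so bisect.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if excess_demand(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

clearing_price()  # ~3.1623, i.e. sqrt(10), where 100/p == 10*p
```

Whether such centralized computation is *practical* at economy scale — the space and time caveat — is of course the substantive question.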

Arnob
March 13, 2018 at 3:33 pm
Or to steal a quote from von Neumann:

“Many people are fond of saying, ‘They will never make a machine to replace the human
mind – it does many things which no machine could ever do.’ A beautiful answer to this
was given by J. von Neumann in a talk on computers given in Princeton in 1948, which
the writer was privileged to attend. In reply to the canonical question from the audience
(‘But of course, a mere machine can’t really think, can it?’), he said: You insist that
there is something a machine cannot do. If you will tell me precisely what it is that a
machine cannot do, then I can always make a machine which will do just that!”

Replace 'human mind', 'thinking', etc. with 'economy'.

Barkley Rosser
March 13, 2018 at 4:53 pm
Not the issue, Arnob. It is a computer that is trying to compute itself computing itself
computing itself...

Arnob
March 13, 2018 at 7:41 pm
Many computers/computer programs do that. It is called recursion.

Barkley Rosser
March 13, 2018 at 8:20 pm
Except, Arnob, if the recursion does not have a finite stopping point, which is arguably
the case here, then stopping becomes arbitrary, as already noted.

Barkley Rosser
March 14, 2018 at 5:15 pm
Arnob,

BTW, I have to thank you for making me aware that computers can sometimes do
recursion, something I have known nothing about, although I have for quite a few
decades been aware of "Some properties of conversion," Transactions of the American
Mathematical Society, May 1936, 39(3), 472-480, by Alonzo Church and J. Barkley
Rosser.

The Wart
March 13, 2018 at 3:38 pm
I would take another tack. "Running an economy" is simply not the kind of task AIs are
good at. Modern machine learning techniques are useful when all the necessary
information is available in the data, but some complex nonlinear function is required to
extract and use that information. That's not the case with the economy; even if you had
ALL THE DATA, an AI wouldn't be able to determine the underlying causal system
from that data. It would have to run experiments, and then those would run into the
exact same problems that human economists do with RCTs. Machine Learning does not
magic away the basic challenges of statistical inference.

Known Fact
March 13, 2018 at 4:00 pm
Maybe it would help to start small at first and see if AI could run a Dunkin Donuts or
coach the Miami Dolphins

James Babcock
March 13, 2018 at 4:50 pm
This is not good futurism. Whether there ends up being one AI that is much more
powerful than everything else (a Singleton) or many AIs of comparable power ("multi-
polar") is one of the central questions of AI forecasting, and the consensus prediction is
a big question mark. This response takes multi-polar as an unargued axiom.

Dominik Lukeš
March 13, 2018 at 5:32 pm
I think this is going in the wrong direction. A better question to ask is: can an AI central
planner outperform a human central planner, and can it outperform the market process?

A good Hayekian would say that the market will outperform any planner. But that is not
taking transaction costs into account. So the hypothetical AI-controlled economy
could produce results preferable to the market despite greater inefficiency, thanks to
generally lower (or more palatable) transaction costs. But the question remains whether
such an AI is possible. I doubt we have to anticipate some artificial god-like
superintelligence, but it is possible that we can unbundle some of the tasks and find that
AI-assisted central planners can outperform traditional central planners, and in some
cases even the market, in efficiency.

I don't think it matters that there are other AIs involved or that they even add that much
complexity. If everybody was using AI, complexity would probably decrease because
AIs are much more predictable.

I addressed some of these issues here (albeit in a different context):


https://medium.com/metaphor-hacker/learning-vs-training-in-machines-and-organizations-production-of-knowledge-vs-production-of-f1f9e6c1d9f3

Alistair
March 14, 2018 at 12:43 am
+1. For a possible but not plausible central planner of the Future.

Gabe Harris
March 13, 2018 at 7:24 pm
It is well known that AI central planning algorithms systematically allocate too many
resources to "computer maintenance", buying the most expensive high tech liquid
cooling systems and fancy dust filtration devices when a simple fan works just fine.

ChrisA
March 14, 2018 at 5:59 am
You could try a Monte Carlo approach with VR. For every decision, simulate a
subsection of society (or all of humanity if your processing capacity allows it) under
each option. Select the alternative with the most "utils" and implement the decision. Of
course, there may be millions of decisions needed every day, so you would need a lot of
processing power, but hey, this is fantasy after all.
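This proposal is essentially Monte Carlo policy evaluation. A toy sketch (the "society simulator" below is just a noisy stand-in with invented numbers): run many rollouts per option, average the utils, and pick the best:

```python
import random

# Toy sketch of the Monte Carlo idea (all numbers invented): for each
# candidate decision, run many simulated rollouts of a stand-in
# "society model", score each rollout in utils, and pick the best
# average.

def simulate_utils(decision, rng):
    # Placeholder for a society simulation: noisy utility per rollout.
    base = {"option_a": 10.0, "option_b": 12.0}[decision]
    return base + rng.gauss(0.0, 3.0)

def choose(decisions, rollouts=2000, seed=0):
    rng = random.Random(seed)
    def average_utils(d):
        return sum(simulate_utils(d, rng) for _ in range(rollouts)) / rollouts
    return max(decisions, key=average_utils)

choose(["option_a", "option_b"])  # picks the option with the higher mean
```

The hard part is, of course, the simulator itself, which is precisely what no one knows how to build.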

Bill Nichols
March 14, 2018 at 11:03 am
I don't think "AI running the economy" has been clearly defined, and I see some
related fundamental problems.
1) AI is good at taking data and finding trends and relationships. In an economy, this
information comes from individual decisions. To the extent that AI is making the
decisions, the source of information is lost and the model becomes frozen. This
becomes especially problematic wrt technological advances, new resources, resource
depletion.
2) the issue is not just technical, but one of values and trade-offs. These are not static,
preferences change for a variety of reasons.
3) AI is backward-looking, learning from the past, so it is not robust to changes in the
environment. Perhaps future AI will be better, but cycle back to point 1: some
mechanism to update the model is required.

Neil Baxter
March 14, 2018 at 5:06 pm
I'm sure AIs would find the solution to the economic problems in the same way as
Stalin and Mao - make sure that everyone consumes that which the planners make, and
make sure that the consumers are not allowed to have anything but that which the
planners allow. Then make sure they are unwilling to complain about it. If not enough is
produced, reduce the population. No matter how smart the AIs are, they will not do a
better job than the producers and consumers themselves - especially when it comes to
the creation of new products. The market is an enormous epistemological tool for
solving the economic problems of when, where, how, how much, etc. It measures real
wants and real solutions by means of voluntary arbitration.

oudeicrat
March 16, 2018 at 5:04 am
Trying to plan the economy centrally is like controlling everyone's heartbeat and
breathing centrally. Why even bother when people can do it themselves on an individual
level?

Tom G
March 19, 2018 at 11:11 pm
Maybe before running the whole economy ... robots and AIs can start running The
Government. Replace most gov't workers with AI robots, and have the robots help
customers/citizens fill out forms and track them.

Gov't is complex, too ... but in theory, all the regulations are clearly knowable by AIs,
so gov't service should be automated.


AI: The Central Planning Fallacy on Steroids?

Posted on September 2018 by MWBP
Members of the central planning committee visited Jozsi in his village, wearing their nice, long, brown leather
coats. 
“How many piglets will be born, Józsi?”
“Well… You know, no one can tell.”
“Shut up, you dirty peasant! Tell us the number! How many piglets will be born?”
“Well, I don’t kn…”
Bang, they slapped Józsi on the face.
“Oh my, my… How many does it have to be?” cried Józsi. “How many is the Plan, comrades?”
“14.”
“Then that’s how many will be born. But… did you tell the pig?”
*

Thus started the joke of Hofi, the only comedian who was permitted to crack such jokes in communist Hungary. (For
the rest of the joke scroll down to the end of the post.) According to the late communist regime, jokes like this could
serve as a pressure valve to let the steam out. And why was there so much steam? Because the population had to
endure the hardship imposed by communist central planning. If they could crack (pre-approved) jokes about it, they
might be able to endure it in a slightly better mood. The nonsensical nature of central planning also featured in films
made at that time.
The joke about the piglets told, in layman's terms, that the Five-Year Plan was arbitrary: it prescribed things that were
not even doable, perhaps not even relevant, just quantifiable, and it incentivized (if not forced) everyone to report
false statistics, on the basis of which the central planners made awful decisions on behalf of the entire country. But
would those decisions be better if they knew the exact number of piglets?

Sadly, no. But this is the fallacy many commit when they welcome AI as the new central planning authority. They
admit that central planners of yesteryear were limited, but not that central planning itself was wrong. They compare
modern-day surveillance and digital data-gathering to the greasy papers and pencils that Józsi, the peasant, used to
report the number of piglets, and conclude that AI is superior at collecting data. As for the decisions based on that
data, AI-planning proponents still harbor a completely unfounded assumption that (1) it will be benevolent and
(2) this time it will be different.


With the emergence of artificial intelligence, hopes that things (starting with the economy) can be dictated
from the top down have resurfaced with renewed fervor and justification. The age-old fallacy that central planning is
superior to individual planning and decentralized coordination now has the glorified excuse of machines sweeping
through much the same data as overconfident economists and politicians have always had.
The first mistake central planners make is mistaking data for knowledge.
Central planners only measure things they can quantify – and set goals accordingly: pigs, horses, and metric tons of
coal and wheat. But that is not a proper description of the economy. Statistics is only a placeholder for knowledge.
Counting everything we can is as close as any government can get to the god-like, all-knowing image they project
of themselves. But statistics are a very poor measure of success, even before they are manipulated. And politically
relevant numbers are always manipulated.

The second mistake is mistaking machine-collected data for unbiased knowledge. Another, more obvious
mistake stems from the nature of statistics, starting with poor collection methods and misunderstanding what the data
represent and what they don't, and culminating in the politically motivated manipulation of any set of statistics under
political attention. Goodhart's law asserts that whenever a statistical measure becomes a political target, it ceases to
be a useful measure. The wrong incentives apply not only to those who collect the data, but also to those who handle
the data. They use the poor input and fire up the statistical fog machine, all in order to get published, to get quoted, to
get elected, to appease their superiors with easy-looking solutions, etc.
In short, people count what they can, look for what they want to find, and find what they want in any data. And
that is before political oppression, such as socialism or nationalist central planning comes into play and incentivizes
people such as Józsi to muddle the numbers.
Proponents of ultimate central planning by AI naturally dismiss the problem of poor source data, because
machines soak up metadata, not survey results.

They are also dismissive of the possibility of meddling with data, because AI supposedly collects data we didn’t
know we create. For now.

But AI will also rely on already existing data – and if anyone has ever had even the slightest business with
government statistics, they should know that they are not only politically motivated (even in civilized countries), but
that they even change retroactively. And finally, proponents of machine omnipotence keep forgetting that data is a
commodity: it has an owner and a price, and access to it is selective.

Machines also have limitations in collecting data regarding the offline world – and if privacy is to be taken seriously,
they should have even more of those limitations. And finally, data-crunching AI policymakers will have to deal with
the problem of access – data that is owned by someone else, data that is not sold or too expensive.


A more effective enforcement of planning may also prove bad for its own survival, bringing on disasters much
faster than incompetent central planning comrades ever could. In countries oppressed under the mirage of communist
central planning, citizens enforced reality and logic by any small and illegal means they could. Individuals
struggling to make do under oppression in order to survive sadly also prolong that oppression.
The central planning fallacy is deeply rooted in the authoritarian mind.
Six characteristics particularly predispose authoritarian thinkers to resort to the central planning approach:

1) The locus of identity resting outside of the authoritarian thinker’s mind.


It is often vested in the point of view of the leader, while the populations under leadership are mentally dehumanized.
Whenever we discuss politics and policy, we all think and speak like tiny gods, hovering above society and
proposing fixes from a god's-eye view. We all think with the political leader's head and propose what we would
do in his place. Ban this, tax that. Dish out sticks and carrots, incentives and punishments, to peasants (such as
ourselves).
We don’t just dehumanize each other in the process, but sinfully simplify the world. And as a consequence of this
thinking method, we end up allowing central planners to meddle in our lives with the same simplistic tools and
condescending considerations.

An AI, no matter what kind of unlimited surveillance data it gets access to, will be just the same simplifying force on
humanity. And of course, it won't do it by itself; there is no such thing as a machine obtaining a human-like will to
power. Politicians will use it to further their own goal: keeping power.
2) Authoritarian thinkers subscribe to a static world view, most prominently in the form of having a desirable end
game in mind.
The means-to-an-end fallacy comes to authoritarian thinkers naturally. We all have a view of how the world should
be (the end), but not all of us commit the fallacy that any means are okay as long as they lead to that glorious end state.
One of the most spectacular ends in intellectual history was the Socialist Revolution put forth by Marx; no wonder it
led to the most damaging and destructive central planning experiment in human history: communism. But if you
think that only self-proclaimed lefties have this mind bug, you will be disappointed. The visions of an "all-Christian
Europe" or a "homeowner society" are also cases in point, to name just two.
3) Authoritarian thinkers may be loud and belligerent, but the reason they so forcefully demand a strongman is the
underlying sensation of feeling helpless regarding their own life.

Central planning lends itself as a (false) solution to this problem. People unthinkingly try to use the facilities of the
state to influence each other, and are therefore exposed to any ruler who promises to take care of things. Whether by
meddling in other people's wallets or life choices – it is only a matter of taste. Ideology is only a superficial
justification for this underlying behavior.

4) Authoritarian thinkers are also prone to covet some type of homogenization of society,
…either by economic strength or by lifestyle. Nothing says homogenization like a central planning authority that
massages the population until we are all the way we are supposed to be. Conservative meddling with other people's
love lives and socialist meddling in other people's wallets are equally distasteful and authoritarian, and neither
stands for individual liberty. From the viewpoint of citizen empowerment and control over our own lives, they are
both equally damaging. The justifications are just a beauty patch on a big, nasty central planning effort, and they
never hold water.
5) Authoritarian thinkers are fond of order and status to the detriment of opportunity, and nothing says hierarchy like
a central organizing force. Actually, nothing else says so.

6) Authoritarian thinkers are all trying – one way or another – to separate choice from responsibility.
Either by taking away choices but making people bear the consequences (what you call ‘right wing’), or by leaving
choices wide open, sometimes beyond the scope of possibility, but letting people get away without the consequences
(what you call ‘left wing’).

Banksters gamble and have found a way to socialize losses; basic income believers also want all the choices but not
the economic consequences. Fundamentalist societies deny most choices in private life while leaving people to
cope with the consequences – like an unhappy life, an abusive marriage, or unwanted parenthood – alone. Their
opposition wants people to have all the choices (today a chef, tomorrow an artist), but refuses to assign individual
responsibility for such choices. The examples are endless.

Central planning is just a very complex way of achieving the same, it promises whichever wonderland you wish to
inhabit: the endless choices without responsibility, or the lack of choices but full responsibility.

These thinking characteristics make it all but inevitable that such an individual will adopt the central planning
approach. And AI will be just as politically motivated as the humans who wield it in their own interest – only perhaps
better at it. When it comes to recognizing and allowing diverse strategies, trials and failures, innovation and non-
typical life strategies, AI can be only slightly better than humans. Knowing which butterfly wing to flap in order to
effect laser-sharp change exactly where it is deemed necessary (by whom, according to what priority?) is still out of
reach. But the humans who can't are already hot on the promise of outsourcing decisions to it. That tells more
about them than it tells about AI.

The joke of Hofi ended on a less than amusing note – even though people laughed at it out of helpless desperation.

As time passed, the piglets were born. But the pig birthed only 10 piglets. The Party secretary was frightened:
“Dear Holy Mary, what shall we do now? Only 10 piglets were born, but the Plan was 14. It may even be called a
sabotage, what will happen to us?”
“I can’t make piglets” said Józsi. “But I can make statistics – Hungarian style! I’ll report 11 piglets. It’s not ten,
after all. And it’s almost 14!” 
The paper with the statistics moved up to the commune level. 
“It’s outrageous, comrades! We can’t do this to the workers of the commune! So we will report 12 piglets.”
12 piglets it is, the Plan is going according to plan, and the report arrives at the district.
“12?” they said. “Comrades, that’s not enough. We will report 13.”
Report keeps moving up. 
“13, comrades? It makes me sad. Wasn’t there one more?”
So they report 14. The Plan is complete. Long live Comrade Rákosi! 
Comrade Rákosi, the Party Secretary of the Hungarian Communist Party addresses the workers:  
“I am pleased to welcome the world class quality results of socialist production, the 14 piglets. So we have decided
to export 10 out of the 14 piglets – and we eat the rest!”

Crisis of Socialism and Effects of Capitalist Restoration
by Paul Cockshott
(Apr 01, 2020)

Topics: Capitalism , Marxism , Movements , Political Economy , Socialism , Strategy
 Places: Europe , Soviet Union (USSR)
Tractor factory in the Soviet Union in 1972. Photo: Henri Cartier-
Bresson, Magnum.
PAUL COCKSHOTT is a computer engineer working on computer design and teaching
computer science at universities in Scotland. Named on fifty-two patents, his research
covers robotics, computer parallelism, 3D TV, foundations of computability, and data
compression. His books include Classical Econophysics (Routledge, 2009)
and Computation and Its Limits (Oxford University Press, 2012).
This article is an excerpt from Cockshott’s latest book, How the World Works (Monthly
Review Press, 2020).
The main criticism leveled at the socialist economies was that a planned
economy was inherently less efficient than a market one, due to the sheer scale
of the bureaucratic task involved with planning a major economy. If there are
hundreds of thousands, or perhaps millions, of distinct products, no central
planning authority could hope to keep track of them all. Instead they were
forced to set gross targets for the outputs of different industries. For some
industries like gas or electric power, this was not a problem. Electricity and gas
are undifferentiated, a kilowatt is a kilowatt—no argument. But even for another
bulk industry like steel, there was a wide variety of different rolled plates and
bars, different grades of steel with different tensile strength, etc. If the planners
could not keep track of all these different varieties and just set rolling mills
targets in tons, the mills would maximize their tonnage of whatever variety was
easiest to produce.

The steel example is a little forced, since this degree of differentiation was still
fairly readily handled by conventional administrative means. Tonnage targets
could still be set in terms of distinct types of steel. But when you turn to
consumer goods—clothes, crockery, etc.—the range of products was too big
and targets were set in terms of monetary output.

The plan would specify a growth in the value of output of clothing, furniture, etc.
What this translated to then depended on the price structure. In order to prevent
other forms of gaming the plan by enterprises, it was important that the prices
were economically realistic. If the price for chairs is set too high compared to
tables, it becomes rational for factories to concentrate on chair production.

By resorting to monetary targets, the socialist economies were already conceding part of Ludwig von Mises's argument. They were resorting to the
monetary calculation that he had declared to be vital to any economic
rationality. Liberal economists argue that it was impossible for planners to come
up with a rational set of prices, as only the competitive market could do so.
Planning required aggregation. Aggregation implied monetary targets. Monetary
targets required rational prices. Rational prices required the market. But if you
had the market you could dispense with planning. Planning dialectically implied
the supersession of planning.

It is worth noting that this is a largely theoretical argument. It was, in late Soviet
days, backed up with lots of anecdotal evidence, but empirical evidence for the
greater macroeconomic efficiency of markets even when compared to classical
Soviet planning is on much thinner ground. As Robert C. Allen shows, the only
capitalist economy whose long-term growth rate exceeded that of the USSR
was Japan, whose own model was distant from unplanned capitalism.
Compared to other countries starting out at the same economic level in the
1920s, the USSR grew considerably faster. One could argue that this was due
to macroeconomic advantages of planning, that is, by removing uncertainty
about future market demand it encouraged a higher level of investment. It is
possible that this macroeconomic advantage outweighed any microeconomic
inefficiency associated with plans.

The strongest evidence that markets may perform better than plans would come
from China, and that certainly is the orthodox Chinese view. Their claim is that a
socialist market economy avoids the macroeconomic instability of capitalism
while harnessing the microeconomic efficiency of the market. As evidence, they
cite a higher rate of growth after Deng Xiaoping’s restructuring. But China since
Deng has followed a mercantilist road. It has the effect of beggaring the workers
of China whose products are exported to the United States in return for U.S.
paper. The latter is of no benefit for the Chinese workers, though it does enable
private Chinese companies to buy up assets in the United States. From the
standpoint of the Chinese state, it is a more nuanced issue. Chinese state
companies can buy up overseas firms, but whether this is a long-term
advantage is a moot point since real goods that could have been used to
improve the Chinese economy and living standards have been sacrificed.

Historically, the process of having an export-led economy allowed China to avoid the technology bans that the West imposed on the USSR, allowing rapid
catch-up in manufacturing techniques. Now that China is overtaking the United
States in some areas of mass production, that advantage is less clear, and a
shift toward higher domestic consumption and higher wages makes sense, and
is indeed being followed in China, unlike Germany. It could be that the growth
advantage that China experienced post-Deng owed a lot to a new ability to
import the latest productive techniques instead of microeconomic efficiency. But
what is abundantly clear is that the pro-market restructuring had the effect of
drastically widening economic inequalities and giving rise to a new domestic
billionaire class. This, in turn, produces political pressure to extend private
ownership and undermine the still-dominant position of state industry.

So the question arises, could a planning system work in a modern economy with a highly diversified product range, and how would it overcome the socialist
calculation argument of Mises? I and others have since the late 1980s been
arguing that the answer is yes.
The Mises critique of socialism focused on the need to compare the costs of
alternative ways of making things. Unless you can do that you cannot choose
the most efficient. Our response has been not only that labor time in principle is
an alternative, which Mises conceded, but that with modern computer
technology it is perfectly possible to maintain up-to-date figures for the labor
cost of each input to the production process. Using these, workplaces will have
data that are as good as prices for choosing between techniques.
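The labor-cost bookkeeping described here is, in standard input-output terms, the system v = vA + l, solved as v = l(I − A)⁻¹. A minimal sketch with an invented two-sector coefficient table (an illustration of the calculation, not Cockshott's own software):

```python
import numpy as np

# Sketch of labor-value bookkeeping in input-output terms: total
# (direct + indirect) labor per unit of each good solves v = v A + l,
# i.e. v = l (I - A)^-1. The two-sector coefficients are invented.

A = np.array([[0.2, 0.3],   # A[i, j]: units of good i used per unit of good j
              [0.1, 0.4]])
labor = np.array([1.0, 2.0])  # direct labor hours per unit of output

v = labor @ np.linalg.inv(np.eye(2) - A)  # total labor embodied per unit

# Defining identity: value = direct labor + labor embodied in inputs.
assert np.allclose(v, labor + v @ A)
```

Keeping such figures current amounts to re-solving this system as the coefficients change, which is computationally cheap even at large scale since the matrix is sparse.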

There are limitations to labor values as there are to any scalar measure like
price, since the constraints on production are multifactorial. Not only labor
power, but also natural resources and ecological considerations constrain what
we can make. No single scalar measure can handle this. But the problem of
how to deal with multiple constraints like this was already solved by socialist
economics way back in the 1930s. L. V. Kantorovich came up with a completely
general technique for how to meet a socialist plan subject to constraints
additional to labor time.[1] His method is a form of in-kind calculation, that is, non-monetary. It was not practical to use it at the level of the whole Soviet economy during his lifetime, as the computing resources were too poor, but by the 1990s computers were up to the job.[2]
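Kantorovich's technique is what is now called linear programming. A toy sketch with invented coefficients: choose the outputs of two goods to maximize the plan objective subject to labor-time and land constraints, entirely in kind. For two goods the optimum sits at a vertex of the feasible region, so simple vertex enumeration suffices:

```python
from itertools import combinations

# Toy illustration of Kantorovich-style in-kind planning as a linear
# program (all coefficients invented): pick outputs (x, y) of two goods
# to maximize the plan objective subject to resource constraints.
# Constraints are a*x + b*y <= c; non-negativity is written the same way.
constraints = [
    (2.0, 1.0, 100.0),   # labor hours available
    (1.0, 3.0, 90.0),    # hectares of land available
    (-1.0, 0.0, 0.0),    # x >= 0
    (0.0, -1.0, 0.0),    # y >= 0
]
objective = (3.0, 2.0)   # plan valuation of each good

def vertices():
    # Intersect every pair of constraint boundaries; keep feasible points.
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            yield (x, y)

best = max(vertices(), key=lambda p: objective[0] * p[0] + objective[1] * p[1])
# best == (42.0, 16.0): both resource constraints bind, no prices needed
```

At economy scale one would use a production solver rather than vertex enumeration, but the structure of the calculation — physical constraints in, physical plan out — is the same.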

So the basic problem of socialist economic calculation without money had been
solved since Mises wrote. It was impractical in the USSR for two reasons: (1)
the computer technology was not there; (2) it would have involved replacing
money calculation and payment with nontransferable labor accounts. This
would have been a radical step toward greater social equality.

The collapse of the Soviet and later the Russian economy under Mikhail
Gorbachev and then Boris Yeltsin was an economic disaster that was otherwise
unprecedented during times of peace. The world’s second superpower was
reduced to the status of a minor bankrupt economy with a huge decline in
industrial production and in living standards. Nothing brings out the scale of the
catastrophe better than the demographic data pointing to the huge rise in the
mortality rate, associated with the increased poverty, hunger, homelessness,
and alcoholism resulting from the catastrophe itself (Table 1).

Table 1. Excess Deaths as a Consequence of the Introduction of Capitalism in Russia

Year    Deaths (thousands)   Excess relative to 1986
1986     1,498                    0
1987     1,531                   33
1988     1,569                   71
1989     1,583                   85
1990     1,656                  158
1991     1,690                  192
1992     1,807                  309
1993     2,129                  631
1994     2,301                  803
1995     2,203                  705
1996     2,082                  584
1997     2,105                  607
1998     1,988                  490
1999     2,144                  646
2000     2,225                  727
2001     2,251                  753
2002     2,332                  834
2003     2,365                  867
2004     2,295                  797
2005     2,303                  805
2006     2,166                  668
2007     2,080                  582
2008     2,075                  577
2009     2,010                  512
Total   48,388               12,436

Notes: Figures amount to some 12 million deaths over twenty years. Source: “Death Rate,
Crude (per 1,000 people) – Russian Federation,” compared with total population, 1986–
2009, World Bank, available at http://data.worldbank.org.

In determining what caused this, one has to look at long-term, medium-term, and short-term factors that led to relative stagnation, crisis, and then collapse.
The long-term factors were structural problems in the Soviet economy and
required reforms to address them. The actual policies introduced by the
Gorbachev and Yeltsin governments, far from dealing with these problems,
actually made the situation catastrophically worse.

Long Term
During the period from 1930 to 1970, and excluding the war years, the USSR
experienced rapid economic growth. There is considerable dispute about just
how fast the economy grew, but it is generally agreed to have grown
significantly faster than the United Kingdom between 1928 and 1975, with the
growth rate slowing down to the UK level after that. This growth took the USSR
from a peasant country, whose level of development had been comparable to
Brazil in 1922, to becoming the world’s second industrial, technological, and
military power by the mid–1960s.

A number of reasons contributed to this relative slowdown in growth in the latter
period. It is easier for an economy to grow rapidly during the initial phase of
industrialization when labor is being switched from agriculture to industry.
Afterward, growth has to rely on improvements in labor productivity in an
already industrialized economy, which are typically less than the difference in
productivity between agriculture and industry.

A relatively large portion of Soviet industrial output was devoted to defense,
particularly in the latter stages of the Cold War, when the USSR was in competition
with Ronald Reagan's "Star Wars" programs. The skilled labor used up for
defense restricted the number of scientists and engineers who could be
allocated to inventing new and more productive industrial equipment.

The United States and other capitalist countries imposed embargoes on the
supply of advanced technological equipment to the USSR. This meant that the
USSR had to rely to an unusually high degree on domestic designs of
equipment. In the West, there were no comparable barriers to the export of
technology so the industrial development of the Western capitalist countries
was synergistic.

Although Soviet industrial growth in the 1980s slowed down to U.S. levels, this
by itself was not a disaster; after all, the United States had experienced this sort
of growth rate (2.5 percent a year) for decades without crisis. Indeed, while
working-class incomes in the United States actually stagnated over the 1980s,
in the USSR they continued to rise. The difference was in the position of the
intelligentsia and the managerial strata in the two countries. In the United
States, income differentials became progressively greater, so the rise in
national income nearly all went to the top 10 percent of the population. The bulk
of the working class in the United States has seen its income stagnate for half a
century. In the USSR, income differentials were relatively narrow, and while all
groups continued to experience a rise in incomes, this was much smaller than
had been the case in the 1950s and ’60s. This 2.5 percent growth was
experienced by some of the Soviet intelligentsia as intolerable stagnation—
perhaps because they compared themselves with managers and professionals
in the United States and Germany. A perception thus took root among this class
that the socialist system was failing when compared to the United States.

Again, this would not have been critical to the future survival of the system were
it not for the fact that these strata were disproportionately influential within the
USSR. Although the ruling Communist Party was notionally a workers’ party, a
disproportionately high proportion of its members were drawn from the most
skilled technical and professional employees, and manual workers were
proportionally underrepresented.

The slowdown in Soviet growth was in large measure the inevitable result of
economic maturity, a movement toward the rate of growth typical of mature
industrial countries. A modest program of measures to improve the efficiency of
economic management would probably have produced some recovery in the
growth rate, but it would have been unrealistic to expect the rapid growth of the
1950s and ’60s to return. What the USSR got, however, was not a modest
program of reform, but a radical demolition job on its basic economic structures.
This demolition job was motivated by neoliberal ideology. Neoliberal
economists, both within the USSR and visiting from the United States, promised
that once the planning system was removed and once enterprises were left free
to compete in the market, then economic efficiency would be radically improved.

Medium Term
The medium-term causes of Soviet economic collapse lay in the policies on
which the Gorbachev government embarked in its attempts to improve the
economy. The combined effect of these policies was to bankrupt the state and
debauch the currency.

One has to realize that the financial basis of the Soviet state lay mainly in
the turnover taxes it levied on enterprises and in sales taxes.

In an effort to stamp out the heavy drinking that led to absenteeism from work
and to poor health, the Gorbachev government banned alcohol. This and the
general tightening up of work discipline led, in the first couple of years of his
government, to some improvement in economic growth. It had, however,
unforeseen side effects. Since sales of vodka could no longer take place in
government shops, a black market of illegally distilled vodka sprang up,
controlled by the criminal underworld. The criminal class that gained money and
strength from this later turned out to be a most dangerous enemy.

While money from the illegal drinks trade went into the hands of criminals, the
state lost a significant source of tax revenue, which, because it was not made
up by other taxes, touched off an inflationary process.

Were the loss of the taxes on drinks the only problem for state finance, it could
have been solved by raising the prices of some other commodities to
compensate. But the situation was made worse when, influenced by the
arguments of neoliberal economists, Gorbachev allowed enterprises to keep a
large part of the turnover tax revenue that they owed the state. The neoliberals
argued that if managers were allowed to keep this revenue, they would make
more efficient use of it than the government.

What actually ensued was a catastrophic revenue crisis for the state, which was
forced to rely on the issue of credit by the central bank to finance its current
expenditure. The expansion of the money stock led to rapid inflation and the
erosion of public confidence in the economy. Meanwhile, the additional
unaudited funds in the hands of enterprise managers opened up huge
opportunities for corruption. The Gorbachev government had recently legalized
worker cooperatives, allowing them to trade independently. This legal form was
then used by a new stratum of corrupt officials, gangsters, and petty
businessmen to launder corruptly obtained funds.

Results
Liberal theory held that once enterprises were free from the state, the “magic of
the market” would ensure that they would interact productively and efficiently for
the public good. But this vision of the economy greatly overstated the role of
markets. Even in so-called market economies, markets of the sort described in
economics textbooks are the exception restricted to specialist areas like the
world oil and currency markets. The main industrial structure of an economy
depends on a complex interlinked system of regular producer-consumer
relationships in which the same suppliers make regular deliveries to the same
customers week in, week out.

In the USSR, this interlinked system stretched across two continents and drew
into its network other economies: Eastern Europe, Cuba, North Vietnam.
Enterprises depended on regular state orders, the contents of which might be
dispatched to other enterprises thousands of miles away. Whole towns and
communities across the wilds of Siberia relied on these regular orders for their
economic survival. Once the state was too bankrupt to continue making these
orders, once it could no longer afford to pay wages, and once the planning
network that had coordinated these orders was removed, what occurred was
not the spontaneous self-organization of the economy promised by liberal
theory, but a domino process of collapse.

Without any orders, factories engaged in primary industries closed down.
Without deliveries of components and supplies, secondary industries could no
longer continue production, so they too closed. In a rapid and destructive
cascade, industry after industry closed down. The process was made far worse
by the way the USSR split into a dozen different countries each with their own
separate economy. The industrial system had been designed to work as an
integrated whole; split up by national barriers it lay in ruins.

The figures in Table 2 show how far the economy had regressed in 2003. These
figures show how little recovery there had been, even after thirteen years of
operation of the free market. If the economy had continued to grow even at the
modest rate of the later Leonid Brezhnev years, say 2.5 percent, then industrial
production would, on this scale, have stood at 140 percent of 1990 levels. The
net effect of thirteen years of capitalism was to leave Russia with half the
industrial capacity that could have been expected even from the poorest
performing years of the socialist economy.
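The counterfactual in the last sentence is straightforward compound growth. The arithmetic (a sketch added here, not part of the original text) runs as follows:

```python
# If output had kept growing at 2.5% per year from 1990 to 2003 (13 years),
# indexed so that 1990 = 100:
counterfactual = 100 * 1.025 ** 13
print(round(counterfactual))          # 138, which the article rounds to 140

# Actual 2003 output stood at 66 on the same index (Table 2), i.e. roughly
# half the counterfactual level:
print(round(66 / counterfactual, 2))  # 0.48
```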

Table 2. Output of Selected Branches of Industry in Russia in 2003 Compared to 1990 (1990=100)

Industry                        Output
Total Industry                  66
Electric Power                  77
Gas                             97
Oil Extraction                  94
Oil Refining                    70
Ferrous Metallurgy              79
Non-Ferrous Metallurgy          80
Chemicals and Petrochemicals    67
Machine Building                54
Wood and Paper                  48
Building Materials              42
Light Industry                  15
Food                            67

Source: Table 14.3, "Indices of Production Output by Branches of Industry (1990=100),"
in Russia in Figures (Moscow: Russian Federal State Statistic Service, 2004), available at
http://eng.gks.ru.

Notes
1. The original paper was L. V. Kantorovich, "Mathematical Methods of Organizing
and Planning Production," Management Science 6, no. 4 (1960): 366–422. I explain for a
modern readership how his technique worked in "Von Mises, Kantorovich and In-Natura
Calculation," European Journal of Economics and Economic Policies: Intervention 7, no. 1
(2010): 167–99.
2. For a good layperson's introduction to the use of computers in Soviet planning,
see the novel Red Plenty by Francis Spufford (London: Faber and Faber, 2010).
Some contemporary books on socialism
Someone asked for a list of contemporary books on socialism, and I gave them this answer
as a response. These are just some books that came to mind while writing the comment;
there is much more I could recommend here.

Self-Realization and Justice - Julia Maskivker
Julia Maskivker believes that employment is a limitation of effective self-ownership. (She is
partly inspired by the ideas of Serena Olsaretti, who was taught by G. A. Cohen and who
developed Cohen's ideas, especially his views on freedom and self-ownership, in an ingenious
way.) Maskivker asserts, without making a case for the claim, that a society without employment
is technically impossible. That is probably due to the calculation problem, on which I also
have plenty of literature. (Regarding this I recommend the responses to it by Pat Devine,
Nigel Pleasants, and John O'Neill. If you are interested, I am happy to provide a reading
list on that.)
She makes a good case against UBI while arguing for a general right to freedom from
employment, weighing the relative importance of "free" time and monetary compensation.
All this is set in a somewhat Aristotelian perfectionist framework.
Property and Contract in Economics - David Ellerman
David Ellerman argues that the employment contract is a fraud and that it ought to be
abolished for the same reasons that slave contracts or coverture marriage contracts are
wrong, while at the same time making a case for workplace democracy. Indirectly, he traces
the argument back to the argument from alienation. At times it seems archaic, but it is
understandable. Many authors make a similar version of the argument while often ignoring
its implications for wage labour; I recommend reading them too if you are interested in
this. (Rainer Forst, Daniel Attas, and Carole Pateman are good examples here.)
Private Government - Elizabeth S. Anderson
Elizabeth S. Anderson's book is more of a revival of the republican labour movement. Like
Honneth, she sees (part of) the roots of the labour movement in classical liberalism. The
language we continue to use from these historical ideas is no longer in alignment with the
ideas of that time. The classical liberals believed in a rent-free market: free of
monopolists, bankers, landowners, and everyone else who makes the price of a product higher
than the cost of producing it. She further unfolds the authoritarian character of the
wage-labour relationship and why it is questionable. She doesn't go the last step and say
that this relationship is wrong in itself; however, Nicholas Vrousalis wrote a paper
critiquing her position for not going that last step. I recommend reading that too if you
read the book.
It should be mentioned that Maskivker, Ellerman, and Anderson don't see themselves as
"socialists," mostly, I believe, due to the general association of that term with the
Soviet Union and government control. Nevertheless, I find their views very interesting
and favorable to socialist thought.
Kantian Ethics and Socialism - Harry van der Linden
Harry van der Linden wrote a revival of a lost German socialist tradition: the Kantian
socialism of the Marburg school. It argues that workers are systematically used as a mere
means due to the structure of capitalism. Within that tradition he focuses on Hermann
Cohen, one of the leading figures of the school, perhaps the leading one. Hermann Cohen
believed that our ethical views must be the guiding principle of how our economic system
is organised for the realm of ends. Showing that workers are used as a *mere* means is
the sticking point (as a critique of capitalism); I believe it can be shown by analysing
the autonomy of both parties in the employment relationship. Either way, the positive
argument for socialism is largely independent of the critique of capitalism.
The Idea of Socialism: Towards a Renewal - Axel Honneth
Axel Honneth casts new light on utopian/romantic socialism. He gives a re-interpretation
and reconstruction of buried ideas of that tradition, which largely come from German and
French romanticism. He has a blind spot for the Ricardian socialists and the contractual
arguments, but otherwise he gives a good historical account of socialist thought. With
his idea of social freedom, Honneth argues that it isn't enough for socialists to argue
against alienated labour and heteronomy at the workplace; he believes they have to go
beyond the economic sphere and include personal life. With his reconstruction he attempts
to fix a mistake that (in his view) occurred at the birth of socialist thought.
Why Not Socialism? - G. A. Cohen
Gerald Allan Cohen makes a preliminary case for socialism in his short book Why Not
Socialism? (It was published posthumously, so it isn't written in the style Cohen used
when he felt a piece of writing was finished.) It puts our social relationships under
scrutiny and introduces two features of his socialism: an Equality of Opportunity
Principle and a Principle of Community. The first relates to the idea that what the
market registers and responds to is potentially inimical to justice (it responds, for
example, to people's ability to pay, and this ability is often influenced by factors for
which it would be unjust to hold people liable), and to the influence of (option) luck
in life. The other principle relates to a certain sense of community, solidarity, and
fraternity.
The book asks us what kind of relationships we would like to have (in terms of these
principles) by introducing the ideas through the example of a camping trip. I suggest
reading it alongside secondary literature, because it really does read as unfinished. I
recommend Serena Olsaretti's article "Rescuing Justice and Equality from Libertarianism,"
which makes these two principles compatible, and Alfred Archer's article "Community,
Pluralism and Individualistic Pursuits," which responds to common misconceptions of
Cohen's views. I would read at least these two after reading the book. (Nicholas
Vrousalis would be another very good suggestion here.) I would suggest more, but I have
already written too much on this.
This is just a follow-up to my previous comment. There is of course much more to write
about each book; I just thought I would mention what I think is the most important aspect
of each. None of these books is about the economic organisation of socialism; they relate
to normative theory. If one wishes for such a list, I am happy to write another one.
I also recommend reading the IEP article on socialism
(https://www.iep.utm.edu/socialis/) and the Stanford Encyclopedia entry on socialism
(https://plato.stanford.edu/entries/socialism/) if you are new to these ideas.
