“Economics can be harder than rocket science: the Soviet Union was great at rocket science”

EAG is a brilliant conference, and like many other examples throughout history it shows the value of gathering a large number of smart, driven people from many fields. You’re never quite sure what the chaos will produce, but it will always be interesting.

For me, one of the more fascinating talks at the conference happened off the agenda, when a group gathered together to discuss complexity science, and its implications for EA. 

There was far too much discussed there for a detailed breakdown, and many are already incorporating complexity science and its theories into their work, so this post may be of only partial value to some readers. However, the implications of complexity interacting with longtermism in particular are serious, and if we are working on systems as complex as humanity as a whole, we ignore them at our peril.

Complexity science covers many forms of analysis, broadly tied to a central thesis: "The whole is greater than the sum of its parts". It applies to animals, mathematics, the movement of planets, the functioning of politics, and any other system where the sum of seemingly linear subsystems and logical correlations suddenly produces emergent behaviour, often in fascinating and counterintuitive ways.

However, I am an economist, and while I have no great insight into the mathematics of chaos theory, we do think about these issues of complexity in our own way, which is sometimes very different from other fields.

There is a classic challenge to economics: name something true and non-trivial that the discipline has provided. Samuelson’s answer was Ricardo’s theory of trade, which was deeply counterintuitive to many, especially at the time, and profoundly changed the world. It is also an example where letting go of control and planning, in favour of chaos, led to vast gains in human material prosperity.

The original argument is elegantly simple. The United Kingdom can produce wool, to which it is well suited, and make wine, with low yields and poor taste due to its climate. Meanwhile Portugal can make excellent wine, but is unsuited to sheep varieties with heavy wool.

From the point of view of the United Kingdom there are two ways to make wine. Firstly, you can divert labour and capital away from sheep farming, at great cost and with limited results; or secondly, you can load wool onto a ship, which returns a month later loaded with far more wine than you could ever produce from the same resources. It may as well be a factory for turning wool into wine for all the United Kingdom cares, and importantly this process works even if Portugal is better at making both wine and wool.
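The arithmetic behind this can be made concrete. The figures below are Ricardo's own illustrative labour costs (with cloth relabelled as wool to match the story); a minimal sketch, not a historical model:

```python
# Ricardo's classic numbers: labour hours needed per unit of output.
# Portugal needs fewer hours for BOTH goods (absolute advantage in both),
# yet trade still pays, because the *relative* costs differ.
hours = {
    "UK":       {"wool": 100, "wine": 120},
    "Portugal": {"wool": 90,  "wine": 80},
}

# Opportunity cost of one unit of wool, measured in wine forgone:
oc_uk = hours["UK"]["wool"] / hours["UK"]["wine"]              # ~0.83 wine per wool
oc_pt = hours["Portugal"]["wool"] / hours["Portugal"]["wine"]  # ~1.13 wine per wool
# The UK gives up less wine per unit of wool -> comparative advantage in wool.

# Reallocate labour so that world wool output stays exactly the same:
# the UK moves 100 hours into wool (+1 wool), Portugal moves 90 hours out (-1 wool).
wine_lost_uk   = 100 / hours["UK"]["wine"]        # wine no longer made in the UK
wine_gained_pt = 90  / hours["Portugal"]["wine"]  # wine now made in Portugal instead

net_wine_gain = wine_gained_pt - wine_lost_uk     # extra wine, from specialisation alone
print(f"World wool: unchanged; world wine: +{net_wine_gain:.3f} units")
```

Holding world wool output fixed, the reallocation conjures extra wine out of nothing but specialisation, and this holds even though Portugal is outright better at both goods: the wool-to-wine factory in numbers.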

Governments today still struggle with this, attempting to micromanage the chaos of trade by adjusting tariffs and setting complicated quotas on thousands of line items. I once contracted on a project for a government in Sub-Saharan Africa which was doing exactly this. The country had set an import quota for sugar, managed by its state trading company, in an attempt to improve its balance of payments. When this led to high prices due to shortages, they set price limits; when this led to hoarding and smuggling, they took control of sugar retailing and distribution. Smuggling profits rose even higher, leading to a situation where sugar smuggling funds terrorist operations and ordinary people experience rationing, at prices well above world market rates. Unfortunately, the game of policy twister is still underway, with an expansion of state-owned sugar production to bridge the deficit. This is not going well. It almost certainly did not even improve the balance of payments, the original goal in some distantly remembered past.

The concept of letting go to achieve more is deeply counterintuitive and runs against our instincts, but it is a foundational aspect of core parts of classical economics. Chaos there is not just a force to be fought, but a force to be supported and gently guided in the service of social goals. As a result, you end up in an almost zen-like state, doing less to achieve more, and finding a way to let the fury of the river do the work for you rather than damming it to make a serene canal.

There is a great tendency in the world today to see a problem, then sit down and reason, almost in a dark room, about how you will solve it scientifically via policy and commands. This can drive great progress, and EA was founded on this to a degree. However, the failure modes of this thinking can be catastrophic, especially with radical and ambitious plans: many in the past have sat down with honest intentions of adding order to chaos for the betterment of mankind, but ended up doing the precise opposite. These ideas seemed obvious at the time, promising better lives through science or higher agricultural output. The reality did not conform to their expectations: they were plans designed for robots, not for the humans and natural world standing before them in all their vast complexity.

However, while chaos cannot and should not be contained completely, and while we should not presume to understand it fully, we also simply cannot leave it alone. This is true in economics, with its failures of the past and present; it is utterly foundational to the concerns around existential risks. Plans must be made for when the market fails: your social supports, antitrust laws, policies on externalities, and systems to prevent financial crashes and bank runs must be in place. These economic policies were almost all built upon the pain of past experiences, and scientific thinking, evidence and rationality were key elements of their design. For existential risks this raises an even larger problem, as we cannot work from past experience: it will simply be too late, and grand plans here are required.

So where does this leave us? Are we stuck between chaos being uncontrollable and often useful up until the point where it levels our world? 

I would argue not. While I’m perhaps stretching the zen analogy here, I believe it is a case where we need both the yin and the yang of chaos and order.

We need to be modest in what we know, understand that the world is not a chess board, and recognise that making use of chaos can be helpful: plans should be able to adjust, we should design systems to make use of knowledge no matter where it comes from, and we should be humble in our own understanding and constantly question our positions. A physicist may not be able to predict the outcome of a football match, but they may be able to predict what happens when the ball is kicked, and whether it will leave the stadium. That can still be useful.

At the same time we need to keep working towards the large goals attached to the movement, and accept that we must impose order on chaos, sometimes in situations where we only have reason to guide us. Here we must still look at the mistakes of the past in order not to repeat them; we may only have one opportunity to do so.

This is a fairly chaotic post, rather fittingly. I do not have all the answers, or even that many; I am working on one small part of EA, looking at food systems, where producing enough food is a necessary but not sufficient step in feeding everyone. It would be great to start a conversation on complexity across the movement: how we deal with it well, how we deal with it poorly, and where the example of others can help.

If you have any thoughts, please post them below. We should not stop having grand ambitions or even ivory-tower thinking.

However, at the same time I would suggest embracing the following thoughts, wherever possible:

  • Design systems that scale from a small base. If you have a grand plan that requires a complete system to function, but no way of getting there, you do not have a plan.
  • Systems must fail gracefully. We will be wrong many times, and must build that into our thinking.
  • Look at history and other fields: people have done surprisingly weird things in the past which may be relevant for you today. Want to see how a society with a judiciary and a legislature but no executive functioned? Want to know what happens when we lose a good chunk of our sunlight? Of course the world has changed, but history and its examples can be a very interesting start for our thinking, something Will MacAskill highlighted in his recent EAG speech.
  • The world is not a chessboard. We do not know all of the rules and probably never can, we cannot assume we can move the pieces at will, and I would be deeply concerned by a system that allowed that level of control.



Thank you Michael -- great post! I love this part:

Design systems that scale from a small base. If you have a grand plan that requires a complete system to function, but no way of getting there, you do not have a plan.

At the EAG Virtual Entrepreneurs Gatheround event, the discussion of going from 0->1 came up over and over. One trap I see in EA is the hesitancy to act under uncertainty. Yonatan Cale's recent post I’m Offering Free Coaching for Software Developers in the EA community demonstrates an approach to break out of this trap: start small, launch fast, stay open to feedback.

One perspective on the Play Pumps story is that the founder failed to pay attention to feedback: what was the experience of the first kid who used the first Play Pump? 

I'm glad you wrote this, let's explore Systems and Complexity further -- one great venue is Complexity Weekend -- can we borrow some of their tools/processes for an EA Systems and Complexity Event?

“Economics can be harder than rocket science: the Soviet Union was great at rocket science”

This is a good quote, but it seems a little unfair. The Soviets' rocket scientists were brilliant scientific thinkers, while their economic planners really were not. I don't think we have clear evidence one way or the other regarding how well central planning would work if the central planners were particularly smart people with good epistemic hygiene.

Hi Daniel,

Sorry, I only just saw your comment!

I think Lysenko and Lysenkoism is completely fascinating, but kind of proves the quote above. 

Lysenko was a biologist of sorts whose falsified, confused, and simply invented results on plants supported Stalinist and Marxist thinking on how people are not innate but created by their environments; he was then brought into GOSPLAN to bring these insights to the economy. This is not because there was a lack of brilliant economists initially, just that those Stalin had were either cringing to his party lines, hidden in side posts for their own good, or dead.

The problem was to solve a complex problem (economics) and to do so in a way that was acceptable to your masters and the Marxist thinking of the time, which made it more complex than rocket science.

Once we move past Stalin (Red Plenty is very readable on this!) we get people like Kantorovich stepping out of the shadows. They were brilliant thinkers, inventing new tools we still use today, but they had to solve not only the mathematics, but also the difficulty of understanding the people they were supposedly commanding, with all their complexity and agency. On top of this, some tools and analyses were still forbidden to them.
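(For the curious: the tool in question is linear programming, which Kantorovich pioneered for exactly this kind of production planning. A toy solver with invented numbers, just to show the shape of the problem:)

```python
# A miniature version of the planning problem Kantorovich formalised:
# choose production quantities to maximise output value under resource
# constraints -- a linear programme. All numbers here are invented.
from itertools import combinations

# maximise 3*x + 2*y  subject to:
#   x + y   <= 100   (labour hours)
#   2*x + y <= 150   (raw material)
#   x, y    >= 0
constraints = [  # (a, b, c) meaning a*x + b*y <= c
    (1, 1, 100),
    (2, 1, 150),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints bind exactly."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel constraints: no single intersection point
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# An optimum of a linear programme always sits at a vertex of the feasible
# region, so for a toy 2-variable problem we can simply enumerate vertices.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # -> (50.0, 50.0) 250.0
```

The maths is clean and solvable; the Soviet difficulty was never the optimisation itself, but feeding it honest numbers and getting millions of humans to behave like the variables in it.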

Compare this with the rocket programme. Brilliant scientists again, solving really difficult problems, but orbital mechanics does not shift its behaviour to ruin your plan based on complicated politics (you may have missed an interaction, but interactions there are a property of your materials and physical forces), and solving physics equations does not contradict Marxist thought (mostly: E=mc² was banned for a period as it apparently contradicted Marx).

The point of the Soviet Union's failure, and of that quote, was not that a few more smart economists or thinkers would have brought success, or that economists are somehow better than physicists. The point was that they were trying to do something that could not be done with their technology or ours: fully tame and control complexity as if it were a space rocket.

The link in this comment confuses me. Lysenko was not an economist and Lysenkoism was not primarily a matter of economic planning. Rather it was state-enforced pseudoscience, which seems like a pretty different dynamic.

Also, IIRC the peak of Lysenkoism was at a time when the Soviet economy was developing quite quickly; the serious stagnation came later, after Lysenkoism had fallen from prominence. So this doesn't really seem like evidence in favour of your claim re economic planning.

Lysenkoism was used by central planners to attempt to improve Soviet agricultural output, and, unsurprisingly, exacerbated famines. This is just one example of how dumb Soviet central planners were on critical issues. I doubt the Soviet space program would have worked as well as it did if the thinking of their rocket scientists was at a similar level to that of those running their economy.