
Linked is my submission to the "EA Criticism and Red-Teaming Contest". Thanks for reading if you have time.

Edit started at 23:07 UTC on Thursday 1 September

N.B.: my in-text hyperlinks (i.e. the ones where the text is clickable) are not showing up on the Google Drive, but if you hover over the text as you're reading, your cursor should indicate where there is a link. It should hopefully be obvious from context where I have included such a link.

In response to the pleasant feedback in the comment section, I am now including a summary of the linked essay. The summary follows roughly the same sections as the essay.

Summary

1 Introduction

I argue that there is an intellectual core to Effective Altruism that I call "Economism", meaning the over-generalisation of the toolset and ideas from Neoclassical economics. I associate this with a broader way of thinking that I label the "Formal/Comprehensive Mentality". This mentality is defined by its attraction to simple formalisms and totalising frameworks. I contrast it with other mentalities that are more "integrative"---more interested in systems and emergent features---which I claim are associated with intellectual tendencies that take a very different approach from that of economics, such as complexity science and ecology.

2 Economism in EA: An Overview

This section focusses on creating the connection between tendencies in economics and in Effective Altruism, with little in the way of explicit criticism. I identify four areas in which the intellectual methodology of EA can be considered "economistic":

  1. the use of Expected Utility Theory as a universal decision criterion;
  2. methodological individualism and reductionism;
  3. the valorisation of “optimisation”;
  4. a belief in the power of non-empirical formal models to represent complex systems.

I consider 1. to be a core and obvious connection.

For 2., I show that economics is known for this approach to scientific methodology, especially in comparison with other social sciences. I argue that EA also exhibits an individualist approach to ethical calculation in general, focussing on individual impact rather than collective responsibility, which distinguishes it from other activist or political movements. I argue that methodological individualism is also an inherent feature of the technical apparatus of Act Consequentialism (e.g. in the study of Population Ethics), insofar as it is concerned with the static aggregation of individual units. (Other approaches to ethics, being less reductionist, might consider these problems relatively unimportant.)

I deal with 3. only in brief. I connect it to expected utility maximisation (the core idea of Expected Utility Theory), and suggest it shows itself most clearly within EA in the arguments for runaway AI.

For 4., I first argue that this is a reasonable characterisation of neoclassical economics. I then argue that the clearest example of this is again the arguments for runaway AI.

3 The Flaws of Economism under Complexity

This is the main body of the essay, the section where I make my critique. There are two top level sections which each break into several sub-sections (with sub-sub-sections below).

3.1 The Breakdown of Expected Utility Theory

In the first part of this, I make an extended technical detour into the idea of "ergodicity" and how it illustrates that expected utility maximisation is not a universal algorithm for rationality.

I argue, following Ole Peters and Murray Gell-Mann, that the stochastic processes we face in the everyday world, such as financial dynamics, are "non-ergodic", meaning path-dependent. One pair of characteristics of real-world processes that guarantees this is the existence of "absorbing barriers" (as Nassim Taleb calls them) combined with multiplicative dynamics. Expected Utility Theory cannot reckon with this, because it is a timeless theory of one-off maximisation of an ensemble average over states, i.e. it implicitly assumes ergodicity.
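For concreteness, here is a minimal Python sketch of the standard Peters-style coin-flip example (the specific numbers are my own illustration, not taken from the essay): the ensemble average grows each round, yet the typical individual trajectory shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative gamble: each round, wealth is multiplied by 1.5 (heads)
# or 0.6 (tails) with equal probability.
UP, DOWN = 1.5, 0.6
n_agents, n_rounds = 100_000, 30

factors = rng.choice([UP, DOWN], size=(n_agents, n_rounds))
wealth = np.cumprod(factors, axis=1)[:, -1]

# Ensemble average (what expected-value reasoning tracks): 1.05 per round,
# so roughly 1.05**30 ~ 4.3 after 30 rounds.
print("ensemble mean:", wealth.mean())

# Time-average growth factor: sqrt(1.5 * 0.6) ~ 0.95 per round, so the
# typical (median) trajectory decays towards 0.95**30 ~ 0.2.
# The two averages disagree: the process is non-ergodic.
print("median wealth:", np.median(wealth))
```

Add an absorbing barrier (ruin at zero wealth, say) and the divergence between the two averages becomes even more consequential, since losses are then permanent.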

I observe that John Kelly, with his Kelly Criterion, had already identified the correct policy for making decisions repeated over time. Peters and Gell-Mann, though, argue that we can turn this into a different paradigm for decision theory. One can consider this a "dynamic" decision theory to replace the "static" decision theory that is EUT.
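To make the Kelly idea concrete, here is a minimal sketch using the standard textbook formulation (the numbers are hypothetical, and this is my illustration rather than anything from the essay): for a bet with win probability p and net odds b, the fraction f* = p - (1 - p)/b of wealth maximises the long-run growth rate, which a brute-force search over fractions confirms.

```python
import numpy as np

def log_growth_rate(f, p, b):
    # Expected log-growth per bet when staking a fraction f of wealth on a
    # bet with win probability p and net odds b (win gains f*b, loss loses f).
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

p, b = 0.6, 1.0                       # hypothetical: 60% win chance at even odds
f_kelly = p - (1 - p) / b             # closed-form Kelly fraction: 0.2

fractions = np.linspace(0.0, 0.99, 1_000)
f_best = fractions[np.argmax(log_growth_rate(fractions, p, b))]

# Both print ~0.2: staking more than this raises the one-off expected payoff
# but lowers (and eventually destroys) the long-run growth rate.
print(f_kelly, round(f_best, 3))
```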

I argue that the failure to reckon with these facts is associated with a number of flaws within the main intellectual approach of EA:

  1. A formal approach to ethics that focusses on the static aggregation of utility rather than the consideration of dynamic processes.
  2. Disrespect for the idea of evolutionary knowledge and “ecological rationality”.
  3. Disrespect for sustainability/anti-fragility in favour of optimisation.
  4. Giving too much weight to highly improbable upside outcomes.
  5. Unrealistic arguments for the threat of runaway AI.

For 1., I argue that the fundamental paradigm of Act Consequentialism, based on the framework of EUT, is a kind of "static ethics", meaning that it is concerned with one-off maximisation of states. I oppose this with the concept of "dynamic ethics", meaning an approach to ethics that focusses on processes or policies. My claim is that conventional decision theory stands to Peters/Gell-Mann decision theory as static ethics stands to dynamic ethics. I claim that it is possible to find dynamic ideas within EA, such as the popular view that, as a civilisation, we should follow the policy of maximising economic growth. But I suggest that this is a simplistic policy that is misaligned with the long-term interests of the planet, and I see it as a clear example of economism in action.

For 2., the headline idea is that a focus on individual optimisation blinds us to the fact that most knowledge is not the result of conscious processing, but is embedded in the wiring of our brains and the structure of our society.
The "heuristics and biases" research program of Kahneman and Tversky is popular within EA circles, and sees humans as irrational in comparison to the model of Homo economicus. But the Ecological Rationality theory of Gerd Gigerenzer gives us an alternative perspective, one which sees many of the so-called flaws of human thinking as adaptations to the conditions imposed by having to make decisions in the (complex, evolving) real world. I point out that there is a natural consilience between this and Peters and Gell-Mann's dynamic decision theory.
Advocates of the idea of cultural evolution, such as the evolutionary anthropologist Joseph Henrich, have argued that many of our cultural practices and institutions are adapted to their ecological conditions. I connect this to some of the standard arguments in the Conservative tradition of political philosophy, in particular the idea that our institutions and norms often have a vital logic that we do not understand. The moral of this is that, when calculating payoffs, we should always ask the question: How might this intervention affect the system that informed my calculation of payoffs to begin with?

For 3., I refer to Taleb's popular book Antifragile: Things That Gain from Disorder, in which one of his main ideas is that optimisation = fragility. I explain the logic of this idea by analogy with the idea of "overfitting" in Machine Learning. I point to real-world examples of where, as a society, we have come unstuck because of this---for example, in the over-optimisation of global supply chains. I then make the connection to the environmental crisis that the world is experiencing. I suggest that a belief in the idea of optimisation tends to push one towards the idea that we can innovate our way out of this crisis. I quote a passage from What We Owe the Future which indicates that Will MacAskill holds such a view. I make clear my own feeling of doubt and fear about this, quoting a relevant statement from Geoffrey West: “We’re not only on a treadmill that’s going faster, but we have to change the treadmill faster and faster” (The Surprising Math of Cities and Corporations).
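To illustrate the overfitting analogy with a toy example (my own sketch, not the essay's or Taleb's): a model tuned too tightly to its training data looks better in-sample but typically does worse once conditions shift.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fifteen noisy samples from a simple underlying relationship.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The degree-12 fit is "optimised" to the training points (lower train
    # error) but typically generalises worse: over-optimisation buys fragility.
    print(degree, round(train_err, 4), round(test_err, 4))
```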

For 4., I bring up another paper by Ole Peters, "The Time Resolution of the St Petersburg Paradox". This holds that we can deflate the St Petersburg Paradox by realising that, if we imagine playing repeated St Petersburg gambles over time, the correct policy is the Kelly policy (which does not blow up to infinity). I then connect this to the ideology of "Long-termism" advocated by Nick Bostrom and MacAskill. I grant that we should care about the long term, but not about "ultra" long-term things, such as the possible lives of digital beings thousands or millions of years into the future. I suggest that the non-ergodic perspective lends support to this, because it shows us that there is an asymmetry between downside and upside: we should care far more about annihilation than about a tiny possibility of a great outcome.
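A rough numerical sketch of the contrast (my own illustration of the idea, not the paper's derivation): the ensemble expectation of the St Petersburg payout grows without bound, whereas the expected change in log-wealth for a player with finite wealth who pays a fee each round is finite, and turns negative once the fee is high enough.

```python
import numpy as np

def expected_payout(n_terms):
    # Ensemble average of the truncated St Petersburg gamble: each term
    # contributes (1/2**k) * 2**k = 1, so the sum grows without bound.
    k = np.arange(1, n_terms + 1)
    return np.sum(0.5 ** k * 2.0 ** k)

def expected_log_growth(wealth, fee, n_terms=60):
    # Expected change in log-wealth for a player with finite `wealth` who
    # pays `fee` to play one round (the tail beyond n_terms is negligible).
    k = np.arange(1, n_terms + 1)
    probs = 0.5 ** k
    payouts = 2.0 ** k
    return np.sum(probs * np.log((wealth - fee + payouts) / wealth))

print(expected_payout(30), expected_payout(60))   # 30.0, 60.0: diverges with n_terms

for fee in (2, 5, 10, 20):
    # Positive for cheap tickets, negative for expensive ones: the "paradox"
    # dissolves once we track growth over time rather than a one-off average.
    print(fee, round(expected_log_growth(100.0, fee), 3))
```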

For 5., I argue that both Nick Bostrom's and Stuart Russell's arguments for the risk of runaway AI treat the world as a game, and the hypothetical AI agent as Homo economicus. I first observe their apparent belief that it is reasonable to expect a highly intelligent AI system to be a scientific and strategic genius even though its goal system is really "dumb" compared to our complex and dynamic goal systems as humans. I tentatively suggest a possible contradiction here. I then bring up the scenario of an AI system that has a simple goal system and then "recursively self-improves" its intelligence while maintaining that goal system. I argue that this is implausible for three reasons, which I will simplify into two for this summary:

  1. The possibility of recursive self-improvement sneaking up on us is highly questionable, given that the necessary starting point for such a process is an agent that has a conception of self and already knows, to a large extent, how the world works.
  2. The way in which we humans learn to thrive in multiple, complex environments is by having a complex goal system that pushes us to explore those environments as we learn, so why wouldn't this be our default assumption for an AGI?

I argue that a core belief behind Bostrom's and Russell's arguments is the idea that the world is little more than a game that a sufficiently smart agent can control. But I observe that no agent could control the world entirely, and suggest that it is therefore rational to expect a highly intelligent agent, artificial or not, to want to explore as well as exploit.

3.2 False Precision and Aggregation

This section is shorter and breaks into two main sub-sections.

3.2.1 Bayesianism and its Scope

In this section, I define Bayesianism as "an ideology that advocates the rationality of trying to assign subjective probabilities to beliefs about ordinary things". I argue that this ideology has a number of flaws.

There is a typo in the essay where I say I have identified "four flaws", when there are in fact five:

  1. The Feasibility/Authenticity Problem
  2. The Introspection/Realism Problem
  3. The Uncertainty Problem
  4. The False Precision Problem
  5. The Neglect of Imprecision Problem

For 1, I argue that, under one interpretation, Bayesianism requires superhuman cognitive prowess. Using Kahneman's System 1/System 2 distinction, a 'true' Bayesian would need to have System 2 switched on all the time. But this is impossible.

For 2, I start by casting doubt on one of the foundational ideas of cognitive science, Jerry Fodor's conception of ‘Folk Psychology’, which posits that the human mind is a web of “beliefs”. I suggest that this is a language-centric view of things and offer an alternative view, pithily summarised by Yann LeCun in a tweet: "Language is an imperfect, incomplete, and low-bandwidth serialization protocol for the internal data structures we call thoughts." Instead of being driven by "beliefs", I posit that thought is driven by "world models". Like most mathematical models in science, these world models may predict in a form that we could compare to a probability distribution, but not in single-point probabilities. I propose this as an explanation for why it is so unintuitive to assign subjective probabilities to one-off events: because doing so is not thought-native.

For 3, I refer to the idea of "fundamental uncertainty", as advocated by John Maynard Keynes: the idea that many problems of prediction are so complex (or ill-defined) that it is simply inappropriate to apply probability theory to them. I suggest that this reluctance to assign probabilities to beliefs about specific events is the default attitude of most people partly because they know that they lack confidence in their beliefs. I grant that there is an extra accountability that comes with the practice of assigning probabilities, and especially of betting on those probabilities. But I point out that an equally accountable alternative is simply owning up to one's uncertainty.

For 4, I make reference to the prediction-aggregation site Metaculus. I point out that the methodology of predicting in terms of single-point probabilities gives inherently favourable-looking results compared to, say, the way in which we test scientific models, which, by modelling an entire system, imply an entire probability distribution. I point to the fact that the Metaculus community has a decent-looking Brier score but has also gotten quite a few things completely wrong.
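For readers unfamiliar with the metric, the Brier score is just the mean squared error between probability forecasts and binary outcomes. A quick hypothetical (my own numbers) shows how a forecaster can look decent on average while missing badly on a particular question.

```python
import numpy as np

def brier_score(forecasts, outcomes):
    # Mean squared error between probability forecasts and 0/1 outcomes
    # (0 is a perfect score, 1 is maximally wrong).
    forecasts, outcomes = np.asarray(forecasts), np.asarray(outcomes)
    return float(np.mean((forecasts - outcomes) ** 2))

# Hypothetical forecasts: well calibrated on four routine questions...
print(brier_score([0.7, 0.8, 0.2, 0.1], [1, 1, 0, 0]))   # ~0.045

# ...yet badly wrong on the one question that mattered.
print(brier_score([0.05], [1]))                            # 0.9025
```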

For 5, I discuss the idea that the practice of trying to make precise predictions inherently pushes one towards the class of problems where that practice isn't doomed from the start. I refer to Philip Tetlock's implicit acknowledgement of this idea in Superforecasting. One important class of phenomena that superforecasters knowingly can't predict is Black Swan events: phase changes in chaotic systems, such as financial cascades, earthquakes or (arguably) pandemics, that cannot be timed. I observe that while Black Swans can't be timed, there are typically structural indicators of hazard that people can observe and warn about. Since these warnings will not correspond to precise quantitative predictions, the Bayesian mindset may be inclined to dismiss them. From here, I make the broader point that there is plenty of knowledge and information in non-quantitative fields in general; it's just a little 'chewier'.
I finish by pointing out that a focus on single-point probabilities also tends to cause a neglect of processes that have a spectrum of outcomes. I argue that Toby Ord's focus in The Precipice on the single-point probability of existential risk leads him to believe that it is more urgent to address AI risk than climate change, even though we know that every possible climate change outcome in the spectrum is awful, whereas many AI outcomes are possibly quite good, even on his account.

3.2.2 Belief Aggregation and the Efficient Markets Hypothesis

This is a much shorter section than the previous. A common defence of prediction aggregation systems like Metaculus, or prediction markets---both of which are popular concepts in EA---refers to the idea of market efficiency. I use this section simply to suggest that this defence is a little weaker than it at first appears.

In the first part of this, I interrogate the meaning of the Efficient Markets idea. I note that an efficient market is usually defined as one in which security prices “reflect all available information”, which is sometimes held to carry the idea that security prices reflect "reality", in some sense. But I note that the tests developed for empirically validating market efficiency test for something arguably a little different: whether traders can non-randomly beat the market. I claim that security prices do rapidly gobble up all, or almost all, of the "information" that traders have, but that this information is often just information about belief. Even market bubbles, then, can be efficient, in the sense that nobody can really work out when the reservoir of belief will run out. If every participant in the market were a rational agent, à la Homo economicus, this wouldn't be possible, because the Nash equilibrium is not to invest in the bubble stock if there is common knowledge among investors that the stock could plausibly lose all revenue. But we are not such agents, and therefore we shouldn't necessarily think that security prices are directly hooking onto facts about the world outside the market (or outside the realm of belief itself).

I don't make too much of this point, only suggest that it casts doubt on "the idea that lots of people trying to make guesses at a single value will necessarily make that value more aligned with the facts of the world".

Conclusion

I connect all of the above to the idea of a "mentality". I finally define the concept of the "Formal/Comprehensive Mentality". I say the "Formal" part is self-explanatory. The “Comprehensive” part "captures the idea of a mindset that believes intellectual topics can be completely wrangled—isolated and conquered—with one master approach or methodology."
I suggest there is at least one clear benefit to this mentality, which is that the reification of simple models is very helpful for motivating people to action. This is at the heart of EA as a movement. It can be seen in the way in which MacAskill and Ord took Singer's thought in "Famine, Affluence and Morality" and ran with it. But another core idea of EA is that knowledge should precede action, and I argue that some of the intellectual methodologies of the movement cannot serve that goal.
I note that my own way of thinking is different from this, which leads me away from the ideas of economism and towards ideas in, e.g., complexity science, ecology and cultural evolution. I speculate that the identity of EA as a movement may inherently correlate with certain mentalities, but I claim that there are things EA could learn from alternative ways of thinking, if it could find a way to integrate them.

Comments

I found reading this valuable. I’ve long thought there seems to be a way of thinking that is widely shared among effective altruists (even if it doesn’t need to be — the project is not committed to a particular view). But it’s difficult to pinpoint exactly what this is, and I think you’ve done an excellent job of that here.

If you get the time, I think it’d be valuable to produce an executive summary of this (even if it’s just in dot-points) as I suspect it’ll get a much wider reach that way than through this link post (and I think it deserves this reach).

Alternatively, since there are quite a few threads in this essay, it could be worthwhile publishing it as a sequence, going through one chapter at a time.

Here are a few initial comments/questions; I might add more if I get time.

  • I agree with your characterisation of EA as tending to be “methodologically individualist” — though I don’t quite follow your ‘process’-focussed alternative. Can you offer a real-world example of where the two different methodologies might conflict?

  • I think the idea that some charities (and interventions more broadly) can potentially do orders of magnitude more good than others is pretty core to the epistemic foundations of EA. It’s a fact that motivates optimising, and I think recognising this and taking it seriously has been one of the reasons EA has to-date been so successful (e.g., GiveWell has massively improved how well-funded these best charities are).

If I were to accept that a lot of your criticism about this optimisation-mindset is correct, how could I avoid throwing the baby out with the bath water?

Perhaps another way to frame this: it seems to me there are many cases where the types of reasoning you’ve criticised (formal, precise, quantified, maximising) are the very things that led EA to be quite successful to-date, as they seem to be tremendously effective in many domains (even complex ones!). Do you agree with the premise of my question? If so, how do you tell when it is appropriate to avoid this style of reasoning (or perhaps, how do you tell which parts of this reasoning to jettison?).

Hi Michael,

Thanks for reading the whole thing, for your kind words and for your considered criticism.

First, your doubt about my idea of a 'process'-based approach to ethics. My discussion of the idea of static vs dynamic ethics in the essay is very abstract, so I understand your desire to understand this at a more concrete level.

In basic terms, the distinction is just between thinking about specific interventions and thinking about policies. That's why I said the static/dynamic distinction mapped to the distinction between expected utility maximisation and the Kelly criterion. One considers how to do best in a one-off action (maximise the payoff); the other, how to do best across multiple actions embedded in time (maximise the growth rate of the payoff). When it comes to ethics, I think everyone is capable of both ways of thinking, and everyone practises both ways of thinking in different contexts.

When it comes to traditional ethical theories, I would say Act Consequentialism is the most static. Virtue ethics, Confucian role ethics, and Deontology are all more on the dynamic side (since they offer policies). But this is just a rough statement. And I also don't mean to imply by that that Act Consequentialism is the worst ethical theory.

The main worry from my point of view is when the static approach dominates one's method of analysis. One way in which this manifests (albeit arguably of little relevance to EA) is in Utopian political projects. People reason that it would be good if we reached some particular state of affairs but don't reason well about the effects of their interventions in pursuit of such a goal. In part, the issue here is thinking of the goal as a "state", rather than a "process". A society is a very complex, self-organising process, so large interventions need to be understood in process-theoretic terms.

But it's not just Utopian thinking. I believe that technocratic thinking, as practised by EA, can often fall into similar traps. I'm not an expert on this stuff myself, but people in the community will probably know that Angus Deaton has criticised some EA-endorsed interventions from exactly this kind of perspective (his claim being that the interventions are too naive because they don't understand the system they're intervening in); I have no idea how far he is right or wrong.

Along somewhat different lines, I also made the point in the essay that certain formal questions in utilitarian ethics only seem vital from a static-first perspective. MacAskill spills a lot of ink on population ethics in What We Owe the Future because he sees it as actually having (some) real-world relevance to how we should think about existential risk. From MacAskill's perspective, it matters because if we can use Population Ethics to prove that you should want to statically maximise the number of beings, and the number that will exist in the far future mostly just depends on whether or not all of humanity goes extinct, then we should care far more about genuinely existential risks (like AI) than about maybe-not-really existential risks (like climate change). I don't agree with caring far more about AI than climate change, though in large part I think that's because of different empirical beliefs about the relative risks of those. But that's not even the point. The point is just that there is an alternative worldview in which Population Ethics need never come up. My highest-level ethical perspective is not precise but something like "Maximise the growth in complexity of civilisation without ruining the biosphere". My views about existential risk follow from that. (Which are, by the way, that it's the worst possible thing that could happen, so in that sense I totally agree with MacAskill, but I get the bonus that I don't have to lie awake at night worrying about Derek Parfit.)

Ok, now for your other critique/question, which is basically how do we take on board a critique of optimisation without losing what's good and useful and effective about EA? I think I agree with the premise of your last question, which is that EA has done some really good stuff and that it's been based on formal methods that I've critiqued.

I guess there are different levels of response I could have to this. Maybe the essay doesn't always read like this, but I would say my main goal was to describe the limitations of expected utility reasoning and optimisation-centric perspectives, not to rule them out completely. What I would say is that Effective Altruism is not the only possible approach to doing good in the world, and I do think it's very important to understand this. To me, the right way of thinking about this is in an ecological way. I think different ways of doing good have different ecosystem functions. I think adding Effective Altruism into the mix has probably made the world of philanthropy a lot more effective and good, as you suggest, but philanthropy shouldn't be the main way we make the world better in any case. Taking this to an extreme to illustrate the point: I think it would be far better if every nation in the world had good, solid, democratic governments than if every person in the world were an Effective Altruist but every nation was ruled by tyrants.

Ultimately, I don't know what Effective Altruism should jettison or what it should keep. That wasn't really the point of my essay, and I have no good answers... Except maybe to say that, in its intellectual methodologies, I'm sure there's some things it could learn from the fields I discuss in the essay. Maybe the main thing is a good dose of humility.

I found this to be a comprehensive critique of some of the EA community's theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend's suggestions, especially adding a TLDR to this post.
