Thomas Aitken

Hi Michael,

Thanks for reading the whole thing, for your kind words, and for your considered criticism.

First, your doubt about my idea of a 'process'-based approach to ethics. My discussion of the idea of static vs dynamic ethics in the essay is very abstract, so I understand your desire to understand this at a more concrete level.

In basic terms, the distinction is just between thinking about specific interventions vs thinking about policies. That's why I said the static/dynamic distinction maps onto the distinction between expected utility maximisation and the Kelly criterion. One considers how to do best in a one-off action (maximise expected payoff); the other considers a sequence of actions embedded in time (maximise the growth rate of payoff). When it comes to ethics, I think everyone is capable of both ways of thinking, and everyone practises both in different contexts.
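The one-off vs repeated contrast can be made concrete with a toy calculation. This is my own illustration rather than anything from the essay: a favourable even-odds bet with a 0.6 chance of winning, where we compare the bet fraction that maximises one-shot expected value with the fraction that maximises the long-run growth rate of wealth (the Kelly fraction, 2p − 1):

```python
import math

def expected_value(f, p):
    """One-shot expected wealth multiple when betting fraction f at even odds."""
    return p * (1 + f) + (1 - p) * (1 - f)

def expected_log_growth(f, p):
    """Per-bet growth rate of log-wealth over repeated play (Kelly's criterion)."""
    if f >= 1:
        return float("-inf")  # staking everything risks total ruin
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6  # win probability of the hypothetical even-odds bet
grid = [i / 1000 for i in range(1000)]  # candidate bet fractions 0.000..0.999

best_ev = max(grid, key=lambda f: expected_value(f, p))
best_growth = max(grid, key=lambda f: expected_log_growth(f, p))

print(best_ev)      # the one-shot EV maximiser stakes (almost) everything
print(best_growth)  # the growth maximiser recovers the Kelly fraction 2p - 1 = 0.2
```

The point of the sketch: expected value is linear in the stake, so the static optimiser bets the maximum on every favourable gamble, whereas optimising the growth rate of a process embedded in time picks a moderate fraction that keeps the process alive.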

When it comes to traditional ethical theories, I would say Act Consequentialism is the most static. Virtue ethics, Confucian role ethics, and Deontology are all more on the dynamic side (since they offer policies). But this is just a rough statement, and I don't mean to imply that Act Consequentialism is therefore the worst ethical theory.

The main worry from my point of view is when the static approach dominates one's method of analysis. One way in which this manifests (albeit arguably of little relevance to EA) is in Utopian political projects. People reason that it would be good if we reached some particular state of affairs but don't reason well about the effects of their interventions in pursuit of such a goal. In part, the issue here is thinking of the goal as a "state", rather than a "process". A society is a very complex, self-organising process, so large interventions need to be understood in process-theoretic terms.

But it's not just Utopian thinking. I believe that technocratic thinking, as practised by EA, can often fall into similar traps. I'm not an expert on this stuff myself, so I can't judge where he is right or wrong, but people in the community probably know that Angus Deaton has criticised some EA-endorsed interventions from exactly this kind of perspective (his claim being that the interventions are too naive because they don't understand the system they're intervening in).

Along somewhat different lines, I also made the point in the essay that certain formal questions in utilitarian ethics only seem vital from a static-first perspective. MacAskill spills a lot of ink on population ethics in What We Owe the Future because he sees it as actually having (some) real-world relevance in terms of how we should think about existential risk. On MacAskill's view, it matters because if we can use Population Ethics to prove that you should want to statically maximise the number of beings, and the number that will exist in the far future mostly just depends on whether all of humanity goes extinct or not, then we should care way more about really existential risks (like AI) than about maybe-not-really existential risks (like climate change). I don't agree with caring way more about AI than climate change, though in large part I think that's because of different empirical beliefs about the relative risks. But that's not even the point. The point is just that there is an alternative worldview where Population Ethics need never come up. My highest-level ethical perspective is not precise, but it is something like "Maximise the growth in complexity of civilisation without ruining the biosphere". My views about existential risk follow from that. (Which are, by the way, that extinction would be the worst possible thing to happen, so in that sense I totally agree with MacAskill, but I get the bonus that I don't have to lie awake at night worrying about Derek Parfit.)

Ok, now for your other critique/question, which is basically: how do we take on board a critique of optimisation without losing what's good and useful and effective about EA? I think I agree with the premise of your last question, which is that EA has done some really good stuff, and that this has been based on the formal methods I've critiqued.

I guess there are different levels of response I could give to this. Maybe the essay doesn't always read like this, but I would say my main goal was to describe the limitations of expected utility reasoning and optimisation-centric perspectives, not to rule them out completely. What I would say is that Effective Altruism is not the only possible approach to doing good in the world, and I do think it's very important to understand this. To me, the right way of thinking about this is ecological: different ways of doing good have different ecosystem functions. I think adding Effective Altruism into the mix has probably made the world of philanthropy a lot more effective and good, as you suggest, but philanthropy shouldn't be the main way we make the world better in any case. Taking this to an extreme to illustrate the point: I think it would be far better if every nation in the world had a good, solid, democratic government than if every person in the world was an Effective Altruist but every nation was ruled by tyrants.

Ultimately, I don't know what Effective Altruism should jettison or what it should keep. That wasn't really the point of my essay, and I have no good answers... Except maybe to say that, in its intellectual methodologies, I'm sure there are some things it could learn from the fields I discuss in the essay. Maybe the main thing is a good dose of humility.