I am an EA outsider, so despite my best attempts to research previous critiques of EA I am bound to repeat some of them. Some are repeated purposefully, either to give credence to the point or to share my own take on it. Whenever I have knowingly repeated a criticism, I have tried my best to cite others’ work as well.
I first heard of EA through 80,000 Hours in 2016-17, when I spent a lot of time trying to figure out how to have the most positive impact in my life. I was obsessed. Initially I wanted to become an acolyte, establish a local EA chapter and spread the knowledge. That feeling quickly waned, however, as I started realising the vast differences between the reality I came from and that of the people whose work I was consuming. They (read: 80,000 Hours and EA contributors) mostly came from prestigious educational institutions in the western world - I was not even in university, and came from an entirely different cultural and socioeconomic context. I did not have access to the knowledge, finances, mechanisms of influence or the jobs that were mentioned on 80,000 Hours and other EA forums. So after a couple of months of ‘infatuation’ (quite similar to how it is described here), during which I read most of what was available on the 80,000 Hours website and contacted my local chapter of EA (one did exist, but no one responded), I felt disillusioned.
Since then I have had limited engagement with the EA community - I listened to a few episodes of the 80,000 Hours podcast, read a few related posts and kept an eye on the job board. The disillusionment led me to look for a ‘purposeful’ way of existing beyond EA’s chartered paths. However, through the pursuit of other goals and my genuine interests, I actually did end up in one of those prestigious western universities - one with an EA chapter - and later in an 'influential' job, i.e. one that would appear on the 80,000 Hours job board. I suppose this corroborates some EA ideas, but it also illustrates that EA’s path is not the only way to get to positions where one can have a positive impact.
Since I do like the idea of EA as a question and I believe it could be of great value to humanity, when I recently ran into the red teaming competition, I wanted to try writing an ‘outsider’s critique’ - one that is not attacking, but truly aimed at improving the organisation (or maybe if one is more cynical - shaping it to fit my ideas).
For me, the criticisms I have of EA coalesce around two main themes - Measurability & Convertibility, and Culture & Inclusion - which form the two parts of this post, and roughly refer to the title words of this movement, Effective and Altruism, respectively. Briefly, in the first part I want to point out (as others have done) that not all important things are measurable and/or comparable, and that measurability itself can even harm our ability to see what is most important. The second part challenges the unified theory of value around which so much of EA’s outlook and activity is built, and suggests that perhaps a single unified theory of value should be replaced with a diverse collection of different values, with some interlinking aspects.
Measurability & Convertibility - What’s Wrong With Being “Effective”?
The ‘effective’ part of effective altruism alludes to the better/best use of available resources. This effectiveness requires a comparison between different altruistic ways of spending resources, which in turn rests on the premise that there is a unified theory of value (scale) and unified measurement units. I will leave the unified theory of value for the Culture & Inclusion part of this post and focus on measurability and convertibility here.
The rational way of comparing the different uses of resources is by measuring and converting to the same units. When dealing with human lives, possible human lives and the concept of ‘well-being’, both measurement and conversion are incredibly hard. Measuring well-being in a reliable way is difficult, measuring the well-being of non-humans even more so. Converting and comparing between these requires many compounding assumptions that disguise each other in a web of ‘values’ that is difficult to disentangle. With numerical values and probabilities attached to each of them, they might mislead us and make us think we are comparing between the same type of things. But upon closer inspection these assumptions resemble mortgage-backed securities in 2007 - the reasonable ones stacked with the not-so-reasonable ones and packaged all together. I find that this post illustrates the point quite well.
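To make the stacking concrete, here is a toy sketch - every number below is invented purely for illustration, not a real effectiveness estimate. Chaining a few uncertain conversion factors produces a single point estimate that looks precise, while the range of plausible results spans an order of magnitude:

```python
import math
import random

random.seed(0)

# Hypothetical chain of conversions: money -> intervention delivered ->
# outcomes -> abstract "well-being units". Each (low, high) range is
# made up for illustration; each factor hides its own assumptions.
factors = [
    (0.8, 1.2),  # fraction of funds actually reaching the intervention
    (0.3, 3.0),  # outcomes per unit of intervention delivered
    (0.2, 5.0),  # "well-being units" per outcome
]

# Point estimate: multiply the midpoints - a single, deceptively precise number.
point = math.prod((lo + hi) / 2 for lo, hi in factors)

# Monte Carlo: sample each factor independently and multiply the draws,
# to see how wide the range of plausible results really is.
samples = sorted(
    math.prod(random.uniform(lo, hi) for lo, hi in factors)
    for _ in range(10_000)
)
low, high = samples[500], samples[9500]  # roughly a 90% interval

print(f"point estimate: {point:.2f}")
print(f"~90% interval: {low:.2f} to {high:.2f}")
```

With only three modestly uncertain factors, the interval already stretches across more than an order of magnitude around the tidy-looking point estimate - and real cost-effectiveness chains stack far more assumptions than three.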
I can hear the comebacks already - these are the best tools we have! What do you suggest, that we do not measure at all, that we do not compare?
I suggest the opposite - make more effort to understand the measures used, how they are generalised across cultures, beliefs and species, and all the underlying assumptions they rely on - especially how conversions between different abstract units of well-being, happiness and the like are thrown together. Identify important things that have not been measured, improve imperfect measures, but even more importantly - accept that there are truly important unmeasurable things, and allow altruism directed at them to be perceived as important, even if not demonstrably ‘effective’.
I can list many professions that fall into this category of important unmeasurable things - jobs which have fundamental, yet difficult to quantify, impact - teachers, nurses, carers, artists, writers, philosophers and many more. It is hard to imagine a world without these professionals (and maybe it’s not even worth imagining). Similarly, there are causes that most of us would agree are important to pursue, but it would be hard to measure their impact in easily quantifiable units. There are many possible examples, but I will stick to one - domestic violence and abuse.
There are numerous methodological challenges in measuring the prevalence of domestic violence and abuse - victims often don’t report violence, out of shame or self-protection; different cultures have very different attitudes to what constitutes ‘acceptable’ behaviour within intimate relationships; and violence is a sensitive topic that is difficult to ask about in large-scale population surveys. Domestic violence has also only been researched with any level of detail or funding during the last 40 years. This means we don’t actually know whether domestic abuse is getting better or worse over time, or which interventions work best at a population level. We just don’t have the data yet.
This doesn’t mean that domestic violence isn’t an important or effective cause to work on. It’s even more important that we work on it, because its unmeasurability has historically kept it invisible. I use this example to illustrate that sometimes measurable problems are already partially solved - if we have figured out how to measure a problem, that indicates we are at least some way towards understanding its characteristics, and therefore towards solving it. Truly wicked problems - which may be those most deserving of the attention of engaged and educated individuals - are not (yet) likely to be either measurable or comparable.
Pursuing measurability as currently understood also leads to a cycle of self-fulfilling prophecies. The EA community figures out how to measure something, then pours massive amounts of funding into it - a dynamic that other criticisms have examined in greater detail. New charities looking for EA funding aim to tackle challenges that are equally measurable, as these make for better funding applications. The result is a lot of piecemeal measures, as criticised in this article. Perhaps new charities with vast resources, entirely different remits and more complex impact measurements would yield better results - but the system favours the most measurable ones.
In essence, I believe EA needs to embrace a more complex definition of the “effective” part of its mission, one that devotes space and attention to important unmeasurable things. Focusing on quantifiability as the main aspect of effectiveness may actually obscure some of the most fundamental problems that are begging for attention.
Culture & Inclusion - What Does it Mean to be “Altruistic”?
What does ‘well-being’ mean? It sounds like a simple question at first, but this is where the second part necessary for effective altruism comes into play - the unified theory of value. This theory is inherently tied to the definition of “altruism”. For a lot of EA’s focus areas, that unified theory of value stretches to life across all humans, future people and non-human animals.
While some forms of well-being might seem obvious and universal - such as length of life (though I would argue that even this one is hard to prove universal) - that clarity rarely extends to other forms of well-being. Human and non-human well-being are culturally and socially loaded. I therefore believe that claims to maximising well-being need to be tempered by the uncertainty a complex world brings. And while we can mostly agree that future humans would prefer a world with oxygen in it, I would find it hard to claim that anyone living now has a clear idea of what well-being will mean for a human in 100 years, let alone 1,000. We can pursue a thought experiment - try to imagine living the good life of someone 100 or 1,000 years ago. Would you want that? Even living the life someone 1,000 years ago imagined as the perfect life would most likely feel unbearable to modern urban dwellers like us. Why would our progeny be any more interested in our takes on the future world?
I am not trying to suggest that everything is subjective and nothing objectively positive can be done. However, we should probably aim at creating a world that allows for more possible ways of living, both now and in the future. That would hopefully allow our descendants to choose well-being for themselves.
I understand the claim that all lives matter equally, I get the drowning child thought experiment, and I believe that the introduction of this idea into altruistic movements and organisations is one of the many contributions the EA movement has made to wider society. But I also believe that some of these claims are a product of EA’s culture and the somewhat privileged socio-economic position of its members. It is hard for people whose immediate relations are truly in need to agree that investing in the lives and well-being of those closest to them is no more valuable than investing in people they have never met, let alone in the well-being of non-human animals. There are cultures where charitable donations are not the only or even the main way one gives back to their community - people give to their immediate relations, work with the local community, tutor and educate their distant relations and friends, and so on. EA should consider that, in some cases, spending time with your local community could lead to intangible positive side effects - ones that might even outweigh the impact that the money you could have made in those two hours would have had when donated to the Against Malaria Foundation. The previous sentence is purposefully conditional, because I do not know that for a fact… but neither do you.
Whilst I believe charitable giving to strangers and pursuing an ‘impactful’ career to be an admirable way of engaging with the world, I think it represents the cultural outlook of a tiny minority of the world’s people. If the EA movement wants to be ‘a movement of people that appear united by a shared commitment of using reason and evidence to do the most good we can’, as described in this post, I think it might want to allow for different takes on the issue. Doing so might position EA more as a movement of people united by a shared commitment to using reason to do the most good, rather than one committed to a particular set of beliefs - in other words, more of a big tent: a group that encourages "a broad spectrum of views among its members".
What I’m saying here is extremely well captured in one of the most interesting posts I found on the EA forum: EA for dumb people. The comments made me hopeful about what EA could be if we broadly agree that EA’s culture is limiting its impact and that it can be improved.
Being effectively altruistic is a noble goal, but it could be a shortsighted one, if the assumptions on which its title rests are not unpacked and disentangled.
I am humble in my expectations for this post - maybe everything I mention is well-known and debunked and I just did not find the right resources. If that is the case I will happily read those and take some extra knowledge from this experience.
This post benefited from my partner’s input and editing. All silly ideas remain mine alone.