
Introduction

In a widely-cited article on the EA forum, Helen Toner argues that effective altruism is a question, not an ideology. Here is her core argument:

What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist?
I don’t think that any of these questions make sense.
It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement.
But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.)
Effective Altruism isn’t like this. Effective Altruism is asking a question, something like:
“How can I do the most good, with the resources available to me?”

In this essay I will argue that her view of effective altruism as a question and not an ideology is incorrect. In particular, I will argue that effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence. After first explaining what I mean by ideology, I proceed to discuss the ways in which effective altruists typically express their ideology, including by privileging certain questions over others, applying particular theoretical frameworks to answer these questions, and privileging particular answers and viewpoints over others. I should emphasise at the outset that my purpose in this article is not to disparage effective altruism, but to try to strengthen the movement by helping EAs to better understand the actual intellectual underpinnings of the movement.

What is an ideology?

The first point I want to explain is what I mean when I talk about an ‘ideology’. Basically, an ideology is a constellation of beliefs and perspectives that shape the way adherents of that ideology view the world. To flesh this out a bit, I will present two examples of ideologies: feminism and libertarianism. Obviously these will be simplified since there is considerable heterogeneity within any ideology, and there are always disputes about who counts as a ‘true’ adherent of any ideology. Nevertheless, I think these quick sketches are broadly accurate and helpful for illustrating what I am talking about when I use the word ‘ideology’.

First consider feminism. Feminists typically begin with the premise that the social world is structured in such a manner that men as a group systematically oppress women as a group. There is a richly structured theory about how this works and how it interacts with different social institutions, including the family, the economy, the justice system, education, health care, and so on. In investigating any area, feminists typically focus on gendered power structures and how they shape social outcomes. When something happens, feminists ask ‘what effect does this have on the status and place of women in society?’ Given these perspectives, feminists typically are uninterested in and highly sceptical of any accounts of social differences between men and women based on biological differences, or attempts to rationalise those differences on the basis of social stability or cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of feminism.

Second consider libertarianism. Libertarians typically begin with the idea that individuals are fundamentally free and equal, but that governments throughout the world systematically step beyond their legitimate role of protecting individual freedoms by restricting those freedoms and violating individual rights. In analysing any situation, libertarians focus on how the actions of governments limit the free choices of individuals. Libertarians have extensive accounts of how this occurs through taxation, government welfare programs, monetary and fiscal policy, the criminal justice system, state-sponsored education, the military-industrial complex, and so on. When something happens, libertarians ask ‘what effect does this have on individual rights and freedoms?’ Given these perspectives, libertarians typically are uninterested in and highly sceptical of any attempts to justify state intervention on the basis of increasing efficiency, increasing equality, or improving social cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of libertarianism.

Given the foregoing, here I summarise some of the key aspects of an ideology:

  1. Some questions are privileged over others.
  2. There are particular theoretical frameworks for answering questions and analysing situations.
  3. As a result of 1 and 2, certain viewpoints and answers to questions are privileged, while others are neglected as being uninteresting or implausible.

With this framework in mind of what an ideology is, I now want to apply this to the case of effective altruism. In doing so, I will consider each of these three aspects of an ideology in turn, and see how they relate to effective altruism.

Some questions are privileged over others

Effective altruism, according to Toner (and many others), asks a question something like ‘How can I do the most good, with the resources available to me?’. I agree that EA does indeed ask this question. However, it doesn’t follow that EA isn’t an ideology since, as we have just seen, ideologies privilege some questions over others. In this case we can ask: what other similar questions could effective altruism ask? Here are a few that come to mind:

  • What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?
  • What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?
  • What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?
  • How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?

I’ve written each with a different ethical theory in mind. In order, these are: deontology, virtue ethics, Marxist/postcolonial/other critical theories, and contractarian ethics. While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.

Some EAs may be tempted to respond that all my examples are just different ways, or more specific ways, of asking the EA question ‘how can we do the most good’, but I think this is simply wrong. The EA question is the sort of question that a utilitarian would ask, and presupposes certain assumptions that are not shared by other ethical perspectives. These assumptions include things like: that there is (in principle) some way of comparing the value of different causes, that it is of central importance to consider maximising the positive consequences of our actions, and that historical connections between us and those we might try to help are not of critical moral relevance in determining how to act. EAs asking this question need not explicitly believe all these assumptions, but I argue that in asking the EA question instead of other questions they could ask, they are implicitly relying upon tacit acceptance of these assumptions. To assert that these beliefs are shared by all other ideological frameworks is simply to ignore the differences between different ethical theories and the worldviews associated with them.

Particular theoretical frameworks are applied

In addition to the questions they ask, effective altruists tend to have a very particular approach to answering these questions. In particular, they tend to rely almost exclusively on experimental evidence, mathematical modelling, or highly abstract philosophical arguments. Other theoretical frameworks are generally not taken very seriously or simply ignored. Theoretical approaches that EAs tend to ignore include:

  • Sociological theory: potentially relevant to understanding causes of global poverty, how group dynamics operate, and how social change occurs.
  • Ethnography: potentially highly useful in understanding causes of poverty, efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.
  • Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.
  • Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but otherwise this method is largely ignored as a potential source of information about social movements, improving society, and assessing catastrophic risks.
  • Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.

If readers disagree with my analysis, I would invite them to investigate the work published on EA websites, particularly research organisations like the Future of Humanity Institute and the Global Priorities Institute (among many others), and see what sorts of methodologies they utilise. Regression analysis and historical case studies are relatively rare, and the other three techniques I mention are virtually unheard of. This represents a very particular set of methodological choices about how to best go about answering the core EA question of how to do the most good.

Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis are worth doing, despite the fact that relatively little time is spent in the community discussing these issues. GiveWell does have a short discussion of their principles for assessing evidence, and there is a short section in the appendix of the GPI research agenda about harnessing and combining evidence, but overall the amount of time spent discussing these issues in the EA community is very small. I therefore contend that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions, and not of an extensive analysis of the pros and cons of different techniques.

Certain viewpoints and answers are privileged

Ostensibly, effective altruism seeks to answer the question ‘how to do the most good’ in a rigorous but open-minded way, without ruling out any possibilities at the outset or making assumptions about what is effective without proper investigation. It seems to me, however, that this is simply not an accurate description of how the movement actually investigates causes. In practice, the movement seems heavily focused on the development and impacts of emerging technologies. Though not so pertinent in the case of global poverty, this focus is somewhat applicable in the case of animal welfare, given the increasing attention to the development of in vitro meat and plant-based meat substitutes. The technological focus is most evident in far future causes, since all of the main far future cause areas focused on by 80,000 Hours and other key organisations (nuclear weapons, artificial intelligence, biosecurity, and nanotechnology) relate to new and emerging technologies. EA discussions also commonly feature speculation about the effects that anti-aging treatments, artificial intelligence, space travel, nanotechnology, and other speculative technologies are likely to have on human society in the long-term future.

By itself the fact that EAs are highly focused on new technologies doesn’t prove that they privilege certain viewpoints and answers over others – maybe a wide range of potential cause areas have been considered, and many of the most promising causes just happen to relate to emerging technologies. However, from my perspective this does not appear to be the case. As evidence for this view, I will present as an illustration the common EA argument for focusing on AI safety, and then show that much the same argument could also be used to justify work on several other cause areas that have attracted essentially no attention from the EA community.

We can summarise the EA case for working on AI safety as follows, based on articles such as those from 80,000 Hours and CEA (note this is an argument sketch and not a fully-fledged syllogism):

  • Most AI experts believe that AI with superhuman intelligence is certainly possible, and has a nontrivial probability of arriving within the next few decades.
  • Many experts who have considered the problem have advanced plausible arguments for thinking that superhuman AI has the potential for highly negative outcomes (potentially even human extinction), but there are current actions we can take to reduce these risks.
  • Work on reducing the risks associated with superhuman AI is highly neglected.
  • Therefore, the expected impact of working on reducing AI risks is very high.
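To make the structure of this reasoning explicit, here is a minimal sketch of the expected-value heuristic the argument trades on (roughly a scale/tractability/neglectedness framing). The numbers and the simple diminishing-returns formula are hypothetical placeholders chosen purely for illustration; they are not estimates taken from 80,000 Hours, CEA, or anyone else.

```python
# A minimal sketch of the expected-value reasoning behind the argument above.
# All figures are hypothetical placeholders, not actual estimates.

def expected_value_of_marginal_work(p_problem, impact_if_occurs, tractability, existing_effort):
    """Crude scale/tractability/neglectedness-style heuristic: the value of an
    additional unit of work scales with the chance the problem is real, its
    scale, how tractable it is, and how neglected it currently is."""
    neglectedness = 1.0 / (1.0 + existing_effort)  # simple diminishing-returns proxy
    return p_problem * impact_if_occurs * tractability * neglectedness

# Hypothetical inputs: a 10% chance of the problem materialising, a very large
# impact score if it does, modest tractability, and little existing effort.
print(expected_value_of_marginal_work(p_problem=0.1,
                                      impact_if_occurs=1e9,
                                      tractability=0.01,
                                      existing_effort=10))
```

On this schematic reading, the parallel arguments below simply plug different (and equally contestable) numbers into the same template.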

The three key aspects of this argument are expert belief in plausibility of the problem, very large impact of the problem if it does occur, and the problem being substantively neglected. My argument is that we can adapt this argument to make parallel arguments for other cause areas. I shall present three: overthrowing global capitalism, philosophy of religion, and resource depletion.

Overthrowing global capitalism

  • Many experts on politics and sociology believe that the institutions of global capitalism are responsible for extremely large amounts of suffering, oppression, and exploitation throughout the world.
  • Although there is much work criticising capitalism, work on devising and implementing practical alternatives to global capitalism is highly neglected.
  • Therefore, the expected impact of working on devising and implementing alternatives to global capitalism is very high.

Philosophy of religion

  • A sizeable minority of philosophers believe in the existence of God, and at least some very intelligent and educated philosophers are adherents of a wide range of different religions.
  • According to many religions, humans who do not adopt the correct beliefs and/or practices will be destined to an eternity (or at least a very long period) of suffering in this life or the next.
  • Although religious institutions have extensive resources, the amount of time and money dedicated to systematically analysing the evidence and arguments for and against different religious traditions is extremely small.
  • Therefore, the expected impact of working on investigating the evidence and arguments for the various religions is very high.

Resource depletion

  • Many scientists have expressed serious concern about the likely disastrous effects of population growth, ecological degradation, and resource depletion on the wellbeing of future generations and even the sustainability of human civilization as a whole.
  • Very little work has been conducted to determine how best to respond to resource depletion or degradation of the ecosystem so as to ensure that Earth remains inhabitable and human civilization is sustainable over the very long term.
  • Therefore, the expected impact of working on investigating long-term responses to resource depletion and ecological collapse is very high.

Readers may dispute the precise way I have formulated each of these arguments or exactly how closely they all parallel the case for AI safety; however, I hope they will see the basic point I am trying to drive at. Specifically, if effective altruists are focused on AI safety essentially because of expert belief in its plausibility, the large scope of the problem, and the neglectedness of the issue, a similar case can be made with respect to working on overthrowing global capitalism, conducting research to determine which religious belief (if any) is most likely to be correct, and efforts to develop and implement responses to resource depletion and ecological collapse.

One response that I foresee is that none of these causes are really neglected because there are plenty of people focused on overthrowing capitalism, researching religion, and working on environmentalist causes, while very few people work on AI safety. But remember, outsiders would likely say that AI safety is not really neglected because billions of dollars are invested into AI research by academics and tech companies around the world. The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. In much the same way as AI safety research is neglected even if AI research more generally is not, likewise in the parallel cases I present, I argue that serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not.

Potential alternative causes are neglected

I suspect that at this point many of my readers will be mentally marshalling additional arguments as to why AI safety research is in fact a more worthy cause than the other three I have mentioned. Doubtless there are many such arguments that one could present, and probably I could devise counterarguments to at least some of them – and so the debate would progress. My point is not that the candidate causes I have presented actually are good causes for EAs to work on, or that there aren’t any good reasons why AI safety (along with other emerging technologies) is a better cause. My point is rather that these reasons are not generally discussed by EAs. That is, the arguments generally presented for focusing on AI safety as a cause area do not uniquely pick out AI safety (and other emerging technologies like nanotechnology or bioengineered pathogens), but EAs making the case for AI safety essentially never notice this, because their ideological preconceptions bias them towards focusing on new technologies and away from the sorts of causes I mention here. Of course EAs do go into much more detail about the risks of new technologies than I have here, but the core argument for focusing on AI safety in the first place is not applied to other potential cause areas to see whether (as I think it does) it also applies to those other causes.

Furthermore, it is not as if effective altruists have carefully considered these possible cause areas and come to the reasoned conclusion that they are not the highest priorities. Rather, they have simply not been considered. They have not even been on the radar, or at best barely on the radar. For example, I searched for ‘resource depletion’ on the EA forums and found nothing. I searched for ‘religion’ and found only the EA demographics survey and an article about whether EA and religious organisations can cooperate. A search for ‘socialism’ yielded one article discussing what is meant by ‘systemic change’, and one article (with no comments and only three upvotes) explicitly outlining an effective altruist plan for socialism.

This lack of interest in other cause areas can also be found in the major EA organisations. For example, the stated objective of the Global Priorities Institute is:

To conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We prioritise topics which are important, neglected, and tractable, and use the tools of multiple disciplines, especially philosophy and economics, to explore the issues at stake.

On the face of it, this aim is consistent with all three of the suggested alternative cause areas I outlined in the previous section. Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm. While AI safety is not discussed extensively, it is mentioned a number of times, and much of the research agenda appears to be developed around related questions in philosophy and economics to which the long-termism paradigm gives rise. Religion and socialism are not mentioned at all in this document, while resource depletion is only mentioned indirectly by two references in the appendix under ‘indices involving environmental capital’.

Similarly, the Future of Humanity Institute focuses on AI safety, AI governance, and biotechnology. Strangely, it also pursues some work on highly obscure topics such as the aestivation solution to the Fermi paradox and the probability of Earth being destroyed by microscopic black holes or metastable vacuum states. At the same time, it has nothing to say about any of the potential new problem areas I have mentioned.

Under their problem profiles, 80,000 Hours does not mention having investigated anything relating to religion or overthrowing global capitalism (or even substantially reforming global economic institutions). They do link to an article by Robert Wiblin discussing why EAs do not work on resource scarcity; however, this is not a careful analysis or investigation, just his general views on the topic. Although I agree with some of the arguments he makes, the depth of analysis is very shallow relative to the potential risks and the concerns raised about this issue by many scientists and writers over the decades. Indeed, I would argue that there is about as much substance in this article, as a rebuttal of resource depletion as a cause area, as one finds in the typical article dismissing AI fears as exaggerated and hysterical.

In yet another example, the Foundational Research Institute states that:

Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.

Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on, and instead focuses their attention on the risks of emerging technologies. As in the case of FHI, they also seem to find room for some topics of highly dubious relevance to any of EA’s goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse.

Outside of the main organisations, there has been some discussion about socialism as an EA cause, for example on r/EffectiveAltruism and by Jeff Kaufman. I was able to find little on either of the other two potential cause areas I outlined.

Overall, on the basis of the foregoing examples I conclude that the amount of time and energy spent by the EA community investigating the three potential new cause areas that I have discussed is negligible compared to the time and energy spent investigating emerging technologies. This is despite the fact that most of these groups were not ostensibly established with the express purpose of reducing the harms of emerging technologies, but have simply chosen this cause area over other possibilities that would also potentially fulfill their broad objectives. I have not found any evidence that this choice is the result of early investigations demonstrating that emerging technologies are far superior to the cause areas I mention. Instead, it appears to be mostly the result of a lack of interest in the sorts of topics I identify, and a much greater ex ante interest in emerging technologies over other causes. I present this as evidence that the primary reason effective altruism focuses so extensively on emerging technologies over other speculative but potentially high-impact causes is the privileging of certain viewpoints and answers over others. This, in turn, is the result of the underlying ideological commitments of many effective altruists.

What is EA ideology?

If many effective altruists share a common ideology, then what is the content of this ideology? As with any social movement, this is difficult to specify with any precision and will obviously differ somewhat from person to person and from one organisation to another. That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology:

  1. The natural world is all that exists, or at least all that should be of concern to us when deciding how to act. In particular, most EAs are highly dismissive of religious or other non-naturalistic worldviews, and tend to assume without further discussion that views like dualism, reincarnation, or theism cannot be true. For example, the map of EA concepts lists under ‘important general features of the world’ pages on ‘possibility of an infinite universe’ and ‘the simulation argument’, yet makes no mention of the possibility that anything could exist beyond the natural world. It requires a very particular ideological framework to regard the simulation argument as more important or pressing than non-naturalism.
  2. The correct way to think about moral/ethical questions is through a utilitarian lens in which the focus is on maximising desired outcomes and minimising undesirable ones. We should focus on the effect of our actions on the margin, relative to the most likely counterfactual. There is some discussion of moral uncertainty, but outside of this, deontological, virtue ethics, contractarian, and other approaches are rarely applied in philosophical discussions of EA issues. This marginalist, counterfactual, optimisation-based way of thinking is largely borrowed from neoclassical economics, and is not widely employed by many other disciplines or ideological perspectives (e.g. communitarianism).
  3. Rational behaviour is best understood through a Bayesian framework, incorporating key results from game theory, decision theory, and other formal approaches. Many of these concepts appear in the idealised decision making section of the map of EA concepts, and are widely applied in other EA writings.
  4. The best way to approach a problem is to think very abstractly about it, construct computational or mathematical models of the relevant problem area, and ultimately (if possible) test these models using experiments. The model here appears to be how research is approached in physics, with some influence from analytic philosophy. The methodologies of other disciplines are largely ignored.
  5. The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change. This is clear from the overwhelming focus on technological change at top EA organisations, including 80,000 Hours, the Centre for Effective Altruism, the Future of Humanity Institute, the Global Priorities Project, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Machine Intelligence Research Institute.

I’m sure others could devise different ways of describing EA ideology that potentially look quite different to mine, but this is my best guess based on what I have observed. I believe these tenets are generally held by EAs, particularly those working at the major EA organisations, but are generally not widely discussed or critiqued. That this set of assumptions is fairly specific to EA should be evident if one reads various criticisms of effective altruism from those outside the movement. Although they do not always express their concerns using the same language that I have, it is often clear that the fundamental reason for their disagreement is the rejection of one or more of the five points mentioned above.

Conclusion

My purpose in this article has not been to contend that effective altruists shouldn’t have an ideology, or that the current dominant EA ideology (as I have outlined it) is mistaken. In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology. It doesn’t follow from this that any ideology is equally justified, but how we adjudicate between different ideological frameworks is beyond the scope of this article.

Instead, all I have tried to do is argue that effective altruists do in fact have an ideology. This ideology leads them to privilege certain questions over others, to apply particular theoretical frameworks to the exclusion of others, and to focus on certain viewpoints and answers while largely ignoring others. I have attempted to substantiate my claims by showing how different ideological frameworks would ask different questions, use different theoretical frameworks, and arrive at different conclusions to those generally found within EA, especially the major EA organisations. In particular, I argued that the typical case for focusing on AI safety can be modified to serve as an argument for a number of other cause areas, all of which have been largely ignored by most EAs.

My view is that effective altruists should acknowledge that the movement as a whole does have an ideology. We should critically analyse this ideology, understand its strengths and weaknesses, and then to the extent to which we think this set of ideological beliefs is correct, defend it against rebuttals and competing ideological perspectives. This is essentially what all other ideologies do – it is how the exchange of ideas works. Effective altruists should engage critically in this ideological discussion, and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology’.

Comments

You are, of course, right: effective altruism is an ideology by most definitions of ideology, and you give a persuasive argument of that.

But I also think it misses the most valuable point of saying that it is not.

I think what Helen wrote resonates with many people because it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology. Effective altruism stays away from the worst tribalism of other -isms by being able to continually refresh itself by asking the simple question, "how can I do the most good?"

When we ask this question we don't get so tied up in what others think, what is expected of us, and what the "right" answer is. We can simply ask, right here and right now, given all that I've got, what can I do that will do the most good, as I judge it? Simple as that, we create altruism through our honest intention to consider the good, and effectiveness through our willingness to ask "most?".

Further, thinking of effective altruism as more question than ideology is valuable on multiple fronts. When I talk to people about EA, I could talk about Singer or utilitarianism or metaethics, and sometimes for some people those topics are the way to get them engaged, but I find most people resonate most with the simple question "how can we do the most good?". It's tangible, it's a question they can ask themselves, and it's a clear practice of compassion that need not come with any overly strong pre-conceived notions, so everyone feels they can ask themselves the question and find an answer that may help make the world better.

When we approach EA this way, even if it doesn't connect for someone or even if they are confused in ways that make it hard for them to be effective, they still have the option to engage in it positively as a practice that can lead them to more effectiveness and more altruism over time. By contrast, if they think of EA as an ideology that is already set, they see themselves outside it and with no path to get in, and so leave it off as another thing they are not part of or is not a part of them—another identity shard in our atomized world they won't make part of their multifaceted lives.

And for those who choose not to consider the most good, seeing that there are those who ask this question may seem silly to them, but hardly threatening. An ideology can mean an opposing tribe you have to fight against so your own ideology has the resources to win. A question is just a question, and if a bunch of folks want to spend their time asking a question you think you already know the answer to, so much the better that you can offer them your answer, and so much the less that they pose a threat, those silly people wasting time asking a question. EA as question is flexibility and strength and pliancy to overcome those who would oppose and detract from our desire to do more good.

And that I think is the real power of thinking of EA as more question than ideology: it's a source of strength, power, curiosity, freedom, and alacrity to pursue the most good. Yes, it may be that there is an ideology around EA, and yes that ideology may offer valuable insights into how we answer the question, but so long as we keep the question first and the ideology second, we sustain ourselves with the continually renewed forces of inquiry and compassion.

So, yes, EA may be an ideology, but only by dint of the question that lies at its heart.

I think many of us want EA to be more of a question than an ideology, but if we try to describe how the community works today, it's better described as an ideology than just a question.

Or you could say that EA is an ideology that has tolerance, open-mindedness and skepticism as some of its highest values. Saying that EA is an ideology doesn't necessarily mean that it shares the same flaws as most other ideologies.

it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology.

Can you expand a bit on this statement? I don't see how you can say only other ideologies are full of groupthink and have the right politics, even though most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted. When I personally try to advocate against the idea that AI Safety is an effective cause, I experience quite some social disapproval for that within EA.

I think the points you're complaining about affect EA just as much as any other ideology, but that they are hard to see when you are in the midst of it. Your own politics and groupthink don't feel like politics and groupthink, they feel like that is the way the world is.

Let me try to illustrate this using an example. Plenty of people accuse any piece of popular media with a poc/female/lgbt protagonist as being overly political, seemingly thinking that white cishet male protagonists are the unique non-political choice. Whether you like this new trend or not, it is absurd to think that one position here is political and the other isn't. But your own view always looks apolitical from the inside. For EA this phenomenon might be compounded by the fact that there is no singular opposing ideology.

I don't see how you can say only other ideologies are full of groupthink and have the right politics, even though most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted.

This post of yours is at +28. The most upvoted comment is a request to see more stuff from you. If EA was an ideology, I would expect to see your post at a 0 or negative score.

There's no shortage of subreddits where stuff that goes against community beliefs rarely scores above 0. I would guess most subreddits devoted to feminism & libertarianism have this property, for instance.

Sure, this is the ideology part that springs up and people end up engaging with. Thinking of EA as a question can help us hew to a less political, less assumption-laden approach, but this can't stop people entirely from forming an ideology anyway and hewing to that instead, producing the types of behaviors you see (and that I'm similarly concerned about, as I've noticed and complained about similar voting patterns as well).

The point of my comment was mostly to save the aspiration and motivation for thinking of EA as a question rather than ideology, as I think if we stop thinking of it as a question it will become nothing more than an ideology and much of what I love about EA today would then be lost.

I see Helen's post as being more prescriptive than descriptive. It's something to aspire to, and declaring that "Effective Altruism is an Ideology" feels like giving up. Instead of "defending" against "competing" ideological perspectives, why not adopt the best of what they have to offer?

I also think you're being a little unfair. Time & attention for evaluating ideas & publishing analysis is limited, and in several cases there is work you don't seem aware of.

I'll grant that EA may have an essentially consequentialist outlook (though even on this point, I'd argue many EAs are too open to other moral philosophies to qualify for the adjective "ideological"; see e.g. the discussion of non-consequentialist ethics in this podcast with EA co-founder Will MacAskill).

But some of your other claims feel too strong to me. For example, even if it's true that no EA organization has ever made use of ethnography, I don't think that's because we're ideologically opposed to ethnography in the way that, say, libertarians are ideologically opposed to government coercion. As anonymous_ea points out, ethnography was just recently a topic of interest here on the forum. It seems plausible to me that we're talking about and making use of ethnography at about the same rate as the research world at large (that is to say, not very much).

Similarly, using phenomenology to determine the value of different types of life sounds like Qualia Research Institute, and I believe CEA has examined historical case studies related to social movements. Just because you aren't personally aware of it doesn't mean someone in EA isn't doing it, and it certainly doesn't mean EA is ideologically opposed to it.

With regard to "devising and implementing alternatives to global capitalism", 80k did a podcast on that. This is the sort of podcast I'd expect to see in the world where EA is a question, and 80k is always talking to experts in different areas, exploring new possible cause areas for EA. Here's a post on socialism you might be interested in.

Similarly, there is an effective environmentalism group with hundreds of members in it. Here is a post on an EA blog attempting to address more or less exactly the issue you outline ("serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not") with regard to environmentalism. And at a recent EA conference, I attended a presentation which argued that global warming should be a higher priority for EAs.

It doesn't feel to me like EAs are ideologically opposed to environmentalism with anything like the vigor with which feminists and libertarians ideologically oppose things. Instead it seems like EAs investigate environmentalism, and some folks argue for it and work on it, but those arguments haven't been strong enough to make environmentalism the primary focus of most EAs. 80k places global warming under the category of "areas that are especially important but somewhat less neglected".

Anyway, an argument that uniquely picks out AI safety is: If we can solve AI safety and create a superintelligent FAI, it can solve all the other problems on your list. I don't think this argument is original to me; I suspect it came up when FHI did research on which existential risks to focus on many years ago. A quick look at the table of contents of this book shows FHI spent plenty of time considering existential risks unrelated to new technologies. I think OpenPhil did their own broad research and ended up coming to conclusions similar to FHI's.

With regard to the Global Priorities Institute, and the importance of x-risk, longtermism has received a fair amount of discussion. Nick Beckstead wrote an entire PhD thesis on it.

Regarding the claim that emerging technologies are EA's main focus, I want to highlight these results from the EA Survey which found that "Global Poverty remains the most popular single cause in our sample as a whole". Note that the fourth most popular cause is cause prioritization. You write: "My point is not that the candidate causes I have presented actually are good causes for EAs to work on". However, if we're trying to figure out whether we should devote even more resources to investigating unexplored causes to do the most good, beyond what's currently going into cause prioritization, the ease of finding good causes which are currently ignored seems like an important factor. In other words, I don't see a compelling argument here that cause prioritization should receive more attention than it currently receives.

In addition to being a question, EA is also a community and a memeplex. It's important to listen to people outside the community in case people are self-selecting in or out based on incidental factors. And I believe in upvoting uncommon perspectives on this forum to encourage a diversity of opinions. But let's not give up and start calling ourselves an ideology. I would rather have an ecosystem of competing ideas than a body of doctrine--and luckily, I think we're already closer to an ecosystem, so let's keep it that way.

It's important to listen to people outside the community in case people are self-selecting in or out based on incidental factors.

Yet anything which is framed as an attack or critique on EA is itself something that causes people to self-select in or out of the community. If someone says "EAs have statistics ideology" then people who don't like statistics won't join. It becomes an entrenched problem from founder effects. Sort of a self-fulfilling prophecy.

What is helpful is to showcase people who actually work on things like ethnography. That's something that makes EA more methodologically diverse.

But stuff like this is just as apt to make anyone who isn't cool with utilitarianism / statistics / etc say they want to go elsewhere.

People who aren't "cool with utilitarianism / statistics / etc" already largely self-select out of EA. I think my post articulates some of the reasons why this is the case.

I've met a great number of people in EA who disagree with utilitarianism and many people who aren't particularly statistically minded. Of course it is not equal to the base rates of the population, but I don't really see philosophically dissecting moderate differences as productive for the goal of increasing movement growth.

If you're interested in ethnographies, sociology, case studies, etc., then consider how other movements have effectively overcome similar issues. For instance, the contemporary American progressive political movement is heavily driven by middle and upper class whites, and faces dissent from substantial portions of racial minorities and women. Yet it has been very effective in seizing institutions and public discourse surrounding race and gender issues. Have they accomplished this by critically interrogating themselves about their social appeal? No, they hid such doubts as they focused on hammering home their core message as strongly as possible.

If we want to assist movement growth, we need to take off our philosopher hats, and put on our marketer and politician hats. But you didn't write this essay with the framing of "how to increase the uptake of EA among non-mathematical (etc) people" (which would have been very helpful); eschewing that in favor of normative philosophy was your implicit, subjective judgment of which questions are most worth asking and answering.

[anonymous]

This is a good article and a valuable discussion to have. I have a couple of nitpicks on the discussion of theoretical frameworks that tend to be ignored by EA. You mentioned the following examples:

Sociological theory: potentially relevant to understanding causes of global poverty, how group dynamics operate, and how social change occurs.
Ethnography: potentially highly useful in understanding causes of poverty, efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.
Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.
Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but otherwise this method is largely ignored as a potential source of information about social movements, improving society, and assessing catastrophic risks.
Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.
  • I have never seen any sociological analysis in EA and agree that it's been (almost?) completely ignored.
  • Ethnographies have been almost completely absent from EA throughout its history, with the exception of a recent small increase in interest. A recent upvoted, positively received, and discussed post on this forum expressed interest in an ethnography of EA. A comment on that post mentioned a couple of ethnographies of EAs, including this one of London EA.
  • Phenomenology and existentialism: I'm not sure what this means. EA has spent a fair amount of time thinking about the value of different types of biological and synthetic lifeforms (e.g. wildlife suffering, suffering in AI). The second example seems a bit underdefined to me. I'm not familiar with phenomenology and existentialism and might be misunderstanding this section.
  • For historical case studies, I think "mostly ignored" is misleading. A more accurate description might be that they're underutilized relative to their full potential and relative to other frameworks, but they're taken seriously when they come up.

As you mention, historical case studies have been used in x-risk analysis. The Open Philanthropy Project has also commissioned and written about several historical case studies, going back to its GiveWell Labs days. Their page on the History of Philanthropy says:

We’ve found surprisingly little existing literature on the history of philanthropy. In particular, we’ve found few in-depth case studies examining questions like what role philanthropists, compared with other actors, played in bringing important changes to pass. To help fill that gap, we are commissioning case studies on past philanthropic success stories, with a focus on cases that seem — at first glance — to be strong examples of philanthropy having a major impact on society.

They go on to list 10+ case studies (only one focusing on x-risks), including some from a $165k grant to the Urban Institute specifically for the purpose of producing more case studies. Of course $165k is a small amount for OpenPhil, but it seems to me, for a few reasons, that they take this work seriously.

The Sentience Institute has published 6 reports, 3 of which are historical case studies. Historical case studies relating to nuclear war, like the Petrov and Arkhipov incidents, have been widely discussed in EA as well. The Future of Life Institute has published some material relating to this.

  • Regression analysis: I find this example puzzling. Regressions are widely used in development economics, which has heavily influenced EA thinking on global health. EAs who are professional economists or otherwise have reason to use regressions do so when appropriate (e.g. Rachel Glennerster and J-PAL, some of Eva Vivalt's work, etc). GiveWell's recommendation of deworming charities is largely dependent on the regression estimates of a couple of studies.

More generally, regressions are a subset of statistical analysis techniques. I'm not sure if EA can be credibly accused of ignoring statistical analysis. I also don't think the other examples you gave of uses of regressions (political reform, AI and nuclear policy) are a great fit for regression analysis or statistical analysis in general.


[Disclaimer: I used to be the Executive Director of the Foundational Research Institute, and currently work at the Future of Humanity Institute, both of which you mention in your post. Views are my own.]

Thank you so much for writing this! I wish I could triple-upvote this post. It seems to fit very well with some thoughts and unarticulated frustrations I've had for a while. This doesn't mean I agree with everything in the OP, but I feel excited about conversations it might start. I might add some more specific comments over the next few days.

[FWIW, I'm coming roughly from a place of believing that (i) at least some of the central 'ideological tenets' of EA are conducive to the community causing good outcomes, and (ii) the overall ideological and social package of EA makes me more optimistic about the EA community causing good outcomes per member than about any other major existing social and ideological package. However, I think these are messy empirical questions we are ultimately clueless about. And I do share a sense that in at least some conversations within the community it's not being acknowledged that these are debatable questions, and that the community's trajectory is being and will be affected by these implicit "ideological" foundations. (Even though I probably wouldn't have chosen the term "ideology".)

I do think that an awareness of EA's implicit ideological tenets sometimes points to marginal improvements I'd like the community to make. This is particularly true for more broadly investigating potential long-termist cause areas, including ones that don't have to do with emerging technologies. I also suspect that qualitative methodologies from the social sciences and humanities are currently being underused, e.g. I'd be very excited to see thoroughly conducted interviews with AI researchers and certain government staff on several topics.

Of course, all of this reflects that I'm thinking about this in a sufficiently outcome-oriented ethical framework.

My perception also is that within the social networks most tightly coalescing around the major EA organizations in Oxford and the Bay Area it is more common for people to be aware of the contingent "ideological" foundations you point to than one would maybe expect based on published texts. As a random example, I know of one person working at GPI who described themselves as a dualist, and I've definitely seen discussions around "What if certain religious views are true?" - in fact, I've seen many more discussions of the latter kind than in other mostly secular contexts and communities I'm familiar with.]

I wish I could triple-upvote this post.

You can! :P. Click-and-hold for "strong upvote."

I already did this. - I was implicitly labelling this "double upvote" and was trying to say something like "I wish I could upvote this post even more strongly than with a 'strong upvote'". But thanks for letting me know, and sorry that this wasn't clear. :)

I think it is good to keep vigilant and make sure we are not missing good cause areas. However, I think that your examples are not actually neglected. Using this scale, ~$10 billion per year or more is not neglected.

The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. In much the same way as AI safety research is neglected even if AI research more generally is not, likewise in the parallel cases I present, I argue that serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not.

Below I try to look at actual effort towards resolving the problem effectively, so it is not analogous to the total amount of money put into artificial intelligence. For resource depletion, the total amount of money put in could include all the money spent on extracting resources. So I’m just looking at efforts towards resource sustainability, which would be analogous to artificial intelligence safety.

One flaw with the 80,000 hours scale is that it does not take into account historic work. Historically, there has been a tremendous amount of effort devising and implementing practical alternatives to capitalism. So averaged over the last century, it would be far more than $10 billion per year.

Before climate change reached prominence in the environmental movement, much effort was directed at recycling and renewable energy to address the resource depletion issue. Even now some effort in these areas is directed at resource depletion. In previous decades, there was a large amount of effort on family planning, partly because of resource depletion issues, and even now billions of dollars are being spent per year on this. So I am quite sure more than $10 billion per year is being spent on addressing resource depletion.

I’m not easily finding the number of philosophers of religion. However, I know that many people who are disillusioned with the religion they grew up with do not just jump to atheism, and instead try to find the true religion (at least for a certain period of time). So if you add up all the effort hours with some reasonable wage, I’m pretty sure it would average more than $10 billion per year over the last century.

So the neglectedness of these cause areas just cannot compare to that of artificial intelligence safety, which is only tens of millions of dollars per year (and very small more than a decade ago). Of course it is still possible that there are good opportunities within these cause areas, but it is just much less likely than in more neglected cause areas.

[80k and others claim that]: "The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change."

1) Why do you assume this is ideological (in the sense of not being empirically grounded)?

Anecdotally, I was a protest-happy socialist as a teenager, but changed my mind after reading about the many historical failures, grasping the depth of the calculation problem, and so on. This at least felt like an ideological shift dependent on facts.


2) 80,000 Hours have been pushing congressional staffer and MP as top careers (literally #1 or #2 on the somewhat-deprecated quiz) for years. And improving institutions is on their front page of problem areas.

Regarding 1), if I were to guess which events of the past 100 years made the most positive impact on my life today, I'd say those are the defeat of the Nazis, the long peace, trans rights and women's rights. Each of those carries a major socio-political dimension, and the last two arguably didn't require any technological progress.

I very much think that socio-political reform and institutional change are more important for positive long-term change than technology. Would you say that my view is not empirically grounded?

kbog

It's better to look at impacts on the broad human population rather than just one person.

Sure it is, but I know a lot more about myself than I do about other people. I can make a good guess about the impact on myself, or a worse guess about the impact on others. It's a bias/variance trade-off of sorts.

I'd say the two are valuable in different ways, not that one is necessarily better than the other.

If you understand economic and political history well enough to know what's really gotten you where you are today, then you already have the tools to make those judgments about a much larger class of people. Actually, I think that if you were to make the argument for exactly how D-Day or women's rights, for instance, helped you, then you would be relying on a broader generalization about how they helped large classes of people.

Good call. I'd add organised labour if I were doing a personal accounting.

We could probably have had trans rights without Burou's surgeries and HRT, but those surely had some impact, perhaps bringing it forward.

No, I don't have a strong opinion either way. I suspect they're 'wickedly' entangled. I'm just pushing back against the assumption that historical views, or policy views, are necessarily unempirical.

Is your claim (that soc > tech) retrospective only? I can think of plenty of speculated technologies that swamp all past social effects (e.g. super-longevity, brain emulation, suffering abolitionism) and perhaps all future social effects.

Any technology comes with its own rights struggle. Universal access to super-longevity, the tension between allowing births and exploding overpopulation if everyone were to live many times longer, and em rights, to name just a few. New tech will hardly have any positive effect if these social issues are resolved in the wrong way.

Fair. But without tech there would be much less to fight for. So it's multiplicative.

I'm not sure it's possible for me to distinguish between tech and social change. How can I talk about women's rights without talking about birth control (or even just tampons!)?

kbog
  1. Some questions are privileged over others.
  2. There are particular theoretical frameworks for answering questions and analysing situations.
  3. As a result of 1 and 2, certain viewpoints and answers to questions are privileged, while others are neglected as being uninteresting or implausible.

Bad definition. The study of global warming is an "ideology" by these lights.

Questions about the climate are privileged over questions about metallurgy. There are particular theoretical frameworks. Certain viewpoints and answers (like "CO2 emissions cause global warming") are privileged, while others are neglected as being uninteresting ("temperature increase will reduce the value of my house") or implausible ("global warming is a Marxist conspiracy").

You've defined 'ideology' in a way that encompasses basically any system of rational inquiry. This is very far from the common usage of the term "ideology".

If we're going to worry about what is or isn't an 'ideology' at all (which I think is a bad line of inquiry, but here we are) then we should define it as a mix of normative and empirical claims. Feminism wouldn't be an ideology if feminists simply said "let's do whatever it takes to increase the status of women." Libertarianism wouldn't be an ideology if libertarians simply said "let's do whatever it takes to maximize individual liberty." Some people like that definitely exist, but they're not being ideological about it. It's ideological when the mass of the movement takes empirical claims for granted, mixing them in with the normative.

So the argument seems wrong right off the bat, but let's go through the rest.

What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?
What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?
What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?
How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?

None of these are plausible as questions that would define EA. The question that defines EA will be something like "how can we maximize welfare with our spare time and money?" Questions 3 and 4 are just pieces of answering this question, just like asking "when will AGI be developed?" and other more narrow technical questions. Questions 1 and 2 are misstatements, as "moral duties" is too broad for EA and "virtue" is just off the mark. The correct answer to Q1 and Q2, leaving aside the issues with your assumption of three specific cause areas, is to "go to EA, ask the question that they ask, and accept the answer."

While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.

It's certainly true that, for instance, negative utilitarians will ask about reducing suffering, and regular utilitarians will ask about maximizing overall welfare, so yes people with different moral theories will ask different questions.

And empirical claims are not relevant here. Instead, moral claims (utilitarianism, etc) beget different questions (suffering, etc).

You can definitely point out that ideologies, also, beget different questions. But saying that EAs have an ideology because they produce certain questions is affirming the consequent.

effective altruists tend to have a very particular approach to answering these questions.

Climate scientists do not go to sociology or ethnography or phenomenology (and so on) to figure out why the world is warming. That's because these disciplines aren't as helpful for answering their questions. So here we see how your definition of ideology is off the mark: having a particular approach can be a normal part of a system of inquiry.

Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis is worthwhile doing,

Very strong norms? I have never seen anything indicating that we have very strong norms. You have not given any evidence of that - you've merely observed that we tend to do things in certain ways.

despite the fact that relatively little time is spent in the community discussing these issues.

That's because we're open to people doing whatever they think can work. If you want to use ethnography, go ahead.

I therefore content that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions

First, just because EAs have ideological preconceptions doesn't mean that EA as a movement and concept is ideological in character. Every group includes many people with ideological preconceptions. If this is how you define ideologies, then again you have a definition which is both vacuous and inconsistent with the common usage of the term.

Second, there are a variety of reasons to adopt a certain methodological approach other than an ideological preconception. It could be the case that I simply don't have the qualifications or cognitive skills to take advantage of a certain sort of research. It could be the case that I've learned about the superiority of a particular approach somewhere outside EA and am merely porting knowledge from there. Or it could be the case that I think that selecting a methodology is actually a pretty easy task that doesn't require "extensive analysis of pros and cons."

My point is rather that these reasons are not generally discussed by EAs.

For one thing, again, we could know things from valid inquiry which happened outside of EA. You keep hiding behind a shield of "sure, I know all you EAs have lots of reasons to explain why these are inferior causes" but that neglects the fact that many of these reasons are circulated outside of EA as well. We don't need to reinvent the wheel. That's not to say that there's no point discussing them; of course there is. But the point is that you don't have grounds to say that this is anything 'ideological.'

And we do discuss these topics. Your claim "They have not even been on the radar" is false. I have dealt with the capitalism debate in an extensive post here; it turns out that not only does economic revolution fail to be a top cause, but it is likely to do more harm than good. Many other EAs have discussed capitalism and socialism, just not with the same depth as I did. Amanda Askell has discussed the issue of Pascal's Wager, and with a far more open-minded attitude than any of the atheist-skeptic crowd have. I have personally thought carefully about the issue of religion as well; my conclusion is that theological ideas are too dubious and conflicting to provide good guidance, and mostly reduce to a general imperative to make the world more stable and healthy (for one thing, to empower our future descendants to better answer questions about philosophy of religion and theology). Then there is David Denkenberger's work on backup food sources, primarily to survive nuclear winter but also applicable to other catastrophes. Then there is our work on climate change, which indirectly contributes to keeping the biosphere robust. Instead of merely searching the EA Forum, you could have asked people for examples.

Finally, I find it a little obnoxious to expect people to preemptively address every possible idea. If someone thinks that a cause is valuable, let them give some reasons why, and we'll discuss it. If you think that an idea's been neglected, go ahead and post about it. Do your part to move EA forward.

Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm

They didn't assume that ex nihilo. They have reasons for doing so.

Strangely, it also pursues some work on highly obscure topics such as the aestivation solution to the Fermi paradox

The Fermi Paradox is directly relevant to estimating the probability of human extinction and therefore quite relevant for judging our approach to growth and x-risk.

or even substantially reforming global economic institutions

They recommend an economics PhD track and recently discussed the importance of charter cities. More on charter cities: https://innovativegovernance.org/2019/07/01/effective-altruism-blog/

Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on,

Huh? Tomasik's writings have extensively debunked naive ecological assumptions about reducing suffering, this has been a primary target from the beginning. It seems like you're only looking at FRI and not all of Tomasik's essays which form the background.

Anything which just affects humans, whether it's a matter of socialism or Buddhism or anything else, is not going to be remotely as important as AI and wildlife issues under FRI's approach. Tomasik has quantified the numbers of animals and estimated sentience; so have I and others.

As in the case of FHI, they also seem to find room for some topics of highly dubious relevance to any of EAs goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse

You don't see how cooperation across universes is relevant for reducing suffering?

I'll spell it out in basic terms. Agents have preferences. When agents work together, more of their preferences are satisfied. Conscious agents generally suffer less when their preferences are satisfied. Lastly, the multiverse could have lots of agents in it.

That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology

Except that we're all happy to revise them while remaining EAs, if good arguments and evidence appear. The exception might be some kind of utilitarianism, but as I said, moral theories are not sufficient for an ideology.

In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology

No, they constitute our priors.

We should critically analyse this ideology, understand its strengths and weaknesses

Except that we are constantly dealing with analysis and argument about the strengths and weaknesses of utilitarianism, of atheism, and so on. This is not anything new to us. Just because your search on the EA Forum didn't turn anything up doesn't mean we're not familiar with them.

This is essentially what all other ideologies do – it is how the exchange of ideas works

No, all other ideologies do not critically analyze themselves and understand their strengths and weaknesses. They present their own strengths, and attack others' weaknesses. It is up to other people - like EAs - to judge the ideologies from the outside.

and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology

This is just a silly comment. When we say that EA is a question, we're inviting you to tell us why utilitarianism is wrong, why Bayesianism is wrong, etc. This is exactly what you are advocating for.

kbog

Now I took a look at your claims about things we're "ignoring", like sociological theory and case studies. First, you ought to follow the two-paper rule: a field needs to show its relevance. Not every line of work is valuable. Just because a paper is peer-reviewed and the author has a PhD doesn't mean they've accomplished something that is useful for our particular purposes. We have limited time to read stuff.

Second, some of that research really is irrelevant for our purposes. There are a lot of reasons why. Sometimes the research is just poor quality or not replicable or generalizable; social theory can be vulnerable to this. Another issue is that here in EA we are worried about one movement and just a few high-priority causes. We don't need a sweeping theory of social change; we need to know what works right here and now. This means that domain knowledge (e.g. surveys, experiments, data collection) for our local context is more important than obtaining a generic theory of history and society. Finally, if you think phenomenology and existentialism are relevant, I think you just don't understand utilitarianism. You want to look at theories of well-being, not these other subfields of philosophy. But even when it comes to theories of well-being, the philosophy is mostly irrelevant, because happiness, preferences, etc. match up pretty well for all practical intents and purposes (especially given measurement difficulties: we are forced to rely on proxy measures like GDP and happiness surveys anyway).

Third, your claim that these approaches are generally ignored is incorrect.

For sociological theory - see ACE's concepts of social change.

Historical case studies - See the work on the history of philanthropy and field growth done by Open Phil. See the work of ACE on case studies of social movements. And I'm currently working on a project using case studies to assess catastrophic risks: an evaluation of historical societies to see what they knew and might have done to prevent GCRs if they had thought about them at the time.

Regression analysis - Just... what? This one is bizarre. No one here is ideologically biased against any particular kind of statistical model; we have cited plenty of papers which use regressions.

Overall, I'm pretty disappointed that the EA community has upvoted this post so much, when its arguments are heavily flawed. I think we are overly enthusiastic about anyone who criticizes us, rather than judging them with the same rigor that we judge anyone else.

Another example contradicting your claims about EA: in Candidate Scoring System I went into extensive detail about methodological pluralism. It nails all of the armchair philosophy-of-science stuff about how we need to be open-minded about sociological theory and so on. I have a little bit of ethnography in it (my personal observations). It is where I delved into capitalism and socialism as well. It's written very carefully to dodge the all-too-predictable critique that you're making here. And what I have done to make it this way has taken up an extensive amount of time, while adding no clear accuracy to the results. So you can see it's a little annoying when someone criticizes the EA movement yet again, without knowledge of this recent work.

Yet, at the end of the day, most of the papers I actually cite are standard economics, criminological studies, political science, and so on. Not ethnographies or sociological theories. Know why? Because when I see ethnographies and sociological theory papers, and I read the abstract and/or the conclusion or skim the contents, I don't see them giving any information that matters. I can spend all day reading about how veganism apparently green-washes Israel, for instance, but that's not useful for deciding what policy stance to take towards Israel or farming. It's just commentary. You are making the incorrect assumption that every line of scholarship that vaguely addresses a topic is going to be useful for EAs who want to actually make progress on it. This is not a question of research rigor; it's simply a matter of what these papers are actually aiming at.

You know what it would look like, if I were determined to include all this stuff? "Butler argues that gender is a social construct. However, just because gender is a social construct doesn't tell us how quality of life will change if the government bans transgender workplace discrimination. Yeates argues that migration transforms domestic caring into global labor chains. However, just because migration creates global care chains doesn't tell us how quality of life will change if the government increases low-skill immigration." And on and on and on. Of course it would be a little more nuanced and complex but you get the idea. Are you interested in sifting through pages of that sort of prose? Does it get us closer to understanding how to improve the world?

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

If we include Helen's piece, I think it might be worth including both this and the first comment on this post, to show some other aspects of the picture here. (I think all three pieces are getting at something important, but none of them is a great overall summary.)

I agree that in practice, EA does have an ideology. The majority of EAs share the assumptions of rationality, scientific materialism, utilitarianism and some form of techno-optimism. This explains why the three cause areas you mention aren't taken seriously by EA. And so if one wants to defend the current focus of most EAs, one also has to defend the assumptions - the ideology - that most EAs have.

However, in principle, EA does not prevent one from adopting the proposed cause areas. If I became convinced that the most effective way to do good was to fight for theocracy, Marxism and against the depletion of natural resources, I would do so and still call myself an EA.

Maybe it would be useful to distinguish between EA the general idea, which can be seen as just a question, and EA the real world movement. Though I’m not sure if defining EA the general idea as a question is really that accurate either. It seems to me that in practice, the ideology of rationality, utilitarianism... comes first, and EA is just the injunction to actually do something about it. “If you think that a rational, consequentialist approach to utilitarianism constitutes ‘good’, then be consistent and act accordingly.” And so maybe EA is at its core an ideological movement and the question of “How can I do the most good, with the resources available to me?” is just one of its tenets.

Hi everyone, thanks for your comments. I'm not much for debating in comments, but if you would like to discuss anything further with me or have any questions, please feel free to send me a message.

I just wanted to make one clarification that I feel didn't come across strongly in the original post. Namely, I don't think it's a bad thing that EA is an ideology. I do personally disagree with some commonly believed assumptions and methodological preferences, but I think the fact that EA itself is an ideology is a good thing, because it gives EA substance. If EA were merely a question, I think it would have very little to add to the world.

The point of this post was therefore not to argue that EA should try to avoid being an ideology, but that we should recognise the assumptions and methodological frameworks we typically adopt as an EA community, critically evaluate whether they are all justified, and then, to the extent they are justified, defend them with the best arguments we can muster, of course always remaining open-minded to new evidence or arguments that might change our minds.

I liked this post, and agree with many of these comments regarding types of analysis that are less common within EA.

However, I'll make the same comment here that I do on many other posts: Given that EA doesn't have much of (thing X), how should we get more of it?

For your post, my questions are:

  • Which of the types of analysis you mentioned, if any, do you think would be most useful to pursue? You could make an argument for this based on the causes you think are most important to gain information about, the types of analysis you think EA researchers are best-suited to pursue, the missions of the organizations best-positioned to make new research grants, etc.
  • Are there types of EA research you think should be less popular? Do the research agendas of current EA orgs overlap in ways that could be "fixed" by one org agreeing to move in a different direction? Does any existing research stand out as redundant, or as "should have been done using/incorporating other methodologies"?
  • Are there fields you think have historically been better than EA at getting "correct answers" about questions that are or ought to be interesting to EAs -- or, if not "better", at least at "getting some correct answers EA missed or would have missed"? What are those answers?
    • This question might run headlong into problems around methodologies EA doesn't use or give credit to, but I'd hope that certain answers derived by methods unpopular within EA might still be "verifiable" by EA, e.g. by generating results the movement can appreciate/understand.

This is a good question, but it makes a bit of a leap, and I don't think answers to it should have been included in the original article. The article doesn't actually say EA shouldn't be ideological, just that it is. I read it as descriptive, not prescriptive. I think the article was strong just pointing out ideological aspects of EA and letting readers think about whether they're happy with that.

I don't think the post was wrong not to address any of these questions (they would all require serious effort to answer). I only meant to point out that these are questions which occurred to me as I read the post and thought about it afterward. I'd be happy if anything in my response inspired someone to create a follow-up post.

I misunderstood - thanks for clarifying!

Very big fan of this post. It is one of the best, substantial critiques of EA as it currently is that I've read/heard in a while. There are lots of parts that I'd love to delve into more but I'll focus on one here, which seems to be one of the most important claims:

The three key aspects of this argument are expert belief in plausibility of the problem, very large impact of the problem if it does occur, and the problem being substantively neglected. My argument is that we can adapt this argument to make parallel arguments for other cause areas.

Sure, but this seems to miss out the tractability consideration. Your post barely mentions tractability or cost-effectiveness and the INT framework is really just a way of thinking about the expected value / cost-effectiveness of particular actions. I'd guess that some of the areas you list have been ignored or rejected by many aspiring EAs because they just seem less tractable at first glance.
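
To illustrate the point (this is my own toy sketch with made-up numbers, not 80,000 Hours' actual figures or methodology), in the usual importance/neglectedness/tractability framing the factors multiply, so a cause can score well on scale and neglectedness and still come out poorly overall if it is intractable:

```python
# Toy multiplicative model of marginal cost-effectiveness; all inputs are hypothetical.
def marginal_value(scale, tractability, neglectedness):
    """Rough 'good done per extra unit of resources' under a multiplicative INT-style model."""
    return scale * tractability * neglectedness

# Two hypothetical causes with the same scale but different tractability and neglectedness:
cause_a = marginal_value(scale=1000, tractability=0.3, neglectedness=1.0)    # neglected and fairly tractable
cause_b = marginal_value(scale=1000, tractability=0.01, neglectedness=0.1)   # crowded and hard to move

print(cause_a, cause_b)  # roughly 300 vs 1: ignoring tractability would hide most of this gap
```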

I do think it's important that the EA community explores plausibly high-impact cause areas, even if only a handful of individuals ever focus on them. So I'd be pleased if people took this post as an encouragement to explore the tractability of contributions to various areas that have been neglected by the EA community so far.

I gave this post a strong upvote. It articulated something which I feel but have not articulated myself. Thank you for the clarity of writing which is on display here.

That said, I have some reservations which I would be interested in your thoughts on. When we argue about whether something is an ideology or not, we are assuming that the word "ideology" is applied to some things and not others, and that whether or not it is applied tells us useful things about the things it is applied to.

I am convinced that on the spectrum of movements, we should put effective altruism closer to libertarianism and feminism than the article you're responding to would indicate. But what is on the other end of this spectrum? Is there a movement/"ism" you can point to that you'd say we should put on the other side of where we've put EA -- **less** ideological than it?

I really like this post. Thanks for writing it!

I suspect that an even easier way to get to the conclusion that EA is an ideology is to just try to come up with any plausible statement of what the concept means. At a minimum, I think the concept includes:

  1. A system or set of beliefs that tend to co-occur.
  2. Some set of goals which the beliefs are trying to accomplish.

EAs tend to have a shared set of goals and tend to have shared beliefs, so EA is an ideology.

William MacAskill says the following in a chapter in The Palgrave Handbook of Philosophy and Public Policy:

As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good.

But then he continues to highlight various normative commitments, which indicate that it is, in addition to being a question, an ideology:

The project is:
  • Maximizing. The point of the project is to try to do as much good as possible.
  • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
  • Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
  • Impartial. Everyone’s welfare is to count equally.

Normative commitments aren't sufficient to show that something is an ideology. See my comment. Arguably 'science-aligned' is methodological instead but it's very vague and personally I would not include it as part of the definition of EA.

There are probably more than enough comments already, but I just have to say that I think EA's ideology is not essential and not set in stone.

The ideological aspects are secondary to the question of how to do the most good. Therefore in theory EA can change its ideology without changing its essence, which may set it apart from other ideologies.

 

P.S. I loved this post, thanks so much for writing it out. I can't help but be convinced by most of the post: EA may not be an ideology, but it's been acting like one as of late.

I only skimmed part of this, but I would agree that EA is an ideology, and would add that 'ideology' is a fairly vague term. I include science and religion as examples of ideological movements, as well as current 'popular', 'social justice' and 'environmental' movements (e.g. Extinction Rebellion, anti-capitalists, yellow vests, alt-right). Maybe the two best sources for this view might be K. Mannheim's 'Ideology and Utopia' and H. White's 'Metahistory', though both of them are vague and out of date. (I might add Pierre Bourdieu.)

EA has a specific kind of language, based mostly on philosophy, though apparently many people are either into IT or just have 'regular careers' (at an EA event I met a police officer who works in a very rough area, as well as an international poverty researcher).

The mathematics I've seen associated with EA, though they talk about it, is minimal. The only worthwhile math I've seen was in a paper cited by a university grad student, and the paper was from a research center with no connection to EA. Most of the math in EA papers is what I call 'philosophical math', similar to Gödel's 'Ontological Proof of God': just a philosophical paper which shows that 'God' can be represented by an 'ultrafilter' (a mathematical term). (I think Gödel didn't think he had proved God existed, only that God was already implicit in any discussion of her or it.)

EA ideology is like Christianity to me, and I view there to be two kinds of Christians who both claim the name. One kind (Episcopalians, MLK Baptists, liberal churches) says they believe in loving your neighbor as yourself. The other kind says accept Jesus as savior and the Bible as the literal word of God, and the rest follows: no LGBTQ+, women preachers, abortions, etc.

EA has similar divisions. Some EA people have environmentalist sympathies (e.g. carbon sequestration, saving rainforests); others worry about insect and wild animal suffering, so they favor the destruction of wilderness areas where animals suffer. Some support both habitat destruction and renewable energy development, which by any basic logical analysis makes exactly zero sense: if they want to reduce wild animal suffering, then they should be against renewable energy and in favor of Canadian tar sands and Arctic oil development, mountaintop removal and strip coal mining, etc., because these destroy more natural habitat where insects and animals suffer.

I doubt all people who identify with EA will ever have a constant set of beliefs, any more than Christians will (one can remember that Martin Luther created the Protestants, who split from the Catholics yet retained the term Christian).




... or attempts to rationalist differences on the basis of social stability or cohesion.

"rationalist" => "rationalize"

I'd like to promote a norm of suggesting typo corrections via private message, rather than in comments. This helps to keep comments free of clutter, especially on long posts that might have many typos. The only person interested in seeing a typo comment is likely to be the author.

You could argue that typo comments help the author avoid getting lots of PMs from different people that point out the same typos. For now, I'd guess that very few people will send such PMs regularly, and that we should favor PMs > comments for this. This may change as the Forum's readership increases, or if authors begin to complain about getting too many typo PMs.

Also:

I therefore content that these methodological choices are primarily the result of ideological preconceptions...

"content" => "contest" (I think? It's def a typo of some sort.)

Good post, by the way.

"contend"