Feedback welcome: www.admonymous.co/mo-putera
I work with CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, MEL, and general org-boosting to support policies that incentivise innovation and ensure access to antibiotics to help combat AMR. I was previously an AIM Research Program fellow, was supported by an FTX Future Fund regrant and later Open Philanthropy's affected grantees program, and before that I spent 6 years doing data analytics, business intelligence and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.
I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):
I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].
I think it's more nuanced than just "the EA movement neglects systemic change", since even as far back as 2015 Rob Wiblin at 80K could list all these systemic change initiatives:
Here are some people who identify as effective altruists working on systemic change:
- Most recent Open Philanthropy research and grants, on immigration reform, criminal justice reform, macroeconomics, and international development, are clearly focussed on huge structural changes of various kinds.
- The OpenBorders.info website collates research on and promotes the option of dramatic increases in migration from poor to rich countries.
- A new startup called EA Policy, recommended for financial support by my colleagues at EA Ventures, is testing the impact of making submissions to open policy forums held by the US Government during this summer.
- Our colleagues at the Global Priorities Project research what should be the most important reform priorities for governments, and how they can improve cost-benefit and decision-making processes.
- One of GiveWell’s main goals from the beginning, perhaps its primary goal, has been to change the cultural norms within the nonprofit sector, and the standards by which they are judged by donors. They wanted to make it necessary for charities to be transparent with donors about their successes and failures, and run projects that actually helped recipients. They have already significantly changed the conversation around charitable giving.
- Giving What We Can representatives have met with people in the UK government about options for improving aid effectiveness. One of its first and most popular content pages debunks myths people cite when opposing development aid. One of the first things I wrote when employed by Giving What We Can was on the appropriate use of discount rates by governments delivering health services. Until recently one Giving What We Can member, who we know well, was working at the UK’s aid agency DfID.
- Some 80,000 Hours alumni, most of whom unfortunately would rather remain anonymous, are going into politics, think-tanks, or setting up labour mobility organisations or businesses that facilitate remittance flows to the developing world.
- Several organisations focussed on existential risk (FHI, CSER and FLI jump to mind) take a big interest in government policies, especially those around the regulation of new technologies, or institutions that can improve inter-state cooperation and prevent conflict.
- 80,000 Hours alumni and effective altruist charities work on or donate to lobbying efforts to improve animal welfare regulation, such as Humane Society US-FARM. Other activists are working for dramatic society-wide changes in how society views the moral importance of non-human animals.
Rob's guesses at how this perception that EA neglects systemic change might've formed:
- ‘earning to give’ was one of our most media friendly and viral ideas, and has dominated coverage of 80,000 Hours and effective altruism among the general public, to our growing consternation. Earning to give is usually perceived as anti-systemic change.
In fact, someone who ‘earned to give’ in order to pay the salary of someone else working for systemic change is working for systemic change themselves. In that sense ‘earning to give’ is simply neutral on the systemic vs non-systemic change issue. Communist revolutionary Friedrich Engels is a classic example of this approach, though my guess is he personally did more harm than good.
I would also argue though that creating a social expectation that to be decent people, the rich should give away a large fraction of their wealth to others, is itself a form of systemic change.
- Effective altruists are usually not radicals or revolutionaries, as is apparent from my list above. My attitude, looking at history, is that sudden dramatic changes in society usually lead to worse outcomes than gradual evolutionary improvements. I am keen to tinker with government or economic systems to make them work better, but would only rarely want to throw them out and rebuild from scratch. I personally favour maintaining and improving mostly market-driven economies, though some of my friends and colleagues hope we can one day do much better. Regardless, this temperament for ‘crossing the river by feeling the stones’ is widespread among effective altruists, and in my view that’s a great thing that can help us avoid the mistakes of extremists through history. The system could be a lot better, but one only need look at history to see that it could also be much worse.
However, even this remains only an empirically founded belief – if I find evidence that revolutionary change has been better than I thought, I will reconsider working for revolutionary changes.
- Effective altruists prefer to pursue systemic changes that are more likely to be achieved, all else equal. Sometimes we view existing attempts at systemic change as more symbolic or idealistic than realistic, and so push back against them. For example I wrote a post years ago about why it’s not a good use of time to work on US gun control. Of course this has nothing to do with systemic change specifically: we frequently also push back against non-systemic approaches that we don’t expect to help others very much. And I try to apply my pragmatism to the systemic changes that in my heart I would love to love: enthusiastic as I am about opening borders, it may be an impossible ask in the current political climate.
- We have been taking on the enormous problem of ‘how to help others do the most good’ and had to start somewhere. The natural place for us, GiveWell and other research groups to ‘cut our teeth’ was by looking at the cause areas and approaches where the empirical evidence was strongest, such as the health improvement from anti-malarial bednets, or determining in which careers people could best ‘earn to give’.
Having learned from that research experience we are in a better position to evaluate approaches to systemic change, which are usually less transparent or experimental, and compare them to non-systemic options. This is very clear from the case of Open Philanthropy, which is branching out from GiveWell and is more open to high-risk and ‘unproven’ approaches like political advocacy than GiveWell itself.
This is a 10+ year old snapshot of the EA movement's efforts w.r.t. systemic change. I wouldn't be surprised if there's been much more since, e.g. some ACX grantees, some funders' funds, etc. I do think there's something to the critique, but I'd like to understand it better.
Ozy Brennan's Identifying healthy high-demand groups summarises takeaways from Abuses in the Religious Life and the Path to Healing, a book about spiritual abuse written by Dysmas de Lassus, the prior general (person in charge) of the Order of Carthusians. I've spent most of my life in high-demand groups of all kinds so this was interesting to read.
A high-demand group having a lot of people with good virtues isn't a sign it's healthy; toxic groups can have even more of these virtues:
There is no general pattern where people in healthy high-demand communities are, compared to people in toxic high-demand communities, more hardworking, generous, loving, self-controlled, courageous, honest, tolerant, clever, helpful, cheerful, or even compassionate (to outsiders).
Toxic high-demand communities often create a culture of competition to be the most ethical. ... So a toxic high-demand community can have more virtuous members and a greater positive impact on the world than a good high-demand community.
At the same time, toxic high-demand communities generally pervert genuine virtues... For Catholics, humility becomes self-hatred; the desire to give of yourself to others becomes complete self-denial; forgiveness becomes forgetting the crimes of unrepentant abusers. For rationalists and effective altruists, consequentialism becomes tolerance of wrongdoing because of some far-off future benefit; agency and taking ideas seriously become hearing a bad argument and doing arbitrary bad stuff because of it... The members of a toxic Catholic community may well come off as humble, self-sacrificing, and forgiving; the members of a toxic effective altruist community, consequentialist, agentic, and dedicated to self-improvement.
Actual signs of a high-demand group being healthy:
A healthy high-demand community is patient with its members. It doesn’t expect perfection immediately. It doesn’t hold people to unreasonable standards. It accepts that mistakes and failures are part of the human condition. ...
One of the most important green flags in a high-demand community is the personality of the high-status people. High-status people should readily admit that they make mistakes, believe wrong things, and have personality flaws. ... High-status people should be aware of the suffering of those around them, particularly suffering that’s related to the beliefs and commandments of the high-demand group. If possible, high-status people should do something to alleviate the suffering of group members; if not possible, they should provide comfort and understanding. Most of all, high-status people should be genuinely kind. Not righteous, not self-sacrificing, not heroic, not good. Kind. ...
Any high-demand community is going to make a lot of rules about how you live your life—it’s inherent to the enterprise—but a healthy high-demand community limits its rules to those matters which are really important... As much as is practicable, a good high-demand community allows you to make your own judgments about how to put rules into practice in your own life. ...
In particular, the community and the high-status people should encourage you to take independent initiative... High-status people should praise you for coming up with your own ideas and projects. They should provide help, especially help that isn’t too costly for them (such as making introductions or publicly announcing your project at a meeting).
The simplest criterion de Lassus lays out is the most powerful: are you happy?
Young me used to be confused when people asked "are you happy?" in relation to the high-demand groups I was in. How was personal happiness at all relevant to the collective mission, from which purpose derives? Later on I would meet plenty of excited members of high-demand groups, which was quite the update; there were in fact people in the "ideal" quadrant:
No one can promise you a life without suffering. Being part of a high-demand group may well make you suffer more. ... But ultimately, most of the time, if you’re part of a healthy high-demand group, you should feel a sense of peace and joy. You should reflect on your life, or at least those parts influenced by the high-demand group, and think you know, I’m glad I’m doing this. When it comes right down to it, I like the way my life is going.
Toxic groups are aware that people prefer to stay in groups that make them happy, so they Goodhart it. Often, a group will teach that not being happy is a sin, or that crying yourself to sleep is real joy, a deep and pure kind of joy that the uninitiated would mistake for misery. But, even if the group is gaslighting you, you can still tell how you feel. Take a few hours by yourself or with a trusted friend and reflect: how do I feel about my life? Is my life okay? Do I feel simple pleasures, such as appreciation of a sunrise or companionship with friends or satisfaction at a job well done? If I look back on the past year or two, do I feel a sense of contentment about how it went?
If you are persistently unhappy, the high-demand group may be toxic or it might be all right. But it is clearly wrong for you.
(related)
I anticipate that I'll pay my impact bills this way, but I'm not maximizing impact. I'm maximizing EA ideas.
Would you mind saying more? Not a reasoning-transparent justification, more so a sketch of the high-level generators. Wondering if it's along the lines of Richard Ngo's
I think “maximize expected utility while obeying some constraints” looks very different from actually taking non-consequentialist decision procedures seriously. In principle the utility-maximizing decision procedure might not involve thinking about “impact” at all. And this is not even an insane hypothetical, IMO thinking about impact is pretty corrosive to one’s ability to do excellent research for example.
Mechanics-wise it can take many forms, e.g. the FTXFF regranting program, THL's supported regranting, EA AWF, Giving Green, Manifund, etc (pretty sure you know all this! sharing for others too). Probably a committee structure already centralises things a bit much, contra the decentralisation + worldview diversification motivations behind regranting, although it may have its advantages from a capacity-building POV since centralisation enables specialisation.
Giving Green are welcome to correct me on this (I'm invoking Cunningham's law here) -- the impression I got from their strategy report is "this isn't that straightforward to answer"
For land use change, we looked at the surface area of land conversion avoided as a heuristic for scale, while weighing this against the importance of different biomes in terms of ecosystem services. For overfishing, we used heuristics such as the harmfulness and extent of fishing practices and the percentage of fish caught.
Because biodiversity is an inherently multi-dimensional and non-fungible good, comparing specific impact strategies in terms of scale is methodologically challenging. For this reason, we prioritized among the five direct drivers of biodiversity loss, for which quantitative comparisons in scale exist.
Ecosystem services: While there are many indicators for biodiversity, we decided to focus on ecosystem services as the biodiversity indicator to prioritize. Since ecosystem services describe the benefits that humans derive from ecosystems, this definition is closest to Giving Green's mission to maximize human and ecological well-being. In practice, there are often no concrete quantitative indicators of ecosystem services for the strategies that we evaluate, so we often rely on heuristics from other indicators.

Co-benefits for humans and animals: In principle, co-benefits for human health, development, and animal welfare were not part of our prioritization, but we took care to not recommend strategies that may cause harm.
Why not just pick number of species going extinct? My guess at the argument, pieced together from these scattered quotes from Founders Pledge's guide to ecosystem philanthropy, cited in GG's report:
... while biodiversity is a useful measure of an ecosystem’s organizational and structural health, it should not be used as the sole objective to maximize for philanthropists who are interested in the protection of ecosystems more broadly. Intuitively one might argue that conservation should maximize the world’s biodiversity as the abundance and diversity of species are worth protecting. However, using biodiversity as its own metric runs into various issues (see Brennan and Lo 2022 for a further overview). ...
Biodiversity vs wilderness
The focus of protecting and restoring ecosystems is to preserve the natural state of an ecosystem before human interference. This focus on wilderness (or naturalness) often comes in conflict with pure biodiversity maximization. For example, in arid ecosystems, human use can bring about higher biodiversity: a farm built in a desert landscape will provide more habitat for species than the original ecosystem did. Similarly, Brennan (1988) describes temperate forests in which limited land clearing increases the diversity of tree species. A response to this critique might be that, at least among conservation projects, one should choose those that most guard against biodiversity loss. However, even in this limited case, biodiversity is just one consideration among many. Many areas that are regarded as important to conserve, such as many US national parks, are generally lower in biodiversity and instead prized because they are deemed aesthetic or sublime (Sarkar 2005). As such, a primary focus on biodiversity would likely rule out many ecosystems widely deemed important to conserve and could even suggest actions that would go against the preservation of natural ecosystems. Rather, biodiversity should be one consideration among many.
Biodiversity vs Ecosystem Vigor & Services
Many of the most productive ecosystems are not very species-rich. Similarly, the ecosystems that provide the most services for humans are on average lower in biodiversity (such as salt marshes for water filtration). As such, a focus on biodiversity alone might lead to prioritizing ecosystems that are high in different species but are not vibrant in the sense that they contain relatively few ecological processes or provide few services for humans (Brennan and Lo 2022). Biodiversity is therefore best understood as an element of ecosystem health rather than its own metric based on which to prioritize. ...
Heuristics for prioritization
There are various metrics in the conservation field that one might use as heuristics for prioritizing. Biodiversity, the abundance and diversity of species, for example, has intuitive appeal. However, as described above, it often runs counter to other considerations such as wilderness/naturalness and the functioning of ecosystems. Another metric, ecosystem health, provides a more holistic framework to measure the ecological integrity of ecosystems. Prominent models focus on the organization and functional structures of ecosystems and their resilience. As such, they capture a more complete range of ecosystem integrity, and align more closely with the popular notion of “protecting vibrant ecosystems”. Philanthropists should focus on interventions that look at ecosystems holistically, aiming to preserve their structure, functioning, and resilience as opposed to focussing on singular metrics such as biodiversity maximization.
also re: your 2nd question, FP's guide also has a short section on how they're "uncertain about any particular prioritization (of ecosystem protection based on reducing animal suffering) at this point", due to population ethics dilemmas and uncertainty over whether wild animal lives "are on net lives of suffering", and I'm guessing GG's report implicitly adopts this stance.
Thought to share some infographics on animal advocacy org expenses from the Stray Dog Institute's 2024 State of the Movement report, which I learned about via Moritz's excellent post.
Most org spending is in North America and Europe:
North American and European orgs accounted for most of the spend in sub-Saharan Africa and LATAM & the Caribbean, despite spending (say) only ~1% of their total expenses in SSA:
I don't have any good sense of how this Global North-dominated funding potentially skews priorities, but this drill down by animal category may be a start:
The same goes for this drill down by intended outcome. Naively, SSA's allocation looks a lot like North America's, except that the latter has a greater proportion of org spending going to increasing availability of animal-free products, which makes sense given relative wealth:
For what it's worth, here's what the funding allocations look like for animal categories as a whole: mostly terrestrial animals, mostly farmed.
I'd be keen to get takes from folks in the know on what seems underfunded here. Farmed insects jump out: just $135k out of $260m overall (~0.05%) seems nuts.
I also wonder about the skewing of priorities due to outside funding. Moritz wrote
Why this could matter:
- Strategy and local context
Money shapes movements. It affects which projects get tried, which organisations survive, what gets measured, and what kinds of risks are acceptable. If most capital comes from a different region, that may (subtly, unintentionally) shape priorities and assumptions. Some strategic questions are deeply context-dependent (politics, culture, institutional incentives, reputational dynamics). Local donors may bring different assumptions and highlight different opportunities or overlooked risks.
which I agree with; another angle is Tom & Karthik's point that
- Most farmed animals live in low- and middle-income countries (LMICs), but traditional Western animal advocacy tactics often fail there due to fragmented supply chains, informal markets, and weak enforcement.
- Creating meaningful change for these animals requires exploring both building infrastructure for change and trying alternative pressure points like farmer cooperatives and local institutions.
although it also isn't clear to me from the infographics above whether meaningful change in their sense would be reflected in the drill downs.
My go-to diagram for illustrating your point, from (who else?) Scott Alexander's varieties of argumentative experience:
[Graham’s hierarchy of disagreements] is useful for its intended purpose, but it isn’t really a hierarchy of disagreements. It’s a hierarchy of types of response, within a disagreement. Sometimes things are refutations of other people’s points, but the points should never have been made at all, and refuting them doesn’t help. Sometimes it’s unclear how the argument even connects to the sorts of things that in principle could be proven or refuted.
If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this:
Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability.
If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them.
If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect.
More on the high-level generators of disagreement (emphasis mine, other than 1st sentence):
High-level generators of disagreement are what remains when everyone understands exactly what’s being argued, and agrees on what all the evidence says, but have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them and kind of made-up justifications. These are almost impossible to resolve even in principle. ...
Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like if it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.
You can always try debating these points further. But these sorts of high-level generators are usually formed from hundreds of different cases and can’t easily be simplified or disproven. Maybe the best you can do is share the situations that led to you having the generators you do. Sometimes good art can help.
The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you do, that you might be intellectual equals who are both equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.
(also related: Value Differences As Differently Crystallized Metaphysical Heuristics and the previous essays in that series)
Regarding your "something clearly rational here that's kinda unintuitive to get a grip on", I think of it as epistemic learned helplessness as a "social safety valve" to the downside risk of believing persuasive arguments that can (potentially catastrophically) harm the believer, cf. Reason as memetic immune disorder.
There's a lot more to the study of disagreement if you're keen, shame it's mostly just one person working on it and they're busy writing a book nowadays.
AI race accelerant -> shorter timelines -> riskier for everyone (not just EA) was my read.