Summary: It is possible that effective altruism misses out on pursuing higher-impact courses of action, backing more impactful organizations, and/or recommending better career paths to individuals. Two key contributing factors may be: (1) paying insufficient attention to how much influence EA has relative to other global actors, and to how that relative influence could be increased, and (2) restricting attention to activities backed by academic research, instead of also considering activities that reasoning/EV estimates suggest would be higher impact than academic research–backed activities. A broader issue is that EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and the recommendations issued to the community.
Introduction
Several times a year, St. Jude Children's Research Hospital spends more than the effective altruism movement has allocated to good causes in its entire lifetime, including the Open Philanthropy Project's disbursements. Samasource has lifted tens of thousands of people out of poverty with a self-sustaining model that, unlike GiveDirectly's, does not rely on continual donor funding, providing a tremendous multiplier on the funds initially used to establish the organization. And Kevin Briggs, a California Highway Patrol officer, singlehandedly stopped more than 200 people from jumping off the Golden Gate Bridge over the course of his career. These examples highlight potential shortcomings of the effective altruism movement at the movement-wide, organizational, and individual levels.
Movement-Wide
Is the EA movement on track to significantly change the world, or is it merely a very small group of actors making a very limited difference, with an unclear future trajectory? If the answer is closer to the latter, we should consider whether this is the optimal way to proceed, given the resources at the movement's disposal.
The EA movement originally promoted the idea of earning to give, a concept that was later deemphasized as a key talking point in favor of theoretically more impactful options. But the fact that a movement oriented around maximizing impact started out with earning to give is worrying. Even if earning to give became popular with hundreds to thousands of people, which in fact ended up happening, the resulting impact on the world would be fairly minimal compared to the impact other actors have. It is possible that the EA movement is not pursuing courses of action that could have a substantially higher impact than what is currently happening.
As an example, in terms of financial resources, the entire EA community and all of its associated organizations are being outspent and outcompeted by St. Jude's alone. Earning to give might not resolve the imbalance, but getting a single additional large donor on board might. If that had been promoted when EA first started instead of earning to give, the movement could look completely different right now. Perhaps EAs would be fanning out across high-net-worth advisory offices to do philanthropic advising instead of working at Jane Street. Perhaps EAs would be working as chiefs of staff for major CEOs to have a chance at changing minds. Perhaps the movement would conduct research on how Warren Buffett decided on the Bill and Melinda Gates Foundation instead of less optimal choices, and on whether outreach, networking, or persuasion methods would be effective.
As another example, there apparently aren't enough high-impact jobs to go around, yet there are, in theory, billions of dollars available. How exactly is this possible? Certainly key EA organizations may want the best, most mission-aligned individuals, which requires slow and careful hiring. But the vast majority of successful startups did not require staff who were perfectly motivated to, say, optimize freight logistics. It's a stretch to say that hundreds to thousands of EAs should be working at corporations when they could be doing something better, like direct work. There are multitudes of high-impact activities that may not require small, ultra-curated teams and that can involve currently underutilized community members.
As a final example, EA is very weak compared to the other forces in the world in every relevant sense of the term: weak in financial resources, weak in number of people, weak in political power. This weakness is part of why the world has problems in the first place, and why Nate Soares says he spent his college years designing a societal system that "ratchets towards optimality." Does it matter whether we develop theories to reduce certain types of major risks if we are not the key decision makers behind when nuclear missiles are launched or how much power the AI safety committee has within a company? Perhaps EA should consider acquiring more political power, media power, or other forms of power to have a greater impact.
The problems I have mentioned and the potential alternative courses of action are merely ideas. Substantial strategic research and analysis is required to assess the current course of action and to evaluate better ones. It's not clear to me why there has been such limited discussion of and progress on this so far, unless everyone thinks that being financially outmatched by St. Jude's for the next 5+ years is an optimal state of affairs that does not require community strategizing to address.
Organizations
According to the "official" Introduction to Effective Altruism, EA is a "research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible." Setting aside whether EA is most appropriately described as a "research field," in practice there is a significant difference between using evidence and using reasoning to work out how to maximize impact. Historically, EA has focused on backing "reputable registered tax-advantaged nonprofit organizations of moderate team and budget size that consistently pursue the same activity/activities for long periods of time, with all activities backed by research such as RCTs focused on interventions to improve health outcomes." But is this actually the right approach?
The vast majority of ventures, decisions, etc. made in the world must be made with limited information, for which there are no RCTs available. Samasource, for example, may very well be orders of magnitude more effective per dollar of total lifetime donations than GiveDirectly. The longer Samasource runs a financially self-sustaining model, the better the impact per donor dollar becomes. But Samasource was not started on the basis of rigorous research. If we pretend it was never started and it sought funding from the EA community today to launch, Samasource may very well have gone unfunded and never have existed, which is a problem if it is actually comparably effective to, or more effective than, GiveDirectly.
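To make the multiplier intuition concrete, here is a minimal back-of-envelope sketch; every number in it is a hypothetical placeholder chosen for illustration, not taken from Samasource's or GiveDirectly's actual figures:

```python
# Rough Fermi sketch (all numbers hypothetical): impact per donor dollar of a
# self-sustaining model vs. a pure-transfer model that needs fresh donations.

seed_donations = 10_000_000      # one-time donor funding to establish the org
people_helped_per_year = 5_000   # beneficiaries lifted out of poverty annually
years_self_sustaining = 20       # years the revenue-positive model keeps running

# Self-sustaining model: seed funding is amortized over every year of operation.
people_per_dollar_sustaining = (
    people_helped_per_year * years_self_sustaining) / seed_donations

# Pure-transfer model: each beneficiary requires fresh donor money every time.
cost_per_person_transfer = 1_000  # hypothetical cost to help one person once
people_per_dollar_transfer = 1 / cost_per_person_transfer

print(f"Self-sustaining: {people_per_dollar_sustaining:.4f} people per dollar")
print(f"Pure transfer:   {people_per_dollar_transfer:.4f} people per dollar")
# The self-sustaining ratio grows linearly with years_self_sustaining, which is
# the point: the longer the model runs without new donations, the better it gets.
```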
It is possible that there are a very large number of organizations in existence that have a much higher impact per dollar than top EA charities. It is also possible that we can work out, with reasoning based on Fermi estimates, whether organizations have been more effective than top EA charities, with reasonable confidence. We can certainly use Fermi estimates to assess the potential impact of ideas, startups, and proposed projects. I expect that a meaningful number of these estimates will show a higher expected impact per dollar than top charities. As an analogy, a small proportion of VC firms use decision analysis to determine the EV of startup investments, an approach that EA could also use. I do not know whether funding entities like EA Grants apply explicit quantitative models to estimate EVs and use the model outputs for decision making.
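For concreteness, here is a minimal sketch of what such a decision-analysis model might look like; the scenario probabilities, impact figures, and grant size are all hypothetical, not a claim about how any actual funder evaluates grants:

```python
# A minimal decision-analysis sketch (all inputs hypothetical): estimate the
# expected impact per dollar of a proposed project by weighting a few discrete
# outcome scenarios by their probabilities, VC-style.

scenarios = [
    # (probability, impact units created if this scenario occurs)
    (0.60, 0),          # project fails outright
    (0.30, 50_000),     # modest success
    (0.10, 1_000_000),  # breakout success dominates the EV, as in VC portfolios
]

grant_size = 250_000  # hypothetical funding required

expected_impact = sum(p * impact for p, impact in scenarios)
ev_per_dollar = expected_impact / grant_size

print(f"Expected impact: {expected_impact:,.0f} units")
print(f"EV per dollar:   {ev_per_dollar:.2f}")
# A funder could then compare ev_per_dollar against the same figure computed
# for a top charity, rather than requiring RCT-level evidence up front.
```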
It is possible that the EA community is applying suboptimal filters when deciding which organizations to back. Perhaps a focus on financially sustainable interventions is superior, or perhaps backing early-stage organizations has a higher EV and hence a higher impact. These approaches all rely on reasoning far more than on scientific evidence, and they may turn out to be much more impactful.
Individuals
Just as with organization choice, EA may be recommending overly limited career and time choices to people in the movement.
For example, it is possible that strategically identifying high-impact opportunities within a career is superior to common courses of action like working in operations at an EA organization or earning to give. Careers can contain unintuitive but wonderful opportunities for impact. Kevin Briggs' approach to his career saved many more lives than the typical police officer's, and is in the same general range as the number of statistical lives that can be saved through global health donations. The Introduction to Effective Altruism mentions the fantastic actions of Stanislav Petrov, Norman Borlaug, and others who saved tremendous numbers of lives, each in a different career.
It is possible that becoming a doctor or a high school health teacher could save a similar number of lives to Kevin Briggs, for instance if the doctor or teacher were more effective than their peers at promoting life-saving choices like smoking cessation and other lifestyle changes across the thousands of people they interact with over a lifetime. It may be possible to have a tremendous social impact in a large number of specialties, from accounting to dentistry to product testing, simply by identifying scalable, sufficiently positive interventions within the field.
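As a rough illustration of the kind of back-of-envelope estimate this implies, consider the sketch below; every parameter is a hypothetical placeholder rather than a figure from the medical literature:

```python
# Back-of-envelope sketch (all numbers hypothetical): statistical lives saved
# by a practitioner who is unusually effective at promoting smoking cessation.

patients_per_career = 20_000    # patients seen over a ~40-year career
smoker_fraction = 0.15          # fraction of patients who smoke
extra_quit_rate = 0.05          # additional quits vs. an average peer, per smoker
deaths_averted_per_quit = 0.25  # rough chance that one quit averts a death

lives_saved = (patients_per_career * smoker_fraction
               * extra_quit_rate * deaths_averted_per_quit)
print(f"Roughly {lives_saved:.0f} statistical lives saved vs. an average peer")
```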
There may also be expenditures of time that are not being sufficiently recommended. For example, learning CBT or decision analysis may be very high impact, in addition to spending time reading books on EA and attending local groups. There also seems to be a shortage of volunteer opportunities which, if addressed, could have a big impact.
Conclusion
EA strategy may be an extremely important area to focus on, because changes in strategy can have an enormous effect on the impact of EA over the next few years and beyond. This post is my first attempt to put some of my preliminary thoughts on potential EA strategy shifts in writing, and I hope it encourages others to share their thoughts on potential optimizations or oversights of the movement as well.
My apologies for the extended delay in response! I appreciate the engagement.
Contrary to your assumption, I have a lot of information on EA, and I'm aware that the solutions to the problems I'm pointing out aren't being implemented. There is likely a gap in understanding of the kind that is common in written communication.
This communication gap would be less of a problem if the broader issue, "EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community," were addressed. As a specific example: where are the exact criticisms of longtermism, or of the precise strategies and tactics of specific EA organizations, laid out? There should be a publicly available argument map for this, as a rudimentary example of what such a proposed system could look like. There's a severe gap in coordination and collective intelligence software in EA.
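To make the argument-map idea slightly more concrete, here is a minimal sketch of the data structure such a system could be built around; the schema and the example claims are entirely hypothetical:

```python
# Hypothetical schema sketch: claims linked by support/oppose edges, each
# votable, so criticisms and responses accumulate in one public place.

from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    text: str
    votes: int = 0
    supports: list = field(default_factory=list)  # ids of claims this supports
    opposes: list = field(default_factory=list)   # ids of claims this rebuts

# Example: a strategy claim and one objection to it.
argument_map = {
    "c1": Claim("c1", "EA should invest heavily in longtermist causes"),
    "c2": Claim("c2", "Longtermist impact estimates are too speculative to act on",
                opposes=["c1"]),
}

argument_map["c2"].votes += 1  # community members vote on individual claims

for claim in argument_map.values():
    print(f"[{claim.votes:+d}] {claim.text} (opposes: {claim.opposes})")
```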
That was just an example of a charity that would have failed to be funded due to a combination of factors: poor assumptions that the cause area/intervention type is ineffective in 100% of instances rather than just on average, failure to consider revenue-positive models, etc.
EA misses out on entire classes of strategies that are likely quite good. Revenue-positive models, for one, for the most part. That's not to say there isn't a single EA pursuing them, but there's a severe lack of interest and support, which is just as bad, since the community's mission is to advance high-impact efforts.
That's assuming 100% of jobs/skills training programs are doomed to failure... kind of like assuming 100% of charities are doomed to be ineffective. If we used that logic, EA wouldn't exist. Doing this analysis at the level of a cause or intervention type could be fundamentally problematic.
Yes, EAs could certainly do such a thing, and it would be easier if entrepreneurship were more encouraged. With the prevailing philosophy that only a few career areas are identified as promising at any given time (and they do shift with time, very annoyingly; there should be much less confidence about this), it is hard for people to even pursue this strategy, let alone get funding.
There's a lack of evaluation resources available for assessing new models that don't align with what's already been identified, which is a huge problem.
The existence of something within EA (a minority, for example) does not mean that it is adequately represented.
A forum is a terrible mechanism for collective intelligence and evaluation (not that this issue is unique to EA).
The method by which strategy shifts percolate is rather problematic. A few people at 80K change their minds, and the entire community shifts within a few years, causing many people to lose career capital, effort spent specializing in certain fields, etc. This is likely to continue in the future. Even ignoring the disruption these shifts cause, the career strategies currently being advocated are likely far from optimal and will shift again. The fix is both to reduce the cultural reliance on such top-down guidance and to completely rethink the mechanism by which career strategy changes are made.
Again, the presence of some individuals and teams working on this does not mean it's the optimal allocation.
Those direct and indirect consequences should all be factored into a quantitative model of impact for specific careers, which 80K doesn't do at all: they merely show a few bubbles for the "estimated earnings" of a job, with no public-facing holistic comparison methodology. That is surprising given how quantitative the community is.
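As a sketch of what such a holistic model could look like (every weight, input, and career entry below is a hypothetical placeholder, not a claim about actual career values):

```python
# Minimal sketch of a holistic career-comparison model: direct and indirect
# effects combined into a single probability-weighted impact score per career.

careers = {
    # career: (direct_impact, indirect_impact, probability_of_success)
    "EA org operations": (300, 100, 0.70),
    "earning to give":   (150,  50, 0.80),
    "policy career":     (900, 400, 0.15),
}

def expected_impact(direct: float, indirect: float, p_success: float) -> float:
    """Probability-weighted sum of direct and indirect impact (arbitrary units)."""
    return p_success * (direct + indirect)

for name, params in sorted(careers.items(),
                           key=lambda kv: -expected_impact(*kv[1])):
    print(f"{name:20s} EV = {expected_impact(*params):.0f}")
```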
Yep, I mentioned influence in my post, but the question is whether this is the optimal way to gain it (compared with all of the other things individual EAs could be doing).
Again, the allocation objection. The fact that these keep springing up is rather remarkable. Are we far from diminishing returns, or already past them? This should all be part of movement strategy considerations handled by a dedicated team, ideally with dedicated resources combined with collective-intelligence software.
Evidently, jobs, and the organizations that provide them, can be created if the movement so chooses.
Analysis of the various gaps and their merits could be done, as a crude example, with some sort of voting mechanism (not quite a prediction market) over the various gaps.
I do not think this is true, given observations as well as people generally leaning away from novel ideas, smaller projects, etc.
I have concerns about movement growth, but anyway, the point was what it could have been, as opposed to what it is.
A problem that continues to this day: a complete lack of investment in exploratory research (Charity Entrepreneurship has recommended this as an area), with promising new ideas and causes popping up all the time that go ignored, unfunded, etc., due to many factors, including lack of interest in whatever isn't being promoted by key influencers.
The fix is to have a centralized system storing both potential problems and solutions in a form anyone can contribute to or vote on. Maybe in 5–10 years the EA Forum will look like this, or maybe not, given the slow (but steady) pace of useful feature development.
This itself is the problem: years later, there is still no effort to fix collective knowledge and coordination issues.
Answer: pay people. But then there is no interest in doing so. Most funding decisions are made by a very small number of biased evaluators, biased in favor of certain theories of change, causes, larger organizations, existing organizations, interventions, etc. The consequence is lots of people agreeing that something should be done, with no mechanism for that knowledge to become a funding and resourcing decision.
For all we know, revenue-generating poverty alleviation models are vastly better... there is no analysis to date exploring this idea, which is one of many thousands of strategic possibilities.
While Charity Entrepreneurship is doing this at the problem/solution level, it isn't sufficient in the slightest; huge assumptions are being made. Sufficient resources exist for this to be done more robustly, but there is no appetite to fund it.
No, cherry-picking wasn't the point at all. This is a way to identify strategic opportunities being missed (replicable models could succeed), to backtest evaluation criteria (e.g., was it truly impossible to identify in advance that this would work?), etc.
Again, the fact that something is happening doesn't mean we've reached the optimal level of exploration. We're probably investing thousands of times fewer resources than we should.
No comment, but regardless, the process is very biased by certain evaluators' opinions, with lots of groupthink.
Hearing what they say they're doing is pretty useless; it does not map to what's actually happening.
People just listen to whatever 80K, Holden, etc. tell them, rather than independently reaching career decisions. I don't care what 80K tells you to do; they tell people to do a very large number of things. We need to look at what's actually happening.
You're missing the point: you are reasoning about averages. Your average officer likely doesn't save many counterfactual lives. What matters is the EV of your specific strategy, not of the overall career. With Petrov, the opportunity wasn't predictable, but it could have been with that specific officer strategy. Whether the officer in question did that calculation in advance is irrelevant (he probably didn't think about it at all); the fact is that it could have been foreseen.
Lots of EAs actually have these ideas, and they're not listed anywhere. Right, outside analysis should be used to foresee these opportunities, not to push people to change fields. No effort is being spent on doing this outside analysis for career areas considered "low impact." There is no awareness of, or attempt to evaluate, things like dental interventions that are vastly more effective than existing interventions, for example. There are so many options, and they could all benefit from EV calculations.
It would require a big shift for EA to start doing these EV calculations for many areas. Again, a lot of work, but the people and money are here.
There needs to be this paradigm shift, followed by resource allocation, though it will probably never happen.
Tractability calculations should be done case by case, not generalized. And that's just one of many possibilities within accounting, none of which have received any attention or EV calculations. But yeah, that one might be hard.