  1. FTX, a big source of EA funding, has imploded.
  2. There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.
  3. There's been a big drop in the funding that EA organisations expect to receive over the next few years.
  4. Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.

Which revisions are most pressing?

3 Answers

A reread of the 'Judicious Ambition' post from not so long ago is interesting:

"In 2013, it made sense for us to work in a poorly-lit basement, eating baguettes and hummus. Now it doesn’t. Frugality is now comparatively less valuable."

So, I guess, bring the hummus back?

Jokes aside, an explosion in funding changed EA from a 'hedge fund for charity' into a 'VC for charity'. This analogy goes a long way toward explaining the shifts in attitude, decisions, and exuberance. So perhaps going back to hedge-fundiness, and shifting the focus back from 'company builders' building the next big thing to less scalable but cost-effective operations, is a good direction?

imo EA should have remained frugal.
 

For theoretical reasons, this makes sense. It's incompatible with Singerite altruism to spend money on frivolous luxuries while people are still starving. EAs were supposed to donate their surplus income to GiveWell. This doesn't change when your surplus income grows. At least, not as much as people's behaviour suggested.

It also makes sense for practical reasons. We could've hired double the researchers on half the salary. Okay, maybe 1.25x the researchers on 80% of the salary. I don't know the optimal point on the workforce-salary tradeoff, but EA definitely went too far in the salary direction.
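Spelling out that arithmetic, under the simplifying assumption of a fixed total salary budget $B$ (a toy framing, not the original poster's):

$$n \cdot s = B \quad\Rightarrow\quad n' = \frac{B}{0.8\,s} = 1.25\,n$$

That is, cutting salaries to 80% funds exactly 1.25x the researchers on the same budget; the open question is where on that curve research output is maximised.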

The result was golden handcuffs, grifters, and value drift.

Let's bring back Ascetic EA. Hummus on toast.

As someone who (briefly) worked in VC and cofounded nonprofits, I'm not sure that's a good signal.

"VC for charity" makes more sense when you consider that VC focus on high upside, diversification, lower information and higher uncertainty, which reflects the current stage of the EA movement. EA is still discovering new effective interventions, launching new experimental projects, building capacity of new founders and discovering new ways of doing good on a systemic level. Even today, there's an acknowledgement that we might not know what the most cost-effec... (read more)

EA is constrained by the following formula:

Number of Donors x Average Donation = Number of Grants x Average Grant

If we lose a big donor, there are four things EA can do (a toy numeric sketch follows this list):

  1. Increase the number of donors:
    1. Outreach. Community growth. Might be difficult right now for reputation reasons, though fortunately, EA was very quick to denounce SBF.
    2. Maybe lobby the government for cash?
    3. Maybe lobby OpenAI, DeepMind, etc for cash?
  2. Increase average donation:
    1. Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me.
    2. 80K could begin pushing earning-to-give again. They shifted their recommendations a few years ago to promoting direct-impact careers. This made sense when EA was less funding-constrained.
    3. Get existing donors to ramp up their donations. In the good ol' days, EA used to be a club for people donating 60% of their income to anti-malaria bednets. Maybe EA will return to that frugal ascetic lifestyle.
  3. Reduce the number of grants:
    1. FTX was funding a number of projects. Some of these were higher priorities than others. Hopefully the high-priority projects retain their funding, whereas low-priority projects are paused.
    2. EA has been engaged in a "hit-or-miss" approach to grant-making. This makes sense when you have more cash than sure-thing ideas. But now that we have less cash, we should focus on sure-thing ideas.
    3. The problem with the "sure-thing" approach to grant-making is that it favours certain causes (e.g. global health & dev) over others (e.g. x-risk). I think letting that bias drive funding would be a mistake. Someone needs to think about how to correct for it.

      Here's a tentative idea: EA needs more prizes and other forms of retroactive funding. This shifts risk from the grant-maker to the researcher, which might be good because the researcher is better informed about the likelihood of success than the grant-maker.
  4. Reduce average grant:
    1. Maybe EA needs to focus on cheaper projects.
    2. For example, in AI safety there has been a recent shift away from theoretical work (like MIRI's decision theory) towards experimental work. This experimental work is very expensive because it involves (say) training large language models. This shift should be at least somewhat reversed.
    3. Academics are very cheap! And they often already have funding. EA (especially AI safety) needs to do more outreach to established academics, such as top philosophers, mathematicians, economists, computer scientists, etc.
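To make the four levers concrete, here's a minimal toy sketch of the budget identity in Python. Every figure is a made-up placeholder, not an actual EA number:

```python
# Toy sketch of the accounting identity:
#   Number of Donors x Average Donation = Number of Grants x Average Grant
# All figures below are illustrative placeholders, not real EA data.

n_donors, avg_donation = 10_000, 5_000.0
big_funder = 15_000_000.0
avg_grant = 250_000.0

income = n_donors * avg_donation + big_funder   # $65M/year before the implosion
print(income / avg_grant)                       # 260 grants supported

income = n_donors * avg_donation                # the big funder vanishes: $50M/year

# Levers 3 and 4 act on the right-hand side of the identity:
print(income / avg_grant)                       # 200.0 -- lever 3: fund fewer grants
print(income / 260)                             # ~$192k -- lever 4: shrink the average grant

# Levers 1 and 2 rebuild the left-hand side instead:
print(big_funder / avg_donation)                # 3,000 new donors replace the loss (lever 1)
print((income + big_funder) / n_donors)         # or avg donation rises to $6,500 (lever 2)
```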

Open Philanthropy has responded in this post:
https://forum.effectivealtruism.org/posts/mCCutDxCavtnhxhBR/some-comments-on-recent-ftx-related-events

For other funders, I guess the response will be similar.

I guess a short common-sense answer for funders could be:
1st) put commitments on hold and wait until there is more clarity about the actual impact
2nd) identify gaps, and assess them by urgency/importance
3rd) reprioritize and balance portfolios

For workers and organizations relying on donors:
1st) do not assume you will be unaffected just because you don't receive funds directly from FTX. The money you were relying on may be redirected in the future to other projects previously funded by FTX
2nd) put financial decisions (hiring staff, purchases, etc.) on hold in the short term until you have a bit more clarity
3rd) reorganize your personal/organization budget

3 Comments


"Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me."

 

It's a process to recruit billionaires/turn EAs into billionaires, but one estimate was another 3.5 EA billionaires by 2027 (written pre-FTX implosion). The analyses I've seen of last-dollar cost-effectiveness have tended to ignore the possibility of EA adding funds over time. Of course, we don't want to run out of money just when we need some big surge. But we could spend a lot of money in the next five years and then reevaluate if we have not recruited significant additional assets. This could make a lot of sense for people with short AI timelines (see here for an interesting model) or for people who are worried about the current nuclear risk. More generally, by doing more things now, we can show concrete results, which I think would be helpful in recruiting additional funds. I may be biased as I head ALLFED, but I think the optimal course of action for the long-term future is to maintain the funding rate that was occurring in 2022, and likely even to increase it.

On the grants side of your formula, there are huge differences in flexibility between projects. The direct cash transfers of GiveDirectly can scale up and down very rapidly.

On the donors' side of your formula, it is not only about size but also about volatility and reliability. There are big donors with stable wealth and a track record of regular, predictable donations.

In my mind, a sensible overall allocation would have at least as much money going to very flexible projects (e.g. direct cash transfers) as the amount of money coming from very unpredictable sources (e.g. one big donor whose wealth, held in risky assets, varies a lot every week). This would capture the high rewards of volatile donors without imposing so much uncertainty on the teams who need some stability over time.
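A minimal numeric sketch of that rule of thumb (all figures hypothetical):

```python
# Rule of thumb from the paragraph above: keep at least as much money in
# flexible grants as comes in from unpredictable sources. Hypothetical figures.

volatile_income = 20_000_000    # e.g. one big donor whose wealth sits in risky assets
stable_income = 30_000_000      # diversified, predictable donors

flexible_grants = 22_000_000    # e.g. direct cash transfers, can scale down fast
committed_grants = 28_000_000   # salaries and multi-year projects

# If the volatile income disappears overnight, flexible grants can absorb the
# whole shock and the stable income still covers every long-term commitment:
assert flexible_grants >= volatile_income
assert committed_grants <= stable_income
```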

Of course, this all assumes that every donor, big or small, predictable or volatile, meets a minimum ethical standard in their practices.
