
GiveDirectly has a huge space for additional funding and spends the money effectively. Its processes are simple and most people can see it's helping.

What is the longtermist equivalent of this?

Which organisation, meeting these criteria, could you give to to improve the longterm future, one that can sit as the organisation of last resort to donate to?

This question was brought to you by me stealing Ben Todd's tweets and turning them into questions. https://twitter.com/ben_j_todd/status/1459196519924604928

Please write one suggestion per answer.



13 Answers

This is probably dumb, but I wonder if there are scarce-material bottlenecks (like U-235 for nukes and platinum for catalytic converters) for specialized deep learning hardware/GPUs/computers, such that you could buy up mines etc. (and then shut them down) to slow down AI progress slightly.

I highly doubt this is the most leveraged thing you can do, but this seems like it can scale to arbitrary amounts of dollars invested, certainly much more money than EA currently has access to.

It also makes more sense in a worldview where AI is "the only game in town" and less for a worldview where you think e.g. climate or bio or nukes or broad longtermism is >0.1x as important as AI.

Patient Philanthropy Fund 

Edited* 

Like GD it gives agency to the people who are suffering to make their own decisions. It has small overheads and could take far larger sums than most other spaces.

Unlike GiveDirectly, which is solving the problem in itself, the PPF kicks the can down the road. 

 *This was originally a placeholder - I was going to bed so said that someone else should write this as a proper answer. But everyone upvoted my placeholder anyway. [Thanos voice] "Fine, I'll do it myself"

I'm not sure this meets the 'spends the money effectively' criterion - it might, but we don't really know that yet. 

I guess this one feels most obviously analogous, since you can in principle just keep throwing money into a patient philanthropy fund (not saying you should).

Nathan Young
But can we think of a better suggestion?  

Genomic mass screening of wastewater for unknown pathogens, as described here:

[2108.02678] A Global Nucleic Acid Observatory for Biodefense and Planetary Health (arxiv.org)

A few test sites could already help detect a new (natural or manmade) pandemic at an early stage, but there is room for a few billion dollars if you want to build a global screening network.

Unfortunately, I do not know whether any organisation working on this has a need for funding.

Here's Will MacAskill at EAG 2020:

I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option.

Command-f for the full context on this.

"[Climate change interventions are] just so robustly good, especially when it comes to what Founders Pledge typically champions funding the most: clean tech. Renewables, super hot rock geothermal, and other sorts of clean energy technologies are really good in a lot of worlds, over the very long term — and we have very good evidence to think that. A lot of the other stuff we're doing is much more speculative. So I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option."

But this might be a bit outdated now (see Good news on climate change).

The climate change scenarios that EAs are most worried about are tail-risks of extreme warming, in comparison to GiveDirectly's effects which seem slightly positive in most worlds. And while the best climate change interventions might be robustly not-bad, that's not true for the entire space. Given the relatively modest damage in the median forecasts (e.g. 10% counterfactual GDP, greatly outweighed by economic growth) many proposals, like banning all air travel, or anti-natalism, would do far more harm than good. Will suggests that climate change policies are robustly good for the very long term growth rate (not just level), but I don't understand why - virtually all very long-term growth will not take place on this planet.

Vaccines

 (Originally suggested by Ben Todd)

@CEPIvaccines a $3.5bn program to develop vaccines to stop the next pandemic

Cons

  • a cause area in itself

Yeah, some biosecurity stuff seems relatively shovel-ready. The whole "Sentinel" program of pandemic defense, as implemented in the $65 billion Biden proposal, could in a worst-case scenario be implemented at least partially by philanthropic dollars if the US government fails to see the light. (The Biden proposal has sadly now disappeared from the bills under debate... it seems the hope is that this might resurface as bipartisan legislation later.) Or for much higher cost-effectiveness, we could start an organization to advocate for versions of the Sent...

Some relatively measurable, concrete megaprojects that might help the long-term future (super-high cost effectiveness not necessarily guaranteed):

  • Backing up civilization by constructing giant underground bunkers and the like.

  • Backing up civilization by colonizing Mars or other space efforts. (So far, SpaceX has been very successful as a for-profit business, but maybe there are aspects of space colonization where philanthropy could help. I also think a serious bunker-building operation, with higher standards than most bunker construction today, could potentially be run as a profitable business selling to governments, prepper billionaires, etc.)

To the extent that long-run economic growth is a longtermist goal and isn't totally overshadowed by X-risk, there are lots of ways we might encourage a faster pace of economic development:

  • Lobbying for generally neoliberal pro-growth policies, including things like free trade, immigration, lighter regulation especially in cost-diseased sectors like healthcare and housing, etc. (Although like all politics, this is controversial and does not have the "everyone agrees it is helping" property of Givewell charities.)

  • Experimenting with promising new economic systems like quadratic funding of public goods, Georgist land taxes, etc. Trying to improve governance, social technology, and institutional decision-making by supporting things like prediction markets and charter cities. (Although experimentation is necessarily a "hits-based" business, thus doesn't have the guaranteed-impact aspect of Givewell charities).

  • Trying to advance scientific fields either by directly funding promising areas, or by trying to improve the functioning of academia and science grantmakers as institutions. (Although the opportunities here are probably less scalable than elsewhere.)

It would be great if we had some way of putting money towards reducing "existential risk factors" -- if improving US-China relations was as straightforward as buying carbon offset credits, I think that would attract a hell of a lot of funding.

Could you break this up into separate comments for each idea please.

Paying AI researchers to do slightly useful alignment research (or even nothing) rather than advancing capabilities.

This easily scales up to the low tens of billions per year, at which point it turns into acquiring startups. In the low hundreds of billions per year, one could even acquire significant stakes in Nvidia/Google/etc., but this seems horrendously inefficient.

GiveDirectly is about giving resources to other present people, even though they will not use it in a very targeted manner. The obvious analogy for the future is to save/invest money, which very slightly accelerates economic growth and transfers resources to future people, even if not in a very targeted manner. 

It's not a great option; we should be able to do a lot better. But it does seem roughly equivalent to GiveDirectly, which is also not a great option.

I don't think there is one

I think I'd like some reasons why you think this. Because it's a very easy thing to think and it could well be wrong.

This is definitely true if "its processes are simple" is a requirement for the longtermist organization in question. Influencing the far future is extremely difficult, and can't even remotely be called a simple process. There definitely are longtermist organizations that have a lot of room for funding and spend their money effectively, though.

GiveDirectly

If you want to improve the longterm future, you do so by helping lots more people contribute now. If GiveDirectly were funded significantly, there would be huge economic gains and many geniuses who currently can't contribute would be able to. If you paid people enough, they might become attractive prospects as immigrants, and hence find it easier to move out of, or adapt to, climate disaster zones.

Cons:
- very wide focus (at least in a neartermist sense, GD gives to those suffering most, whom money can help most; in a longtermist sense that's harder to justify)

The Effective Altruism Foundation seems to satisfy some of your criteria. They have a variety of projects they are working on, all of which seem to be longtermism-oriented. Brian Tomasik has also endorsed it.

How would they use half a billion dollars per year? I think that's the kind of scale we're thinking of.

Placeholder EA infrastructure fund

Please write a better version of this answer then I'll delete mine.

Immigration reform

Allowing people to move more freely allows them to adapt better to climate pressure because they have their own best interests at heart.

Cons:
- This feels more like a narrow cause area than a wide bucket like GD

1 Comment

Downvote me to deprive Nathan of karma.
