TL;DR: Eventually, we want higher and higher levels of worthwhile delegation. This will take work.

Rigor: Very quickly written rant. I want to experiment with getting more pieces like this out there, rather than focusing on polish. Feedback is appreciated.

 

Example 1: Bricks

  1. Low delegation: You assemble bricks for your house
  2. Medium: You hire a team to put together your house
  3. High: You tell an assistant that you want a house. They go about choosing a plan and selecting a team.
  4. Very high: You tell an assistant to “make your life better”, they decide that a house is the best approach, and then they go about figuring it out.

Example 2: Charity

  1. Low delegation: You give food to a homeless person
  2. Medium: You donate to a nonprofit feeding the homeless
  3. High: You donate to a nonprofit fund that re-grants to nonprofits that help the homeless
  4. Very high: You donate to a fund to “do useful stuff”, and they sometimes decide that helping the homeless is ideal.

If delegation can be managed well, it’s incredibly useful. There’s always more and more stuff to do and be concerned about. We clearly want to hand off as much work as possible to others, and for that work to achieve economies of scale.

Those who can delegate well often win. This is exactly the job of CEOs, entrepreneurs, and managers. Think of companies with great teams that can be handed bold projects with little oversight, or nations with very little political corruption or infighting.

Right now in the US, we don't trust the government much, so it's relegated to a low delegation level. This means it simply can't do that much.

The higher the level at which you delegate, the more agency you (typically) sacrifice. When you choose a house construction crew, you’ll have a bit less control over your brick types. When you choose some highly-meta nonprofit fund, you give them incredibly broad authority to handle your money.

Typically, delegation breaks down somewhere between levels 1 and 4. Sometimes this is due to undertrust: you expect the authority to be corrupt or incompetent, even when they won’t be. Sometimes it’s overtrust: you mistakenly expect things to go well, and the authority then abuses or misuses the agency you gave them.

For delegation to work well:

  1. There needs to be a significant amount of justified trust between client and provider
  2. The costs of delegation must be lower than the benefits

Most altruists seem to be at either the “medium” levels or at unjustified “very high” levels (for example, donating to a single politician or cult to fix all of one’s problems). Some are at the low levels and just go out and try to help people personally (with widely mixed results).

At one extreme, you have literal cult leaders; at the other, millionaire altruists literally laying bricks to build houses.

Right now in EA, we have some “high” levels of funding delegation (EA Funds, GiveWell), but this is limited.

There’s a ton of value in moving towards more delegation and increasing the amount of justified trust. Everyone trying to do all of the necessary analysis on all things themselves clearly doesn’t scale, and it should be clear that very few people are strong at this.

I don’t see that much literature at this level of abstraction, but I often see arguments like:

“We can’t trust the authorities”

“It’s important to think for yourself”

I get that, but it seems like it’s arguing on the wrong axis. We really want to push forward the Pareto frontier of delegation potential: the set of best achievable tradeoffs between how much we delegate and how much justified trust we can maintain. That’s much more important, in the long run, than small decisions about where on that frontier we want to be right now. The frontier right now is bad, and I think we could do much better.

Comments (11)



Very true. One of the things that makes good delegation hard is the increasing potential for corruption at higher levels.
While I don't worry much about corruption inside EA for now, this seems to be a significant problem for society at large? I wonder if there are culture-independent patterns for what low-corruption societies look like 🤔

If I ask myself why I donate directly to GiveDirectly instead of donating to an EA Fund, something that comes up is a desire for "control" and "defensibility". In an imaginary conversation, I can justify why I give large sums of money to GiveDirectly and why it is a Good Thing To Do to efficiently give money to the very poor. OTOH, giving money to an EA Fund feels much more amorphous, and it is much harder to explain and justify what is happening with it and why it is a good idea.

Thanks! I agree that corruption is a big problem for society at large. At the same time, though, with some work we can make sure that groups are not very corrupt. My intuition is that many competitive markets have very low corruption; I’d expect that Amazon runs pretty effectively, for instance. I think we can aim for similar levels in our charity delegation structures. It will take some monitoring/evaluation/transparency, but it definitely seems doable.

My impression is that many groups that complain about corruption actually do fairly little to actively try to remove corruption. When good and agentic CEOs want to stamp it out, they often do.

With GiveDirectly/EA Funds, I’m not arguing that one is better than the other right now (as I’m guessing you realized, but I’m not sure about other readers). My main point is that we should be aiming for a future that leans more in the direction of “a really solid EA Funds”. That would help in so many ways.

(Note that if we want more minds on the topic, we can achieve both at the same time, with something more like a crypto DAO, or a version of EA Funds that takes a lot of user contributions for research.)

If you have ideas of what you’d want in EA Funds (or maybe GiveWell’s general fund), those would be interesting.

I also donate directly to charities I choose, looking at recommendations from GiveWell, rather than delegating to EA Funds / GiveWell.

Reasons for delegating:

-better coordination

-they might have better/more up-to-date empirical information about the kinds of charities that match my values

Reasons for not delegating:

-money gets to recipient charity faster (monthly, not quarterly)

-fewer bank fees

-funds almost certainly won't match my values exactly

-I can double-check their work (I still think some of the deworming assumptions are absolutely ludicrous)

Ambiguous:

-I'm nervous about a fund finding a new opportunity and suddenly leaving a charity with a large funding gap, crippling a very good charity. Ideally this would be solved by the fund phasing over to the new opportunity slowly. In practice it can also be solved by individual donors taking a long time to move to new recommendations (or not moving).

For what it's worth, I think the best reason not to delegate is something like:
"Funding work is hard, and funders have limited time. If you can do some funding work yourself, that could basically contribute to the amount of funding work being done." (This works in some cases, but not others.)

> I'm nervous about a fund finding a new opportunity and suddenly leaving a charity with a large funding gap, crippling a very good charity.

I think that funding work involves a lot more than just making yearly yes/no decisions that are ideal in isolation. There's more to do, like communicating with charities and keeping commitments.

In theory, a cluster of medium to large funders could do a better job than individual donors, but that's not always the case.

A few quick meta comments: It feels like this level of polish is sufficient for getting some people to read the post to begin with. The alternative would be to put a lot of time into creating an engaging, compelling post building on your idea, but I don’t actually have a good sense of how much better that would be than the simple, conversational tone and brevity you used. The epistemic status note at the top was helpful.

On the other hand, I suspect that almost none of your readers will actually do anything based on this. You probably want to put more effort into making the suggested action easy and compelling if you want to get people to do something.

On net, I vote for more quick posts like this. 

Thanks!

> You probably want to put more effort into making the suggested action easy and compelling if you want to get people to do something.

I'm interested in arguing for, and discussing, buy-in on the idea that our community should strive to eventually have strong, trustworthy, high-delegation groups. I'm not sure how amenable this is to straightforward actions right now.

Like with much of my more theoretical writing, I see it as a few steps before clear actions.

I think the opposite direction is more important for achieving the same goal: that is, we should get better at robustly justifying when delegation is done well or poorly. This seems both more tractable and a way to decide whether the strategy of delegation is working in a given case.

For example, right now trust in government is low, but so is the government's ability to justify that it is doing a good job. And because they can't justify how well they are doing, there is no reason for them to try very hard to do it well. The current infrastructure for this is mostly layers of bureaucracy that prevent rule-breaking; not only does that address only a very small part of "did you do delegation well?", it imposes high costs that make doing delegation well nearly impossible.

Anyways, IIDM.

I think I agree with everything there, but I'm unsure what exactly you mean by "opposite direction".

I was arguing that our abilities for delegation should improve. If we had better accountability/transparency/reasoning-at-what-level-is-optimal, that would help improve our abilities to do delegation well.

I wasn't arguing that we should delegate more right now, but rather that we should work on improving our abilities to delegate well, with the intention of delegating more later.

(I changed the title to try to make this clearer.)

Yeah, I wasn't really disagreeing, just pointing out that the way to get there seems to be to improve proof of alignment, which is not really accountability, since that term usually refers to not cheating rather than to "trying hard to do what they want".

I found the change in title confusing, as there wasn't really any discussion in the post of how to actually improve our delegation abilities; it was more just encouragement to delegate more. You mentioned some ways in this comment (accountability, transparency...) but they're not really unpacked in the main post. I'd be interested in a discussion unpacking these and other ways.

Noted, thanks!

I was trying to explain a framework; the resulting strategy is something like:
1. Improve delegation abilities
2. Delegate more in the future, accordingly.

You're totally right I didn't get into how to do this. 

Do you (or others reading this) have ideas for what the post should have been called? The title was already sort of long, and I wasn't sure about a good tradeoff (I tried a few variations and didn't really like any of them).

It's too late to change now, but I can try to do better in the future.
