
Title in homage to Linch.

In the second half of 2023, I was a Manifund regrantor. I ended up funding:

  1. Holly Elmore to “[organize] for a frontier AI moratorium.” ($2.5k.)
  2. Jordan Schneider/ChinaTalk to produce “deep coverage of China and AI.” ($17.55k.)
  3. Robert Long to conduct “empirical research into AI consciousness and moral patienthood.” ($7.2k.)
  4. Greg Sadler/GAP organizational expenses. ($10k.)
  5. Nuño Sempere to “make ALERT happen.” ($8k.)
  6. Zhonghao He to “[map] neuroscience and mechanistic interpretability.” ($1.75k.)
  7. Alexa Pan to write an “explainer and analysis of CNCERT/CC (国家互联网应急中心).” ($1.5k.)
  8. Marcel van Diemen to build “The Base Rate Times.” ($2.5k, currently unclaimed.)

You can find my decisions and comments on grants on my profile. Here, I want to reflect on lessons learned from this wonderful opportunity.

I was pretty wrong about my edge

In my bio, I wrote:

To the extent that I have an edge as a regrantor, I think it comes from having an unusually large professional network. This, plus not having serious expertise in any particular area, makes me excited to invest in "people not projects."

I had previously run a prestigious fellowship program where (by the end) I thought I was pretty good at selection. Successfully running an analogous selection process over people recommended from my wide network (this time for grants) seemed like it would transfer neatly. Austin, who co-runs Manifund and who participated in my earlier program, seemed to agree on both counts.

I still believe the premises, and so remain hopeful that this could be an edge in future. But it was largely unimportant for my recent regranting experience. (Only the grant to Greg Sadler/GAP came out of asking my network for recommendations; only the grant to Robert Long came from private knowledge I would have had regardless of being a regrantor.)

I haven’t fully figured out why this was. My current best guesses are:

  1. What matters most for ‘deal flow’ is not having a talented network but in-person conversations (with people in a talented network). 2023 was perhaps my most socially isolated non-COVID year.
  2. A fraction of a $50k budget is not enough for the kinds of recommendations one might want from one’s network. I don’t hear about opportunities like “this great person should start that great organization” because these would require more than $50k.
  3. Recommenders aren't naturally in the mode of looking out for or dreaming up novel opportunities.
    1. Evidence in favor: Greg Sadler was recommended by someone who previously regranted to Greg Sadler.
    2. Perhaps I could have found a better way to get recommenders to change mode in conversations with me. Or perhaps this problem would fix itself if Manifund became better-known.

But I have been happy about my low-level strategy

Above the edge section of my bio, I wrote:

I plan on using my regranting role to optimize for "good AI/bio funding ecosystem" and not "perceived ROI of regrants I make personally." I think that this means trying to:

  • Be really cooperative behind the scenes. (E.g. sharing information and strategies with other regrantors, proactively helping Manifund founders with strategy.)
  • Post questions about/evaluations of grants publicly.
  • Work quickly.
  • Pursue grants that might otherwise fall through the gaps. (E.g. because they're too small, or politically challenging for other funders, or from somewhat unknown grantees, or from grantees who are unaware that they should ask for funding.)
  • Not get too excited about grants where (1) evaluation would benefit strongly from a project-first investment thesis (e.g. supporting AI safety agenda X vs. Y) or (2) the ideas are obvious enough that (to the extent that the ideas are good) I strongly expect others to fund them (e.g. career transition grants to IMO medalists).
  • Occasionally make small regrants as credible signals of interest rather than endorsements. (To improve speed, information, and funder-chicken dynamics.)
  • Encourage criticism of my thought processes and decisions from the Manifund community.

I still endorse these strategies. I think I was successful at deploying most of them. For example:

  1. I left critical comments that I hope others found helpful. (Judge for yourself e.g. here, here, and here.)
  2. My grants are well-characterized as things that might have otherwise fallen through gaps. 
    1. Holly Elmore and Jordan Schneider are difficult to support with philanthropic funds.
    2. I was responsible for Robert Long, Zhonghao He, and Alexa Pan asking for any funding.
    3. I think that ALERT and GAP had been (sort of, it’s complicated) passed on by Open Philanthropy.
  3. I didn’t fund some projects despite being excited by them, because I thought that others were in a better position to cheaply evaluate. These projects were all funded by grantmakers more expert than me. (E.g. Lawrence Chan.)

And I feel good about my decisions

See the respective project pages for my thoughts on projects I did decide to fund. I still stand by most of them.

Early on, I had considerable self-doubt concerning my judgment. This eased as I observed that my peers seem to value it. Mostly this comes up in private conversations, but here’s some concrete evidence:

  1. I was the first donor to more or less all of my grants (with minor caveats), most of which later received significant funding from others (with the notable exception of ChinaTalk, where I allocated the plurality of my pot).
  2. Other regrantors left comments on projects from Holly Elmore, Jordan Schneider, Greg Sadler, and Nuño Sempere that +1’d or deferred to my reasoning.
  3. Cullen O’Keefe is the only person I saw make donations to regrantor budgets, most of which went to me. (Cullen also gave $4.7k to a project that I was responsible for getting on Manifund.)

Regranting brings some of the benefits of “skin in the game”

Previously, my charitable donation decisions felt more disconnected from outcomes. Of course I tend to prefer giving to projects that I think are more impactful. But I would relate to this preference via pontification-as-hobby, encouraging epistemic complacency. 

Regranting forced me to interrogate my process.

Some of this force came from stakes — wasting $10k/year matters less than wasting $100k/year. But I think the large majority of force came from three social factors:

  1. Feeling responsibility for projects happening or not happening,
  2. Not wanting to look dumb publicly, in front of my professional peers, and
  3. Choosing between projects that didn’t come pre-packaged with social approval.

Feeling this force made me think that my previous process was in some sense corrupted. At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.

Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.

Finally, I might audit other aspects of my life for possible inauthenticity.

It had fewer personal downsides than I expected

I had two primary downsides to regranting in mind:

  1. It would be an unwanted time-suck.
  2. It would make people I interact with relate to me differently.

I still think it was reasonable to be worried about both of these things. It really was a time-suck, and I really have experienced the relating point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post. And I experienced only a small amount of people relating to me differently, which I felt able to appropriately control.

I’m a bit worried about how the regrantor model scales

I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong. (Perhaps this wouldn’t be true if Manifund received many more project applications. I would guess that, at present, the majority of funded projects are first discovered by regrantors outside the platform.)

(I am more bearish on other possible reasons for skepticism — diminishing deal flow or judgment between regrantors, challenges due to part-time grantmaking, best regrantors having high opportunity cost.)

Still, I love Manifund

Decentralizing and increasing the transparency of funding both seem beneficial at the current margin. My confidence in the skill of regrantors vs. centralized grantmakers has strengthened. And Manifund provides a great grantee experience!

Comments

Just stumbled on this; it was fascinating to read from the other side.

One point to emphasize: the "Manifund provides a great grantee experience!" bit! Having dealt with other grantors who've given me money literally 9 months after they said they would, it was so nice working with prompt, functional people on disbursement.

At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.

Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.

 

Thanks for writing this bit; it mirrors my own thinking on my personal donation allocation as I've spent more time in the core EA ecosystem. While I was working at Google, sending a yearly donation to GiveWell's top charities seemed reasonable; now I have a much better handle on what opportunities may be more effective.

In fact, your regranting process seems reminiscent of early EA. Pre-GiveWell, Holden & Elie spent a bunch of time investigating orgs themselves and made judgement calls about where to send their money. In contrast, EA donations today are characterized by a lot of deference to other experts and evaluators (GiveWell, Open Phil, ACE, etc.); I like that regranting captures some of the original spirit of the movement.

I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong.

 

I think this could become true eventually, but imo most of our small-budget ($50k) regrantors could currently allocate $200-$500k/year effectively. E.g. you mentioned earlier that many opportunities of the form "start this great org" require >$50k; also, many regrants on Manifund include a statement like "I would give more here if I could, but my budget is limited".

I also want to note that the overall regranting model can easily scale by adding additional regrantors; we've received a lot of inbound interest in becoming regrantors despite little outreach, and many highly-trusted EA folks (even some grantmakers!) appreciate the greater flexibility offered by the regranting model.

Like @MarcusAbramovitch, I'd feel pretty comfortable allocating ~$1m part-time. I mean, just on my existing grants I would've been happy to donate another ~$150k without thinking more about it! Concrete >$50k grants I had to pass up but would otherwise have wanted to fund total >$200k (extremely rough). So I'm already at >$400k (EDIT: per 5 months!) without even thinking about how my behavior or prospective grantee behavior might have changed if I had a larger pot.

That said, I think there's a sense in which I hit strongly diminishing returns at ~$10k, albeit still above-bar. The Robert Long grant was by far my best, and I knew from day 0 that I wanted to make it. After that, a bet on me became a bet on my taste, not a bet on my private information, which seems less exciting. (Again, I'm optimistic that the past 5 months were an unusually low-private-information period for me, but you see my point.)

And I'm somewhat skeptical that others had $200k-$500k/year of productive grants to make. To me it's a bad sign that >30% of Manifund funding went to 3 projects (MATS, Apollo, and LTFF) that I wouldn't think especially benefit from the regranting model.

Manifund funding went to... LTFF

This is explained by LTFF/Open Philanthropy doing the (imho misguided) matching, which has the effect of diverting funding from other places for no clear gain. A lump sum would have been a better option.

Fair enough, I agree.

I feel quite able to give >$500k/year. I also think more money would lead to a lot more "just completely fund the thing" grants, instead of people throwing $1,000-2,000 at a project for "signal boosting" or hoping others come along to fund the thing.

I feel I could do similar and even larger amounts for the animal welfare space.

It really was a time-suck, and I really have experienced the relating point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post.

 

Happy to hear you enjoyed your time regranting! I'd love to get a quick estimate on how much time you spent as a regrantor, just for the purposes of our calibration. My napkin math: (8 grants made * 6h) + (16 grants investigated * 1h) = 64h?

I think my estimate isn't going to be very informative; I intentionally spent more time than I might otherwise endorse working on Manifund stuff, because it was fun and seemed like good skills-building. My best guess as to how much time I would have spent on an otherwise similar process in the absence of this factor is (EDIT: there was a mistake in my BOTEC) 59 (42 to 85) hours.
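(For concreteness, below is a minimal Monte Carlo sketch of this kind of time BOTEC in Python. The distribution parameters are illustrative assumptions seeded from Austin's napkin figures, not the actual inputs behind my 59 (42 to 85) estimate.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative assumptions (not the real BOTEC inputs): hours per funded
# grant and hours per investigated-but-not-funded grant, with lognormal
# uncertainty centered near the napkin figures of 6h and 1h above.
hours_per_funded = rng.lognormal(mean=np.log(6), sigma=0.4, size=N)
hours_per_investigated = rng.lognormal(mean=np.log(1), sigma=0.5, size=N)

# 8 grants made, 16 investigated, per the napkin math above.
total_hours = 8 * hours_per_funded + 16 * hours_per_investigated

lo, mid, hi = np.percentile(total_hours, [10, 50, 90])
print(f"median {mid:.0f}h (80% interval: {lo:.0f}h to {hi:.0f}h)")
```

(Lognormals are a natural choice here since per-grant time is positive and right-skewed; the whole exercise is just picking medians and spreads you'd defend.)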
