> early critiques of GiveWell were basically "Who are you, with no background in global development or in traditional philanthropy, to think you can provide good charity evaluations?"
> That seems like a perfectly reasonable, fair challenge to put to GiveWell. That’s the right question for people to ask!
I agree with this if you read the challenge literally, but in practice the challenges were usually closer to reflexive dismissals that didn't actually engage with GiveWell's work.
Also, I disagree that the only way we were able to build trust in GiveWell was through this:
> only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
We can often just look at the object-level work, study the research and the responses to it, and make up our own minds. Credentials are often useful for navigating this, but they're not always necessary.
> Dustin Moskovitz's net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that's at least $6 billion.
I think this pledge is over their lifetime, not over the next 2-6 years. OP/CG seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin's time frame.
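As a quick sanity check of that arithmetic, here is a minimal sketch; the spending rate and time frame are just the rough estimates above, nothing more precise:

```python
# Rough back-of-the-envelope check of the figures above (all values are approximate).
net_worth_billion = 12            # Dustin Moskovitz's estimated net worth, $B
pledge_fraction = 0.5             # pledged to give away at least 50%
pledged_billion = net_worth_billion * pledge_fraction  # >= $6B, over their lifetime

annual_spend_billion = 1          # rough current OP/CG spending rate, $B/year
years_low, years_high = 2, 6      # Austin's time frame, in years

print(f"Total pledged (lifetime): >= ${pledged_billion:.0f}B")
print(f"Spend over {years_low}-{years_high} years at ~${annual_spend_billion}B/yr: "
      f"${annual_spend_billion * years_low}B-${annual_spend_billion * years_high}B")
```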
> lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing?
You're probably doubting this because you don't think it's a good way to spend money. But that doesn't mean Anthropic employees agree with you.
The not-super-serious answer: US universities are well funded in part because rich alumni like to fund them. Similar reasons might lead Anthropic employees to want to fund EA infrastructure/community building.
If there is an influx of money into 'that sort of thing' in 2026/2027, I'd expect it to look different to the 2018-2022 spending in these areas (e.g. less general longtermist focused, more AI focused, maybe more decentralised, etc.).
> Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.
He was leading the Open Philanthropy arm that was primarily responsible for funding many of the things you list here:
> or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing
> Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
My understanding of what happened is different:
I’m somewhat surprised about the lack of information about Anthropic employees’ donation plans.
Potential reasons:
Interested to hear whether I’ve missed a major consideration and whether people have takes about which of these reasons is most likely/explanatory.
The Stop AI response posted here seems maybe fine in isolation. This might have largely happened due to the Stop AI co-founder having a mental breakdown. But I would hope for Stop AI to deeply consider their role in this as well. The response of Remmelt Ellen (who is a frequent EA Forum contributor and advisor to Stop AI) doesn't make me hopeful, especially the bolded parts:
> An early activist at Stop AI had a mental health crisis and went missing. He hit the leader and said stuff he'd never condone anyone in the group to say, and apologized for it after. Two takeaways:
> - Act with care. Find Sam.
> - Stop the 'AGI may kill us by 2027' shit please.[...]
> I advised Stop AI organisers to change up the statement before they put it out. But they didn't. How to see this: it is a mental health crisis. Treat the person going through it with care, so they don't go over the edge (meaning: don't commit suicide). 2/
> The organisers checked in with Sam everyday. They did everything they could. Then he went missing. From what I know about Sam, he must have felt guilt-stricken about lashing out as he did. He left both his laptop and phone behind and the door unlocked. I hope he's alive. 3/
> Sam panicked often in the months before. A few co-organisers had a stern chat with him, and after that people agreed he needed to move out of his early role of influence. Sam himself was adamant about being democratic at Stop AI, where people could be voted in or out. 4/
> You may wonder whether that panic came from hooking onto some ungrounded thinking from Yudkowsky. Put roughly: that an ML model in the next few years could reach a threshold where it internally recursively improves itself and then plan to take over the world in one go. 5/
> That's a valid concern, because Sam really was worried about his sister dying out from AI in the next 1-3 years. We should be deeply concerned about corporate-AI scaling putting the sixth mass extinction into overdrive. But not in the way Yudkowsky speculates about it. 6/
> Stop AI also had a "fuck-transhumanism" channel at some point. We really don't like the grand utopian ideologies of people who think they can take over society with 'aligned' technology. I've been clear on my stance on Yudkowsky, and so have others. 7/
> Transhumanist takeover ideology is convenient for wannabe system dictators like Elon Musk and Sam Altman. The way to look at this: They want to make people expendable. 8/
> [...]
Thanks a lot for engaging!
One general point: My rough guess is that acceptance rates have stayed largely constant across AI safety programs over the last ~2 years because capacity has scaled with interest. For example, Pivotal grew from 15 spots in 2024 to 38 in 2025. While the 'tail' likely became more exceptional, my sense is that the bar for the marginal admitted fellow has stayed roughly the same.
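To illustrate the mechanism, here is a minimal sketch: the Pivotal spot counts come from the point above, but the applicant numbers are entirely hypothetical, chosen only to show how a programme can grow without the acceptance rate moving.

```python
# Hypothetical illustration: if applicant pools scale roughly with capacity,
# the acceptance rate (and so the bar for the marginal fellow) stays about flat.
# Spot counts are from the comment above; applicant counts are invented.
years = {
    2024: {"spots": 15, "applicants": 300},
    2025: {"spots": 38, "applicants": 760},
}
for year, data in years.items():
    rate = data["spots"] / data["applicants"]
    print(f"{year}: {data['spots']}/{data['applicants']} = {rate:.1%} acceptance rate")
```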
> They might (as I am) be making as many applications as they have energy for, such that the relevant counterfactual is another application, rather than free time.
The model does assume that most applicants aren't spending 100% of their time/energy on applications. However, even if they were, I feel like a lot of this is captured by how much they value their time. I think that the counterfactual of how they spend their time during the fellowship period (which is >100x more hours than the application process) is the much more important variable to get right.
> you also need to consider the intangible value of the counterfactual
This is correct. I assumed most people would take this into account (e.g. subtract their current job's networking value from the fellowship's value), but I might add a note to make this explicit.
> you also ought to consider the information value of applying for whatever else you might have spent the time on
I’m less worried about this one. Since we set the fixed Value of Information quite conservatively already, and most people aren't constantly working on applications, I suspect this is usually small enough to be noise in the final calculation.
> there is a psychological cost to firing out many low-chance applications
I agree this is real, but I think it's covered by the Value of Your Time. If you earn £50/hr but find applying on the weekend fun/interesting, you might set the Value of Your Time at £5/hr. If you are unemployed but find applying extremely aversive, you might price your time at, say, £200/hr.
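Pulling these threads together, here is a minimal sketch of how the terms discussed above might combine into an expected value of applying. The structure and every number below are my own illustrative assumptions, not the actual model being discussed:

```python
# Illustrative expected-value-of-applying calculation; all figures are assumptions.
def ev_of_applying(
    p_accept: float,             # estimated probability of acceptance
    fellowship_value: float,     # value you place on doing the fellowship (£)
    counterfactual_value: float, # value of what you'd otherwise do with those ~100x-as-many hours (£)
    application_hours: float,    # hours the application takes
    value_of_time: float,        # £/hr, adjusted for how fun or aversive you find applying
    value_of_information: float, # fixed, conservatively-set VoI term (£)
) -> float:
    upside = p_accept * (fellowship_value - counterfactual_value)
    application_cost = application_hours * value_of_time
    return upside - application_cost + value_of_information

# e.g. a 5% shot at a fellowship worth £20k more to you than the counterfactual,
# costing 10 hours of applying priced at £50/hr, plus £100 of information value:
print(ev_of_applying(0.05, 30_000, 10_000, 10, 50, 100))  # 1000 - 500 + 100 = 600.0
```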
If you took this seriously, in 2011 you'd have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, CEO with 30 years of experience in charity sector).
But you could have just looked at their websites (GiveWell, Charity Navigator) and tried to figure out for yourself whether one of these organisations is better at evaluating charities.
This feels like a Motte ("skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study") and Bailey (near-total deference to credentialed experts, which someone may relax only once they are formally educated or credentialed themselves). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation.