FWIW the main point I wanted to make in this post is that individuals should not be reaching out to Anthropic staff who don't actively indicate they want to be pitched directly. Part of our strategy is to have a high-trust call to action, but this is mostly based on conversations we've had with Anthropic staff themselves.
I'm not particularly against newer funds coming on to the scene and agree with a lot of the comments in this post about the pros of doing so.
As a small nitpick, some of these major funds do give a lot of smaller donations to smaller, less established organizations, so I wouldn't say major funds = money going to major orgs.
I interpreted this as the challenge of setting up a foundation with one purpose in mind, and then the people you hire executing something different because of the values they bring to the table. In general, I'd guess that people who work in philanthropic spaces skew left-wing, and so whatever mandate you set will end up skewing more left-wing than you intend (if you yourself are not left-wing).
Apologies, by that I mean that a few Anthropic staff said one thing missing from the donor advisory space was recommendations on what % of their donations to allocate across cause areas, so I tried to make this happen by advocating for a few other organisations and individuals to do it.
Hi Abraham, I'm curious what you think. One difference between FTX and this situation is that FTX hired grantmakers to do the disbursement work. My impression is that most Anthropic staff don't have the time or expertise to set this up themselves, even for a model like a giving circle, nor do they want to.
A challenge in recreating FTX's level of willingness to fund ambitious projects is that either Anthropic donors would need to spend the time setting up foundations individually, or someone with the right expertise would need to set up their own fund and join the fray on more speculative work.
FWIW my vague impression (I have less visibility into other cause areas) is that as funds anticipate an influx of funding into the space, more ambitious and speculative bets seem to be part of the conversation (while hopefully avoiding the downsides that came with FTX funding).
Hi Nick, thanks for engaging. I agree that in writing this, there is a level of scrutiny I've opened myself up to. I'll respond to some of the main points:
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage so far:
The above has made a real dent in AI usage, but much less than we should be aiming for given the gains left on the table. My sense is that these actions have only produced incremental improvements because:
I think the following would meaningfully improve how much individuals and organisations use AI:
What do people think? What have I missed?