Yeah I have, and my impression from those I've spoken with is that this has not been the case. You don't think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:
How organisations with low AI usage can and should use it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage so far:
The above has made a real dent in AI usage, but much less than we should be aiming for given the gains left on the table. My sense is that these actions have only produced incremental improvements because:
I think the following would meaningfully improve how much individuals and organisations use AI:
What do people think? What have I missed?
FWIW the main point I wanted to make in this post is that individuals should not be reaching out to Anthropic staff who haven't actively indicated they want to be pitched directly. Partly this reflects our strategy of having a high-trust call to action, but it's mostly based on conversations we've had with Anthropic staff themselves.
I'm not particularly against newer funds coming onto the scene, and I agree with a lot of the comments in this post about the pros of doing so.
As a small nitpick, some of these major funds do give a lot of smaller donations to less established organisations, so I wouldn't say major funds = money goes to major orgs.
I interpreted this as the challenge of setting up a foundation with one purpose in mind, and then the people you hire executing something different because of the values they bring to the table. In general, I'd guess that people who work in philanthropic spaces skew left-wing, and so whatever mandate you set will end up skewing more left-wing than you intend (if you yourself are not left-wing).
Apologies, by that I mean a few Anthropic staff said one thing missing from the donor advisor space was recommendations on what % of their donations to allocate across cause areas, so I tried to make this happen by encouraging a few other organisations and individuals to offer it.
Hi Abraham, I'm curious what you think, but one difference between FTX and this situation is that FTX hired grantmakers to do the disbursing. My impression is that most Anthropic staff don't have the time or expertise to set this up themselves, even with a model like a giving circle, nor do they want to.
A challenge in recreating FTX's level of willingness to fund ambitious projects seems to be that Anthropic donors would either need to want to spend the time setting up foundations individually, or someone with the right expertise would need to set up their own fund and join the fray on more speculative work.
FWIW my vague impression (I have less visibility into other cause areas) is that as funds anticipate an influx of money into the space, making more ambitious and speculative bets seems to be part of the conversation (while hopefully avoiding the downsides that came with FTX funding).
I appreciate the efforts to try and bridge two projects you think are valuable. A few thoughts/comments/disagreements:
1. One way to read this is that it boils down to: if you like EA, but also want more metacrisis/sensemaking/systems thinking than EA typically offers, then that's us. Come say hi.
2. I feel like there's some irony here: EA conversation norms tend towards very direct communication, while sensemaking folks tend to speak more indirectly, and the pitch for integral altruism itself feels framed in fairly indirect language at times. It's hard to name the exact dynamic, but I found myself working hard to understand parts of what this paragraph is trying to say (maybe that's just me):
3. Some of these points seem surprising to include as things integral altruism adds, since they seem to me to be a regular part of EA discourse. I'm thinking of the sections that discuss valuing other things in life besides impact, and that inner work can lead to more impact.
4. I think a big decision point here is whether the merits of integral altruism will be argued on the territory of EA assumptions or not, and this post seems to move between the two. For example, you claim there are real downsides to seeing x-risk in isolation rather than as interconnected with other problems. This seems big and important if true, and like something that could be argued comfortably within EA norms. I appreciate that puts the burden on you, but if you persuade folks here, I imagine that would be a big win for everyone. FWIW, whenever I've listened to folks talk about the metacrisis, I've literally not been able to understand the arguments. It could be a huge service to try to make the case for the metacrisis in EA-friendly language.