One take is that the movement's care for animal welfare as a cause area has grown over time, but its care and concern for AI safety/x-risk reduction has increased even more, so people are shifting their limited time and resources towards those cause areas. This leads to a dynamic where the movement wants animal advocacy efforts to win, but isn't the one dedicating its donations or careers to the effort.
Thanks for sharing your thoughts, Tyler. I tend to think that 2 & 3 account for most of the funding discrepancies.
At the same time, I do think there might be a discrepancy between the ideal and actual allocation of talent, with so many EAs focused on working in AI safety/x-risk reduction. To be clear, I think these are incredibly important areas, but maybe a few EAs who are on the fence should work in animal advocacy instead.
I think one of the challenges here is that the people who are respected/have a leadership-type role on cause prioritisation have, I get the sense, been reluctant to weigh in, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: maybe part of what's going on here is that the charity comparison numbers GiveWell produces, or comparisons between charities within a cause area in general, are one level of crazy and difficult. But the moment you get to cross-cause comparisons, these numbers become several orders of magnitude more crazy and uncertain. And maybe there's a reluctance to use the same methodology for something so much more uncertain, because it's a less useful tool and there's a risk it's perceived as more solid than it is.
Overall, I think more people who have insights on cause prio should be saying: if I had a billion dollars, here's how I'd spend it, and why.
Oh, this is nice to read, as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP's moral weights project).
Some rough thoughts: it's when we get to comparing the Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan (although some have tried). I also think the uncertainty here is so large that it's harder to defer to experts in the way one can defer to GiveWell if they care about helping the world's poorest people alive today.
But I do agree that people need a way to decide, and Anthropic staff are incredibly time-poor, and some of these interventions are very time sensitive if you have short timelines. That raises the question: if I'm recommending worldview diversification, which cause areas get attention and how do we split among them?
I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, their work has gone over my head, and while I don't want to speak for them, my understanding is they might be doing more in this space soon.
I think the moment you try to compare charities across causes, especially for ones that rest on harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how impossibly crazy any solid numbers are, how much they rest on uncertain philosophical assumptions, and how wide the error margins are. At that point you're either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled cause prioritisation.
My understanding is that all of the EA high net worth donor advisors, like Longview, GiveWell, Coefficient Giving, Senterra Funders (the org I work at), and many others, are able to pitch their various offers to folks at Anthropic.
What has been missing is a recommended cause prio split and/or resources, but some orgs are starting to work on this now.
I think that any way to systematise this, where you complete a quiz and it gives you an answer, is too superficial to be useful. High net worth funders need to decide for themselves whether or not they trust specific grant makers beyond whether or not those grant makers are aligned with their values on paper.
It's great to hear that being on the front foot and reaching out to people with specific offers has worked for you.
I actually want to push back on your advice for many readers here. For many people who aren't getting jobs, I think the reason is not that the jobs are too competitive, but that they're not meeting the bar for the role. This seems more common for EAs with little professional experience, as many employers want applicants who have already been trained. In AI safety, it also seems like some parts of the problem require an exceptional level of talent or skill to meaningfully contribute.
In addition to applying for more jobs or reaching out to people directly, I'd also recommend:
I realise short timelines make this all much harder, but I do think many people early in their career do their best work in the environment of an organisation, team, manager, etc.
I agree re the career problem. I wonder how much additional money would fix it versus other issues, like the cultures of the two movements/ecosystems, the status of working in the spaces, etc.