My top candidates:
- AI Safety and Governance Fund
- PauseAI US
- Center for AI Policy
- Palisade
- MIRI
A classification of every other org I reviewed:
- Good but not funding-constrained: Center for AI Safety, Future of Life Institute
- Would fund if I had more money: Control AI, Existential Risk Observatory, Lightcone Infrastructure, PauseAI Global, Sentinel
- Would fund if I had a lot more money, but might fund orgs in other cause areas first: AI Policy Institute, CEEALAR, Center for Human-Compatible AI, Manifund
- Might fund if I had a lot more money: AI Standards Lab, Centre for the Governance of AI, Centre for Long-Term Policy, CivAI, Institute for AI Policy and Strategy, METR, Simon Institute for Longterm Governance
- Would not fund: Center for Long-Term Resilience, Center for Security and Emerging Technology, Future Society, Horizon Institute for Public Service, Stop AI
Your ranking is negatively correlated with my (largely deference-based) beliefs (and I think weakly negatively correlated with my inside view). Your analysis identifies a few issues with orgs-I-support that seem likely to be true, and important if true. So this post will cause me to develop more of an inside view, or at least to prompt the-people-I-defer-to with some points you raise. Thanks for writing this post. [This is absolutely not an endorsement of the post's conclusions. I have lots of disagreements. I'm just saying parts of it feel quite helpful.]
Here's my longtermist, AI-focused list. I really haven't done my research; e.g., I read zero marginal funding posts. This is mostly a vote for MATS.
I would have ranked The Midas Project around 5 but it wasn't an option.
"Improve US AI policy 5 percentage points" was defined as
Instead of buying think tanks, this option lets you improve AI policy directly. The distribution of possible US AI policies will go from being centered on the 50th-percentile-good outcome to being centered on the 55th-percentile-good outcome, as per your personal definition of good outcomes. The variance will stay the same.
(This is still poorly defined.)
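A minimal way to sharpen it, under my own reading (the notation is mine, not from the election post): let $G$ be the goodness of the realized US AI policy outcome, as per your personal definition, and let $F$ be its CDF under the status quo. Then the option shifts the distribution of $G$ so that its center moves from the status-quo median to the status-quo 55th percentile, with the spread held fixed:

$$\text{center}(G):\ F^{-1}(0.50) \longrightarrow F^{-1}(0.55), \qquad \operatorname{Var}(G)\ \text{unchanged}.$$

(What "center" means and what the outcome space is remain underspecified, hence still poorly defined.)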
A few DC and EU people tell me that in private, Anthropic (and others) are more unequivocally antiregulation than their public statements would suggest.
I've tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.
I believe that Anthropic's policy advocacy is (1) bad and (2) worse in private than in public.
But Dario and Jack Clark do publicly oppose strong regulation. See https://ailabwatch.org/resources/company-advocacy/#dario-on-in-good-company-podcast and https://ailabwatch.org/resources/company-advocacy/#jack-clark. So this letter isn't surprising or a new betrayal — the issue is the preexisting antiregulation position, insofar as it's unreasonable.
Actually, this is a poor description of my reaction to this post. Oops. I should have said:
Digital mind takeoff is maybe-plausibly crucial to how the long-term future goes. But this post seems to focus on short-term stuff, such that the considerations it discusses miss the point (according to my normative and empirical beliefs). Like, the y-axis in the graphs tracks what matters short-term (and it's at most weakly associated with what matters long-term: affecting the values the von Neumann probes carry, or similar). And the post is just generally concerned with short-term stuff, e.g. being particularly concerned about "High Maximum Altitude Scenarios": aggregate welfare capacity "at least that of 100 billion humans" "within 50 years of launch." Even ignoring these particular numbers, the post is ultimately concerned with stuff that's a rounding error relative to the cosmic endowment.
I'm much more excited about "AI welfare" work that's about what happens with the cosmic endowment, or at least (1) about stuff directly relevant to that (like the long reflection) or (2) connected to it via explicit heuristics, like: the cosmic endowment will be used better in expectation if "AI welfare" is more salient when we're reflecting or choosing values or whatever.
My impression is that CLTR mostly adds value via its private AI policy work. I agree its AI publications don't seem super impressive, but maybe that's OK.
Probably the same for The Future Society and some others.