Oops, my colleague checked again and the Future Perfect inclusions (Kelsey and Sigal) are indeed a mistake; OP hasn't funded Future Perfect. Thanks for the correction. (Though see e.g. this similar critical tweet from OP grantee Matt Reardon.)
Re: Eric Neyman. We've funded ARC before and would do so again depending on RFMF/etc.
I'm following up here with a convenience sample of examples of OP staff and grantees criticizing frontier AI companies, collected by one of my colleagues, since some folks seem to doubt how common this is:
Thanks! This does seem helpful.
One random question/possible correction:
Is Kelsey an OpenPhil grantee or employee? Future Perfect never listed OpenPhil as one of its funders, so I am a bit surprised. Possibly Kelsey received some other OP grants, but I had a bit of a sense that Kelsey and Future Perfect more generally cared about having financial independence from OP.
Relatedly, is Eric Neyman an Open Phil grantee or employee? I thought ARC was not being funded by OP either. Again, maybe he is a grantee for o...
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of...
"It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don't think it's uncommon for someone who secretly suspects it's all a load of nonsense to diplomatically say a statement like ...
Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work
My best guess (though very much not a confident guess) is that the aggregate of these efforts is net-negative, and I think that is correlated with that work having happened in backrooms, often in contexts where people were unable to talk about their honest motivations. It sure is really hard to tell, but I really want people to consider the hypoth...
If you know people who could do good work in the space, please point them to our RFP! As for being anti-helpful in some cases, I'm guessing those were cases where we thought the opportunity wasn't great despite being right-of-center (which is a point in favor, in my opinion), but I'm not sure.
Replying to just a few points…
I agree about tabooing "OP is funding…"; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.
Re: "large negative incentive for founders and organizations who are considering working more with the political right." I'll note that we've consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that "engages...
Good Ventures did indicate to us some time ago that they don't think they're the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment's claim about an aversion to opportunities that are "even slightly right of center in any policy work," (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.
Also, I don't actually think this should affect people's actions much...
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I think it might be a good idea to taboo the phrase "OP is funding X" (at least when talking about present day Open Phil).
Historically, OP would have used the phrase "OP is funding X" to mean "referred a grant to X to GV" (which was approximately never rejected). One was also able to roughly assume that if OP decided not to recommend a grant to GV, then most OP staff did not think that grant would be more cost-effective than other grants referred to GV (and as such, the word people used to describe OP not referring a grant to GV was "rejecting X" or "...
I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages
(Fwiw, the Metaculus crowd prediction on the question ‘Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?’ currently sits at 43%.)
Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this well...
Re: why our current rate of spending on AI safety is "low." At least for now, the main reason is lack of staff capacity! We're putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we'd like. If you want our AI safety spending to grow faster, please encourage people to apply!
I'll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.
Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn't approve grants. We all still collaborate to some degree, but new hires shouldn't e.g. expect to work closely with Holden.
We fund a lot of groups and individuals and they have a lot of different (and sometimes contradicting) policy opinions, so the short answer is "yes." In general, I really did mean the "tentative" in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.
That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and they also weren't very persuasive that their alternate approach would be better for x-risk reduction, then they might not be a good fit for the team.
Indeed. There aren't hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team's "jurisdiction." We just try to communicate about it a lot, and our team leads aren't very possessive about their territory — we just want to get the best stuff done!
The hiring is more incremental than it might seem. As explained above, Ajeya and I started growing our teams earlier via non-public rounds, and are now just continuing to hire. Claire and Andrew have been hiring regularly for their teams for years, and are also just continuing to hire. The GCRCP team only came into existence a couple months ago and so is hiring for that team for the first time. We simply chose to combine all these hiring efforts into one round because that makes things more efficient on the backend, especially given that many people might ...
The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more "direct" work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she'd like to hire more. So the answer to "Why now?" on alignment grantmaking is "Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initia...
There are now also two superforecaster forecasts about this:
Another historical point I'd like to make is that the common narrative about EA's recent "pivot to longtermism" seems mostly wrong to me, or at least more partial and gradual than it's often presented to be, because all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — were major themes in the movement from its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then.
MacAskill was definitely a longtermist in 2012. But I don't think he mentioned it in Doing Good Better, or in any of the more public/introductory narratives around EA.
I think the "pivot to longermism" narrative is a reaction to a change in communication strategy (80000 hours becoming explicitly longtermist, EA intro materials becoming mostly longtermist). I think critics see it as a "sharp left turn" in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.
There's a previous discussion h...
What's his guess about how "% of humans enslaved (globally)" evolved over time? See e.g. my discussion here.
How many independent or semi-independent abolitionist movements were there around the world during the period of global abolition, vs. one big one that started with Quakers+Britain and then was spread around the world primarily by Europeans? (E.g. see footnote 82 here.)
Re: more neurons = more valenced consciousness, does the full report address the hidden qualia possibility? (I didn't notice it at a quick glance.) My sense was that people who argue for more neurons = more valenced consciousness are typically assuming hidden qualia, but your objections involving empirical studies are presumably assuming no hidden qualia.
We have a report on conscious subsystems coming out I believe next week, which considers the possibility of non-reportable valenced conscious states.
Also (speaking only about my own impressions), I'd say that while some people who talk about neuron counts might be thinking of hidden qualia (e.g. Brian Tomasik), it's not clear to me that that is the assumption of most. I don't think the hidden qualia assumption, for example, is an explicit assumption of Budolfson and Spears or of MacAskill's discussion in his book (though of course I can't speak to what they believe privately).
Re: Shut Up and Divide. I haven't read the other comments here but…
For me, effective-altruism-like values are mostly second-order, in the sense that my revealed behavior shows that much of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do the thing consistent with my second-order desires: something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will M...
+1, I'd also recommend using colours that are accessible for people with colour vision deficiency.
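For anyone who wants a concrete starting point, here's a minimal sketch of what that can look like in practice, assuming Python/matplotlib and the widely used Okabe-Ito colorblind-safe palette; the series names and data are made up purely for illustration:

```python
# Minimal sketch: plotting several series with the Okabe-Ito colorblind-safe palette.
# Assumes matplotlib is installed; the labels and data below are hypothetical.
import matplotlib.pyplot as plt

# Okabe-Ito palette: chosen to stay distinguishable under the common forms of
# color vision deficiency.
OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#F0E442",
             "#0072B2", "#D55E00", "#CC79A7", "#000000"]

series = {
    "Scenario A": [1, 2, 3, 5, 8],
    "Scenario B": [1, 1, 2, 3, 5],
    "Scenario C": [2, 3, 4, 4, 6],
}
linestyles = ["-", "--", ":", "-."]

fig, ax = plt.subplots()
for (label, ys), color, ls in zip(series.items(), OKABE_ITO, linestyles):
    # Varying line style as well as color keeps the lines distinguishable
    # even in grayscale.
    ax.plot(ys, color=color, linestyle=ls, label=label)
ax.legend()
plt.show()
```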
As someone with a fair amount of context on longtermist AI policy-related grantmaking that is and isn't happening, I'll just pop in here briefly to say that I broadly disagree with the original post and broadly agree with [abergal's reply](https://forum.effectivealtruism.org/posts/Xfon9oxyMFv47kFnc/some-concerns-about-policy-work-funding-and-the-long-term?commentId=TEHjaMd9srQtuc2W9).
Hey Luke. Great to hear from you. Also thank you for your pushback on an earlier draft where I was getting a lot of stuff wrong and leaping to silly conclusions; it was super helpful. FWIW I don't know how much any of this applies to OpenPhil.
Just to pin down what it is you agree / disagree with:
For what it is worth, I also broadly agree with abergal's reply. The tl;dr of both the original post and abergal's comment is basically the same: hey, it [looks like from the outside / is the case that] the LTFF is applying a much higher bar to direct policy work tha...
FWIW I don't use "theory of victory" to refer to 95th+ percentile outcomes (plus a theory of how we could plausibly have ended up there). I use it to refer to outcomes where we "succeed / achieve victory," whether I think that represents the top 5% of outcomes or the top 20% or whatever. So e.g. my theory of victory for climate change would include more likely outcomes than my theory of victory for AI does, because I think succeeding re: AI is less likely.
FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.
Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm ex...
Thanks for this comment. I really appreciate what you said about just being excited to help others as much as possible, rather than letting insecurities get the better of you.
Interesting that you mentioned the idea of an EA webzine because I have been toying with the idea of creating a blog that shares EA ideas in a way that would be accessible to lots of people. I’m definitely going to put some more thought into that idea.
Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)
Despite ample need for materials science in pandemic prevention, electrical engineers in climate change, civil engineers in civilisational resilience, and bioengineering in alternative proteins, EA has not yet built a community fostering the talent needed to meet these needs.
Also engineers who work on AI hardware, e.g. to help develop the technologies and processes needed to implement most compute governance ideas!
Nunn-Lugar; see quick summary here: https://www.openphilanthropy.org/blog/ai-governance-grantmaking
practice the virtue of silence
Honest question: Besides the negative example of myself, can you or the OP give some examples of practicing or not practicing this virtue?
Motivation: The issue here is heterogeneity (which might be related in a deeper way to the post itself).
I think some readers are going to overindex on this and become very silent, while I, or other unvirtuous people, will just ignore it (or even pick at it irritatingly).
So without more detail, the result could be the exact opposite of what you want.
An important point here is that if you're considering this move, there's a decent/good chance you'll be able to find career transition funding, giving you 3-12 months of runway after you quit your job during which you can talk to people full-time, read lots of stuff, apply to lots of things, etc., without having to burn through much (or any) of your savings while trying to make the transition work.
I agree. To point to one concrete source of funding for this call to action, I encourage relevant onlookers to look into the Effective Altruism Infrastructure Fund.
It's a fair question. Technically speaking, of course progress can be more incremental, and some small pieces can be built on with other small pieces. Ultimately that's what happened with Khan's series of papers on the semiconductor supply chain and export control options. But in my opinion that kind of thing almost never really happens successfully when it's different authors building on each other's MVPs (minimum viable papers) rather than a single author or team building out a sorta-comprehensive picture of the question they're studying, with all the context and tacit knowledge they've built up from the earlier papers carrying over to how they approach the later papers.
I am copying footnote 19 from the post above into this comment for easier reference/linking:
The "defense in depth" concept originated in military strategy (Chierici et al. 2016; Luttwak et al. 2016, ch. 3; Price 2010), and has since been applied to reduce risks related to a wide variety of contexts, including nuclear reactors (International Nuclear Safety Advisory Group 1996, 1999, 2017; International Atomic Energy Agency 2005; Modarres & Kim 2010; Knief 2008, ch. 13.), chemical plants (see "independent protection layers" and "layers of protection anal... (read more)