Funding I found after some googling:
- Tallinn's (and Musk's) seed investments in DeepMind¹
- OpenPhil's $30M grant to OpenAI²
- FTX's $500M³, Tallinn's, and Moskovitz's (and Schmidt's)⁴ investments in Anthropic
I’m curious how you consider the consequences of this support (regardless of original intentions).
What would have happened if this funding had not been offered (at that start-up stage), under some counterfactual business-as-usual scenarios?
Indirect support was also offered by leaders active in the AI Safety community:
- 80K’s job recommendations
- AISC’s research training
- Fathom Radiant’s supercomputer
- FLI’s 2015 conference (which Musk attended, afterward co-founding OpenAI)
- MIRI's Singularity Summit (where Hassabis and Legg pitched Thiel, who became DeepMind's biggest investor)
- FHI public intellectuals taking positions at DeepMind
- MIRI moving the Overton window on AGI
On one hand, I'm curious whether you have specific thoughts on what this indirect support may have led to. On the other hand, it's easy to get vague and speculative there.
So how about we focus on verified grants first?
What are your current thoughts? Any past observations that could ground our thinking?
Links:
Thanks Ryan. Obviously this is a topic where there will be a wide range of opinions. I would be interested to hear what Holden Karnofsky thinks of this grant five years later. He may well have already written about it; I would appreciate it if someone could point me to it if he has.
My big initial concern with both the grant proposal and your comment here is that neither mentions perhaps the most important potential negative: that the grant could have played a role in accelerating the march towards dangerous AI. Instead you mention EA's "implicit endorsement" as being more important.
Even if we assume for the moment that the effect of the grant was net positive in increasing the safety of OpenAI itself, what if it accelerated their progress just a little and helped create this dangerous race we are in? When the head of Microsoft says "the race is on", basically referring to ChatGPT: if this grant made even a 0.001 percent contribution to speeding up that race, which seems plausible, then the grant could still be strongly net negative.
I don't have a problem with your positive opinion (although I strongly disagree), but I think it is good to engage with the stronger counterpoints, rather than with what I see as a bit of a strawman in the "implicit endorsement" negative.
Suppose the grant made the race 0.001% faster overall, but made OpenAI 5% more focused on alignment. That seems like an amazingly good trade to me.
This is quite sensitive to the exact quantitative details, and I think the speed-up is likely way, way more than 0.001%.
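To make the sensitivity concrete, here is a toy expected-value sketch (my own framing; H and A are hypothetical stand-ins, not estimates from the grant write-up). Let H be the total harm from the race running at full speed and A the value of OpenAI being fully focused on alignment. On the numbers above, the trade is net positive only if:

```latex
% Toy comparison; H and A are hypothetical stand-ins, not quantities
% from the grant write-up or either comment above.
\[
  \underbrace{0.05\,A}_{\text{alignment gain}}
  \;>\;
  \underbrace{0.00001\,H}_{\text{acceleration cost}}
  \quad\Longleftrightarrow\quad
  \frac{A}{H} \;>\; 2\times10^{-4}.
\]
% If the speed-up is 1% rather than 0.001%, the condition tightens
% to A/H > 0.2, i.e. full alignment focus must be worth a fifth of
% the total harm of the race for the trade to remain positive.
```

On the 0.001% numbers the trade looks robust, but bumping the speed-up estimate by a few orders of magnitude flips the conclusion unless alignment focus is correspondingly valuable.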
Oh, I just think the effect of the $30 million is way smaller than the total value of labor from EAs working at OpenAI, such that the effect of the money is dominated by EAs being more likely to work there. I'm not confident in this, but the money seems pretty unimportant ex post, while the labor seems quite important.
I think the speed-up in timelines from people with EA/longtermist motivations working at OpenAI is more like 6 months to 3 years (I tend to think this speed-up is bigger than o…
Do you think it has had a net positive impact so far?