Funding I found after some googling:
- Tallinn's (and Musk's) seed investments in DeepMind¹
- OpenPhil's $30M grant to OpenAI²
- FTX's $500M³, Tallinn's, and Moskovitz's (and Schmidt's)⁴ investments in Anthropic
I’m curious how you weigh the consequences of this support (regardless of the original intentions).
What would have happened if this funding had not been offered at that start-up stage, under some counterfactual business-as-usual scenario?
Indirect support was also offered by leaders active in the AI Safety community:
- 80K’s job recommendations
- AISC’s research training
- Fathom Radiant’s supercomputer
- FLI’s 2015 conference (which Musk attended, afterward co-founding OpenAI)
- MIRI's Singularity Summit (which enabled Hassabis and Legg to pitch DeepMind to their biggest early investor, Thiel)
- FHI public intellectuals taking positions at DeepMind
- MIRI moving the Overton window on AGI
On one hand, I’m curious whether you have specific thoughts on what this indirect support may have led to. On the other hand, it’s easy to get vague and speculative there.
So how about we focus on verified grants first?
What are your current thoughts? Any past observations that could ground our thinking?
I think OpenPhil's grant to OpenAI is quite likely the best grant that OpenPhil has made in terms of counterfactual positive impact.
It's worth noting that OpenPhil's grant to OpenAI was made to acquire a board seat and, more generally, to establish a relationship, rather than because adding more money to OpenAI was a good use of funds at the margin.
See the grant write-up here, which discusses the motivation for the grant in detail.
Generally, I think this grant made influencing OpenAI notably easier (due to the board seat), and that influence seems quite good and has led to various good consequences (an increased emphasis on AGI alignment, for example).
The cost in dollars was quite low.
The main downside I can imagine is that this grant served as an implicit endorsement of OpenAI, which resulted in a bunch of EAs working there, and that this was net-negative. My guess is that having these EAs work at OpenAI was probably good on net (due to a combination of acquiring influence and doing safety work; I don't currently think the capabilities work was good on its own).
Do you think it ended up having a net positive impact so far?