Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(The thread continues with more details -- this looks like a major development!)
I don't really understand Dario's[1] thought process or strategy.
At a (very rough) guess: he thinks that Anthropic alone can develop AGI safely, and needs the money to keep up with OpenAI, Meta, and any other competitors, because they would cause massive harm to the world and can't be trusted to develop it?
If that's true, then I want someone to hold his feet to the fire on it, in the style of Gary Marcus telling the Senate hearing that Sam Altman had dodged their question about his 'worst fear' - make him say it in an open political hearing, as a matter of record.
[1] Dario Amodei, Founder/CEO of Anthropic
I'm basically making the same point as the parent comment, though perhaps a bit more starkly, and with the additional point about the lack of a democratic mandate. Yet that comment is at +36 karma and mine is at -6. This is why we need a separate "outside game" movement on AI x-safety.