The terms of the deal, through which Google will take a stake of about 10 per cent, require Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement.


While Microsoft has sought to integrate OpenAI’s technology into many of its own services, Google’s relationship with Anthropic is limited to acting as the company’s tech supplier in what has become an AI arms race, according to people familiar with the arrangement.


Anthropic has developed an intelligent chatbot called Claude, rivalling OpenAI’s ChatGPT, though it has not yet been released publicly.

The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.


Google’s investment was made by its cloud division, run by former Oracle executive Thomas Kurian. Bringing Anthropic’s data-intensive computing work to Google’s data centres is part of an effort to catch up with the lead Microsoft has taken in the fast-growing AI market thanks to its work with OpenAI. Google’s cloud division is also working with other start-ups such as Cohere and C3 to try to secure a bigger foothold in AI.


I'd be interested in hearing someone from Anthropic discuss the upsides or downsides of this arrangement. From an entirely personal standpoint, it seems odd that Anthropic gave up equity AND had restrictions on how the investment could be used. That said, I imagine there are MANY other details I'm not aware of, since I wasn't involved in the decision.

From an entirely personal standpoint, it seems odd that Anthropic [...] had restrictions on how the investment could be used.

My assumption is (and I'm definitely not sure about this) that restricting funding to compute is not very restrictive at all, given that (a) Anthropic probably does and will spend large sums of money on compute, likely more than this investment covers, and (b) the money they're currently spending on compute can easily be shifted to other areas now that it's freed up. (If Anthropic isn't currently using Google Cloud Platform, I guess it's more restrictive in that it forces them to migrate to Google's cloud.) But yeah, I'd also be curious to hear an insider's view on this.

Thanks for posting — I'm not seeing a lot of people talking about this (plausibly quite important!) event.

I think it makes sense to be worried that Anthropic devolves into a mere arms racer over subsequent years, though the specific role of industry partnerships in increasing this worry is something I'm less confident about. (An Anthropic employee has told me his reasons for not being worried about this, but that's different from credible signals or commitments from leadership.)

What are those reasons, please?

Nah I'm not gonna propagate a second-hand account, sorry. Too easy to get details wrong etc. 

I'm also surprised by the pattern where it starts as "let's work on safety together! Let's share ideas and work on a good thing", then some entity grows bigger and bigger and starts taking unilateral, controversial decisions, while still using the name and support of the community.

I see the same scheme as with SBF: first, the community is used to build The Thing. Then, The Thing forgets about anything ethical or safe and just turns into an "effective profit/PR maximizer". I feel kinda conned and used. Even though I'm mostly talking to AI ethicists now, I still regarded Anthropic as something not evil. Even I was shocked.

I ask people to demand answers from them. I feel there's a case for no confidence in trusting Anthropic to do well at what they are doing.

I used to be more in favor of "let's bring safety and ethics people together" (which ethicists didn't like), but less and less so over time. Now I don't know anymore. I want answers.

I feel traumatized in general by the safety community and EA. I did research internships at Google and CHAI Berkeley, and later ran an ethics nonprofit. All of those were somewhat EA-aligned (not 100% outside). I don't know how I can trust people who say "safety" anymore.

What is going on?