Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.


(Thread continues from there with more details -- seems like a major development!)


If this is true, I will update even further in the direction of the creation of Anthropic being a net negative for the world.

Amazon is a massive multinational driven almost solely by profit, one that will continuously push for more and more while paying less and less attention to safety.

It surprised me a bit that Anthropic would allow this to happen.

Disagree. The natural, no-Anthropic, counterfactual is one in which Amazon invests billions into an alignment-agnostic AI company. On this view, Anthropic is levying a tax on AI-interest where the tax pays for alignment. I'd put this tax at 50% (rough order of magnitude number).

If Anthropic were solely funded by EA money and didn't capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic's impact would have to be measured against the best alternative altruistic use of the money.

I suppose you see this Amazon investment as evidence that Anthropic is profit motivated, or likely to become so. This is possible, but you'd need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?

I think the modal no-Anthropic counterfactual does not have an alignment-agnostic AI company that's remotely competitive with OpenAI, which means there's no external target for this Amazon investment. It's not an accident that Anthropic was founded by former OpenAI staff who were substantially responsible for OpenAI's earlier GPT scaling successes.

What do you think the bottleneck for this alternate AI company’s competitiveness would be? If it’s talent, why is it insurmountable? E.g. what would prevent them from hiring away people from the current top labs?

There are alternatives - x.AI and Inflection. Arguably they only got going because the race was pushed to fever pitch by Anthropic splitting from OpenAI.

It seems more likely to me that they would have gotten started anyway once ChatGPT came out. Although I was interpreting the counterfactual as being if Anthropic had declined to partner with Amazon, rather than if Anthropic had not existed.

I'm not sure if they would've ramped up quite so quickly (i.e. getting massive investment) if it wasn't for the race heating up with Anthropic entering. Either way, it's all bad, and a case of which is worse.

This is assuming that Anthropic is net positive even in isolation. They may be doing some alignment research, but they are also pushing the capabilities frontier. They are either corrupted by money and power, or hubristically think that they can actually save the world following their strategy, rather than just end it. Regardless, they are happy to gamble hundreds of millions of lives (in expectation) without any democratic mandate. Their "responsible scaling" policy is anything but (it's basically an oxymoron at this stage, when AGI is on the horizon and alignment is so far from being solved).

Yeah, not sure how much this is good news, or what level of interference and vested interests will inevitably come up.

I was going to reply to this comment, but after seeing the comments here, I've decided to abstain from sharing information on this specific post. The confidence that people here have about this being bad news, rather than uncertain news, indicates very dangerous levels of incompetence, narrow-mindedness, and even unfamiliarity with race dynamics (e.g. how one of the main risks of accelerating AI, even early on, comes from the creation of executives and AI engineers who neurotically pursue AI acceleration).

NickLaing is just one person, and if one person doesn't have a complete picture then that's not a big deal; that's random error and it happens to everyone. When a dozen or more people each have an incomplete picture and confidently take aggressive stances against Anthropic, then that's a very serious issue. I now have a better sense of why Yudkowsky became apprehensive about writing about AI publicly, or why Dustin Moskovitz throws his weight behind Anthropic and insists that they're the good guys. If the people here would like to attempt to develop a perspective on race dynamics, they can start with the Yudkowsky-Christiano debate, which is balanced, or Yudkowsky's List of Lethalities and Christiano's response. Johnswentworth just put up a great post relevant to the topic. Or just read Christiano's response or Holden's Cold Takes series; the important thing here isn't balance, it's having any perspective at all on race dynamics before you decide whether to tear into Anthropic's reputation.

Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a "very serious issue," and lacking "any perspective at all." 

This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketch: these issues are hard to understand, and so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc) will likely lead to fewer bad takes, but, more importantly, risks leading to an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore, but echo chambers are hard to escape from.

This makes sense and it changed my mind; rudeness should stay on LessWrong, where Bayes Points rule the scene. Also, at the time I'm leaving this comment, the distribution of support on this page has shifted such that the ratio of opposition to the deal to uncertainty about the deal is less terrible; it was pretty bad when I wrote this comment.

I still think that people are too harsh on Anthropic, and that has consequences. I was definitely concerned as well when I first found out about this; Amazon plays hardball, and is probably much more capable of doing cultural investigations and appearing harmless than Anthropic thinks. NickLaing's comment might have been far more carefully worded than I thought. But at the same time, if Dustin opposes the villainization of Anthropic and Yudkowsky is silent on the matter, that suggests mobbing Anthropic is the wrong move, with serious real-life consequences.

I consider this sort of "oh, I have a take but you guys aren't good enough for it" type perspective deeply inappropriate for the Forum -- and I say that as someone who is considerably less "anti-Anthropic" than some of the comments here.

That's plausibly good for community-building, but from an infosec perspective, you don't really know what kinds of people are reading the comments, or what kind of person they will be in a year or so. In an extreme scenario, people could start getting turned. But the more likely outcome is that people hired by various bigcorps (and possibly intelligence agencies) are utilizing the EA Forum for open-source intelligence; this is far more prevalent than most people think.

Hey Trevor, thanks for the reply. Personally I think the downvoting is a bit harsh. It's true I'm not an AI expert in any sense, and that this is a hot take without a deep look into the situation. You aren't wrong there.

To be fair to myself, I didn't take an aggressive stance on Anthropic, just said that I was updating more towards them being net negative.

I do agree there is enormous uncertainty here, but I think that should mean we are less harsh on hot takes from all ends of the spectrum, and more willing to engage with a wide range of perspectives.

I don't agree with this "When a dozen or more people each have an incomplete picture and confidently take aggressive stances against Anthropic, then that's a very serious issue."

For me this isn't a "very serious issue"; it should just give you an idea of what many people's initial reactions are, and show you the arguments you need to refute or add nuance to. Why is this so serious?

I don't think it's at all obvious whether this development is good or bad (though I would lean towards bad), but both here and on LessWrong you have not made a coherent attempt to support your argument. Your concept of "redundancy" in AI labs is confusing and the implied connection to safety is tenuous.

Your concept of "redundancy" in AI labs is confusing and the implied connection to safety is tenuous.

Sorry to nitpick, but I think this specific sentence isn't true at all; my concept of "redundancy" wasn't confusing and the implied connection to safety isn't tenuous.

I am curious if the FTX stake in Anthropic is now valuable enough to plausibly bail out FTX? Or at least put a dent in the amount owed to customers who were scammed?

I've lost track of the gap between assets and liabilities at FTX, but this is a $4B investment for a minority stake, according to news reports. Which implies Anthropic has a post-money valuation of at least $8B. Anthropic was worth $4.6B in June according to this article. So the $500M stake reportedly held by FTX ~~should~~ might be worth around double whatever it was worth in June, and possibly quite a bit more.

Edit: this article suggests the FTX asset/liability gap was about $2B as of June. So the rise in valuation of the Anthropic stake is certainly a decent fraction of that, though I'd be surprised if it's now valuable enough to cover the entire gap.

Edit 2: the math is not quite as simple as I made it seem above, and I've struck out the word "should" to reflect that. Anyway, I think the question is still the size of the minority share that Amazon bought (which has not been made public AFAICT), as that would determine Anthropic's implied post-money valuation.
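To make the back-of-envelope reasoning above explicit, here is a minimal sketch. It is purely illustrative: the figures are rough numbers from news reports, the actual size of Amazon's stake has not been disclosed, and dilution from the newly issued shares is ignored.

```python
# Illustrative back-of-envelope sketch only; all figures are rough
# news-report numbers and Amazon's actual stake size is not public.

amazon_investment = 4.0   # $B, the announced "up to" figure
june_valuation = 4.6      # $B, reported Anthropic valuation in June
ftx_stake_cost = 0.5      # $B, reported FTX investment in Anthropic

# A "minority stake" means Amazon receives less than 50% of the company,
# so the post-money valuation must be at least investment / 0.5.
implied_valuation_floor = amazon_investment / 0.5   # >= $8B

# Crude appreciation factor for a pre-existing stake since June
# (ignoring dilution, which is why this is only a rough estimate).
appreciation = implied_valuation_floor / june_valuation   # ~1.7x, i.e. "around double"

print(f"Implied post-money valuation floor: ${implied_valuation_floor:.0f}B")
print(f"Rough appreciation since June: {appreciation:.1f}x")
```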

I do not understand Dario's[1] thought process or strategy, really.

At a (very rough) guess, he thinks that Anthropic alone can develop AGI safely, and that they need money to keep up with OpenAI/Meta/any other competitors, because those competitors are going to cause massive harm to the world and can't be trusted to develop it safely?

If that's true then I want someone to hold his feet to the fire on that, in the style of Gary Marcus telling the Senate hearing that Sam Altman had dodged their question on what his 'worst fear' was - make him say it in an open, political hearing as a matter of record.

1. ^ Dario Amodei, Founder/CEO of Anthropic

See Dario's Senate testimony from two months ago:

With the fast pace of progress in mind, we can think of AI risks as falling into three buckets:

●  Short-term risks are those present in current AI systems or that imminently will be present. This includes concerns like privacy, copyright issues, bias and fairness in the model’s outputs, factual accuracy, and the potential to generate misinformation or propaganda.

●  Medium-term risks are those we will face in two to three years. In that time period, Anthropic’s projections suggest that AI systems may become much better at science and engineering, to the point where they could be misused to cause large-scale destruction, particularly in the domain of biology. This rapid growth in science and engineering skills could also change the balance of power between nations.

●  Long-term risks relate to where AI is ultimately going. At present, most AI systems are passive and merely converse with users, but as AI systems gain more and more autonomy and ability to directly manipulate the external world, we may face increasing challenges in controlling them. There is a spectrum of problems we could face related to this, at the extreme end of which is concerns about whether a sufficiently powerful AI, without appropriate safeguards, could be a threat to humanity as a whole – referred to as existential risk. Left unchecked, highly autonomous, intelligent systems could also be misused or simply make catastrophic mistakes.

Note that there are some concerns, like AI’s effects on employment, that don’t fit neatly in one bucket and probably take on a different form in each time period.

Short-term risks are in the news every day and are certainly important. I expect we’ll have many opportunities to discuss these in this hearing, and much of Anthropic’s research applies immediately to those risks: our constitutional AI principles include attempts to reduce bias, increase factual accuracy, and show respect for privacy, copyright, and child safety. Our red-teaming is designed to reduce a wide range of these risks, and we have also published papers on using AI systems to correct their own biases and mistakes. There are a number of proposals already being considered by the Congress relating to these risks.

The long-term risks might sound like science fiction, but I believe they are at least potentially real. Along with the CEOs of other major AI companies and a number of prominent AI academics (including my co-witnesses Professors Russell and Bengio) I have signed a statement emphasizing that these risks are a challenge humanity should not neglect. Anthropic has developed evaluations designed to measure precursors of these risks and submitted its models to independent evaluators. And our work on interpretability is also designed to someday help with long-term risks. However, the abstract and distant nature of long-term risks makes them hard to approach from a policy perspective: our view is that it may be best to approach them indirectly by addressing more imminent risks that serve as practice for them.

The medium-term risks are where I would most like to draw the subcommittee’s attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology.

Thanks for linking Dario's testimony. I actually found this extract which was closer to answering my question:

I wanted to answer one obvious question up front: if I truly believe that AI’s risks are so severe, why even develop the technology at all? To this I have three answers:

First, if we can mitigate the risks of AI, its benefits will be truly profound. In the next few years it could greatly accelerate treatments for diseases such as cancer, lower the cost of energy, revolutionize education, improve efficiency throughout government, and much more. 

Second, relinquishing this technology in the United States would simply hand over its power, risks, and moral dilemmas to adversaries who do not share our values. 

Finally, a consistent theme of our research has been that the best mitigations to the risks of powerful AI often also involve powerful AI. In other words, the danger and the solution to the danger are often coupled. Being at the frontier thus puts us in a strong position to develop safety techniques (like those I’ve mentioned above), and also to see ahead and warn about risks, as I’m doing today.

I know this statement would have been massively pre-prepared for the hearing, but I don't feel super convinced by it:

On his point 1), such benefits have to be weighed up against the harms, both existential and not. But just as many parts of the xRisk story are speculative, so are many of the purported benefits from AI research. I guess Dario is saying 'it could' and not 'it will', but for me, if you want to "improve efficiency throughout government" you'll need political solutions, not technical ones.

Point 2) is the 'but China' response to AI Safety. I'm not an expert in US foreign policy strategy (funny how everyone is these days), but I'd note this response only works if you view the path to increasing capability as straightforward. It also doesn't work, in my mind, if you think there's a high chance of xRisk. Just because someone else might ignite the atmosphere doesn't mean you should too. I'd also note that Dario doesn't sound nearly as confident making this statement as he did talking about it with Dwarkesh recently.

Point 3) makes sense if you think the value of the benefits massively outweighs the harms, so that you solve the harms as you reap the benefits. But if those harms outweigh the benefits, or you incur a substantial "risk of ruin", then being at the frontier and expanding it further unilaterally makes less sense to me.

I guess I'd want the CEOs and those with power in these companies to actually be put under the scrutiny in the political sphere which they deserve. These are important and consequential issues we're talking about, and I just get the vibe that the 'kid gloves' need to come off a bit in terms of oversight and scrutiny/scepticism.

Yeah, I think the real reason is we think we're safer than OpenAI (and possibly some wanting-power but that mostly doesn't explain their behavior).

I haven't thought about this a lot, but I don't see big tech companies working with existing frontier AI players as necessarily a bad thing for race dynamics (compared to the counterfactual). It seems better than them funding or poaching talent to create a viable competitor that may not care as much about risk - I'd guess the question is how likely we'd expect them to be successful in doing so (given that Amazon is not exactly at the frontier now)?

From what I understand, Amazon does not get a board seat for this investment. Figured that should be highlighted. Seems like Amazon just gets to use Anthropic’s models and maybe make back their investment later on. Am I understanding this correctly? 

As part of the investment, Amazon will take a minority stake in Anthropic. Our corporate governance structure remains unchanged, with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy. As outlined in this policy, we will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems.

I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.

It seems pretty clear that Amazon's intent is to have state of the art AI backing Alexa. That alone would not be particularly concerning. The problem would be if Amazon has some leverage to force Anthropic to accelerate capabilities research and neglect safety - which is certainly possible, but it seems like Anthropic wants to avoid it by keeping Amazon as a minority investor and maintaining the existing governance structure.

Judging by the example of Microsoft owning a minority stake in OpenAI (and the subsequent rush to release Bing's Sydney/GPT-4), that's not exactly comforting.

I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.

Um, conditional on any AI labs being in a race, in what way are Anthropic not already racing?

Anthropic is small compared with Google and OpenAI+Microsoft.

I would, however, not downplay their talent density.

Ah, I thought you were implying that Anthropic weren't already racing when you were actually pointing at Amazon (a major company) joining the race. I agree that Anthropic is not a "major" company.

It seems pretty overdetermined to me that Amazon and Apple will join the race, either by acquiring a company or by reconfiguring teams/hiring. I'm a bit confused about whether I want it to happen now, or later. I'd guess later.
