Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(The thread continues from there with more details; this seems like a major development!)
Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a "very serious issue," and lacking "any perspective at all."
This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketchy: these issues are hard to understand, so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc.) will likely lead to fewer bad takes, but, more importantly, it risks an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore; echo chambers are hard to escape from.