Pronouns: she/her or they/them.
I got interested in EA back before it was called EA, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where EA can fit into my life these days and what it means to me.
I don't put much stock in the AI researchers' forecast that the graph is from. I see the skill of forecasting as very different from the skill of being a published AI researcher.
Then what was the point of quoting Sam Altman, Dario Amodei, and Demis Hassabis' timelines at the beginning of your article?
The section of the post titled "When do the 'experts' expect AGI to arrive?" suffers from a similar problem: downplaying expert opinion when it challenges the thesis and playing it up when it supports the thesis. What is the content and structure of this argument? It just feels like a restatement of your personal opinion.
I also wish people would stop citing Metaculus for anything. Metaculus is not a real prediction market. You can't make money on Metaculus. You might as well just survey people on r/singularity.
Oh, you’re right, I forgot about that. But then the body of the post makes a non-joking claim that agrees with the joke:
It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles.
So, is the title a joke or not? Maybe it’s a joke-y reference but the point it’s trying to make is actually sincere and not a joke.
Extrapolating from these somewhat ambiguous, open-to-interpretation, off-the-cuff comments to "a communications decision" just seems like jumping to conclusions.
And that’s disappointing because I was so excited to be mad!
People who believe the Singularity will happen before the Brisbane Olympics are way too mean to Yann LeCun. In this post, it feels to me like you’re quoting him to do a "gotcha" and not engaging with the real substance of his argument.
Do frontier AI models have a good understanding of things like causality, time, and the physics of everyday objects? Well, recently o3-mini told me that a natural disaster caused an economic downturn that happened a month before the disaster, so not really. No human would ever make that mistake.
Yann is doing frontier AI research and actually has sophisticated thinking about the limitations of current LLMs. He even has a research program aimed at eventually overcoming those limitations with new AI systems. I think his general point that LLMs do not understand a lot of things that virtually all adult humans understand is still correct. Understanding, to me, does not mean that the LLM can answer correctly one time, but that it gives correct answers reliably and does not routinely make ridiculous mistakes (like the one I mentioned).
I like the ARC-AGI-2 benchmark because it quantifies what frontier AI models lack. Ordinary humans off the street get an average of 60%, and every task has been solved by at least two ordinary humans in two attempts or fewer. GPT-4.5 gets 0.0%, o3-mini gets 0.0%, and every model that’s been tested gets under 5%. It’s designed to be challenging yet achievable for near-term AI models.
Benchmarks that reward memorizing large quantities of text have their place, but those benchmarks do not measure general intelligence.
I agree 100bn in 2029 is harder to square, but I think that's in part because OpenAI thinks investors won't believe higher figures.
So, OpenAI is telling the truth when it says AGI will come soon and lying when it says AGI will not come soon?
Sam Altman’s most recent timeline is "thousands of days", which is so vague. 2,000 days (the minimum "thousands of days" could mean) is 5.5 years. 9,000 days (roughly the point beyond which you’d expect him to just say "ten thousand days") is 24.7 years. So, 5-25 years?
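For reference, here’s the rough arithmetic behind those figures (assuming a ~365-day year):

$$\frac{2{,}000}{365} \approx 5.5 \text{ years} \qquad\qquad \frac{9{,}000}{365} \approx 24.7 \text{ years}$$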
When you survey AI experts or superforecasters about AGI, you tend to get dramatically more conservative opinions. One survey of AI experts found that the median expert assigns only a 50% probability to AI automating all human jobs by 2116. A survey of superforecasters found the median superforecaster assigns a 50% probability to AGI being developed by 2081.
Dario Amodei also said on Dwarkesh Patel’s podcast 1 year and 8 months ago that we would have something that sure sounded a lot like AGI in 2-3 years. Now, 1 year and 8 months later, it seems like he’s pushed that timeline back to 2-3 years from now. This is suspicious.
If you look at the history of Tesla and fully autonomous driving, there is an absurd situation in which Elon Musk has said pretty much every year from 2015 to 2025 that full autonomy is 1 year away, or that it will be solved "next year" or by the end of the current year. Based on this, I strongly suspect tech CEOs of updating their predictions so that an AI breakthrough is always the same amount of time away from the present moment, even as time progresses.
But also a disagree because it seems reasonable to be skittish around the subject ("AI Safety" broadly defined is the relevant focus; adding more would just set off an unnecessary news media firestorm).
Doesn’t this amount to an argument that the leaders at Anthropic should say whatever they think sounds good, rather than what’s true?
This seems like a single slightly odd sentence by Daniela, nothing else.
I think you have a good point there. When people are speaking off the cuff (like in an interview setting), they often misspeak or express themselves unclearly. If I saw multiple instances of Daniela Amodei saying similar things, then it would look a lot more like just straight-up dishonesty. As is, it’s hard to tell.
Now that you’ve raised this point, the title of this post feels dishonest. The title "Anthropic is not being consistently candid about their connection to EA" suggests a pattern. The post does not show a pattern.
The Wired article doesn't say exactly what the question was. I doubt the question was "Is Anthropic an effective altruist company?".