CS student at the University of Southern California. Previously worked for three years as a data scientist at a fintech startup. Before that, four months on a work trial at AI Impacts. Currently working with Professor Lionel Levine on language model safety research.
I think it's noteworthy that surveys from 2016, 2019, and 2022 have all found roughly similar timelines to AGI (50% by ~2060) for the population of published ML researchers. On the other hand, the EA and AI safety communities seem much more focused on short timelines than they were seven years ago (though I don't have a source on that).
The adversarial Turing test seems like an odd definition to forecast on. Nuno's linked blogpost makes one side of the argument well: there could be ways to identify an AI as different from a human long after AI becomes economically transformative or capable of taking over the world. On the other side, AI that passes an adversarial Turing test could still fail to have economic impact (perhaps because of regulation, or because it's too expensive to replace human labor) or fail to pose a meaningful existential risk (because it isn't goal-directed, isn't misaligned, or can't overpower humanity).
I'd be more interested in your forecasts on a few other operationalizations of AI timelines:
Your thinking on these questions has been pretty persuasive to me, especially Nuno's recent blog and Eli Lifland's writeup thinking through the full case. It's nice to get a perspective that's just a bit outside the constant AI hype bubble. But these forecasts felt a bit less informative than they could otherwise be, because they're driven by edge cases around the definition. Curious whether you disagree about the importance of those edge cases, or think other forecasting targets have important flaws of their own.
Op-eds in the NYT and WaPo about threats to discourse and democracy from ChatGPT. Both cite your example, though neither links your paper, perhaps out of infohazard concerns. Looks like your concerns are gaining traction.
I wonder if a degree of randomization would help. Instead of showing the top 10 posts on the front page, show a new sample of the top 50 to each user. Then the bonus given to new posts could shrink, and there would be more nudges to continue engaging with something over the course of a week or month.
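Roughly what I have in mind, as a minimal sketch (the function and parameter names are just illustrative, and I'm assuming posts arrive already sorted by the existing ranking score):

```python
import random

def sample_front_page(ranked_posts, pool_size=50, page_size=10):
    """Build a per-user front page by sampling from the top-ranked pool.

    ranked_posts: posts already sorted by the site's ranking score, best first.
    Instead of always showing the fixed top `page_size`, draw a fresh random
    sample from the top `pool_size`, so each visit can resurface older posts.
    """
    pool = ranked_posts[:pool_size]
    return random.sample(pool, k=min(page_size, len(pool)))
```

A weighted sample that still favors higher-ranked posts would probably work better in practice, but even uniform sampling over the top 50 would resurface week-old posts without needing a large bonus for newness.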
I spent about an hour today trying to convince a friend who works in private equity that OpenAI is undervalued at $30B. I pitched him on short AI timelines and transformative growth, and he didn’t disagree with those arguments directly. He mostly questioned whether OpenAI would reap the benefits of short timelines. A few of the points:
IMO these are boring economic arguments that don’t refute the core thesis of short timelines or AI risk. OpenAI is getting a similar valuation to Grammarly, which also sells an LLM product, but with worse tech and better marketing. It’s being valued on short-term revenue prospects more than on considerations about TAI timelines.
Also ML Safety Scholars: https://course.mlsafety.org/
And probably a course in deep learning where you write code in PyTorch.
That’s a good argument, I think I agree.
What kinds of amendments to lobbying disclosure laws could be made? Is it practical to require disclosure of LLM use in lobbying when detection is not yet reliable? Is disclosure even enough, or would it be necessary to ban LLM lobbying entirely? I assume this would need to be a new law passed by Congress rather than an FEC rule — do you know whether any similar legislation is being, or has been, considered?
Thanks for sharing. I have a friend who's in the Marines and loves his animal meat, but he found this funny, and it persuaded him that lobsters can feel pain.
Very interesting stuff. I'd be wary of a Streisand effect: calling attention to the danger of AI-powered corporate lobbying might prompt someone to build AI for corporate lobbying. Your third section clearly explains the risks of such a plan, but it might not be heeded by those excited about AI lobbying.
I trust your judgement on this, but I think the Community section might be more fitting. This post is mainly about whether FTX money that was supposedly being spent on pandemic preparedness was instead going to candidates who would further enrich FTX. Plenty of people (myself included) have lowered the visibility of Community posts on their frontpage, but those who are interested in SBF's potential corruption would probably still want this on theirs. The real discussion here is about SBF's potential dishonesty, not about any of the four topics outlined in the policy: