The article is here (note that the Washington Post is paywalled[1]). The headline[2] is "How elite schools like Stanford became fixated on the AI apocalypse," subtitled "A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project." It's by Nitasha Tiku.
Notes on the article:
- The article centers on how AI existential safety concerns became more of a discussion topic in some communities, especially on campuses. The main example is Stanford.
- It also talks about:
- EA (including recent scandals)
- Funding for work on alignment and AI safety field-building (particularly for university groups and fellowships)
- Whether or not extinction/existential risk from AI is plausible in the near future (sort of in passing)
- It features comments from:
- Paul Edwards, a Stanford University fellow who "spent decades studying nuclear war and climate change" and "considers himself 'an apocalypse guy'". He developed a freshman course on human extinction, which focuses on pandemics, climate change, nuclear winter, and advanced AI. (He's also a faculty co-director of SERI.)
- Gabriel Mukobi, a Stanford graduate who organized a campus AI safety group
- And in brief:
- Timnit Gebru (very briefly)
- Steve Luby, an epidemiologist and professor of medicine and infectious disease, Edwards's teaching partner for the class on human extinction, and the other faculty co-director of SERI (very briefly)
- Open Philanthropy spokesperson Mike Levine (pretty briefly)
I expect that some folks on the Forum will have reactions to the article (I might share some of mine in the comments later), but I just want to remind people about the Forum norms of civility.
I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.
A few related thoughts we wanted to share:
We also want to say that we're very excited about the work of the groups and organizers we've funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it's encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, such as a future pandemic). These are sorely neglected areas, and we hope that today's undergraduates and graduate students will become tomorrow's researchers, governance experts, and advocates for safer systems.
For a few examples of what students and academics in the article are working on, we recommend:
I asked ChatGPT to analyse the article's sentiment. (A rough sketch of how this kind of query could be scripted is at the end of this comment.)
What is the text's sentiment re effective altruism?
The sentiment towards effective altruism (EA) in the provided text is primarily skeptical and critical. The text associates effective altruism with the AI safety movement, describing them as intertwined. It suggests that EA has been recruiting ambitious young people on elite college campuses, but questions the movement's focus, methods, and associations.
Here are a few points from the text that highlight the skeptical sentiment toward effective altruism:
Overall, the sentiment toward effective altruism in the text is that it is a movement with questionable focus, controversial associations, and methods that may be seen as cult-like or ungrounded.
What is the text's sentiment re the field of AI safety?
The sentiment towards the field of AI safety in the provided text is largely mixed with a tinge of skepticism. While the text acknowledges the importance and legitimacy of AI safety as a field, it appears to criticize its association with effective altruism and the extreme scenarios some proponents focus on.
Here are some points from the text that illustrate the sentiment towards AI safety:
In summary, the text portrays AI safety as an important and legitimate field, but questions the focus on highly speculative scenarios and criticizes its close association with the effective altruism movement, which is portrayed skeptically. The text also seems to suggest a need for a more grounded and realistic approach within the AI safety community.
Do a sentiment analysis of the text
The text provided can be broken down into various elements in terms of sentiment:
Neutral-Positive sentiment towards Paul Edwards and Steve Luby: The text introduces Paul Edwards and Steve Luby in a neutral light and showcases their dedication and positive intentions in educating young minds about potential threats to humanity.
Neutral sentiment regarding Artificial Intelligence (AI) risks: The text discusses AI risks and how the concept has gained traction over time. It doesn't take a side, rather presents the evolving perception of AI.
Positive sentiment towards efforts to counter rogue AI: The text illustrates a positive sentiment towards various organizations, philanthropists, and labs (such as DeepMind, OpenAI, and Anthropic) that are taking steps to prevent AI-related catastrophes by promoting AI safety and developing benign AI systems.
Negative sentiment towards the AI safety movement by critics: The text highlights criticism of the AI safety movement as unscientific and potentially distracting from other important issues. The movement is presented as potentially exaggerative and is likened to a religion by critics.
Positive sentiment towards student involvement in AI safety: The article shows positive sentiment towards the involvement of students, especially Gabriel Mukobi, in AI safety, and how they are becoming increasingly aware and motivated to act against the potential existential risks associated with AI.
Negative sentiment regarding controversial figures and ideas associated with the AI safety movement: The text shows a negative sentiment towards certain figures and ideas associated with the AI safety movement, including Effective Altruism (EA) and past controversial statements by Nick Bostrom.
Neutral-Positive sentiment towards the increasing acknowledgment of AI safety: Towards the end, the article discusses how AI safety is becoming a topic of growing interest and concern among academia and industry. This is presented in a neutral to positive light, indicating the recognition of AI safety as an area that requires attention.
Mixed sentiment among educators: The article ends with mixed sentiments among educators like Luby, who seems to be in agreement with the concerns, and Edwards, who acknowledges AI safety but still sees other issues such as climate change as more immediate threats.
In summary, the text provides a balanced view, containing neutral, positive, and negative sentiments regarding the topic of AI safety and its growing prominence in academia and the tech industry. There is a general positive sentiment towards proactive measures and student involvement, but also a significant negative sentiment related to the criticisms and controversial aspects of the AI safety movement.
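For anyone who wants to reproduce or vary this, here's a minimal sketch of how the same questions could be sent programmatically. It assumes the OpenAI Python client and an `OPENAI_API_KEY` environment variable; the model name, prompt format, and `article_text` placeholder are illustrative assumptions, not a record of what was actually run.

```python
# Minimal sketch: send the article text plus each sentiment question to a chat model.
# Assumes the OpenAI Python client (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_text = "..."  # paste the article text here

questions = [
    "What is the text's sentiment re effective altruism?",
    "What is the text's sentiment re the field of AI safety?",
    "Do a sentiment analysis of the text",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": f"{question}\n\n{article_text}"}],
    )
    print(question)
    print(response.choices[0].message.content)
    print()
```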
Minor note, but
I think this is true, but also a distraction. When somebody critiques inaccurate/clickbaity/bombastic/generally bad headlines, what they usually care about is the low quality of the headline qua headline, not the internals of how some news corporation allocates their job duties.
Seems "within tolerance". Like I guess I would nitpick some stuff, but does it seem egregiously unfair? No.
And in terms of tone, it's pretty supportive.
That's not my read? It starts by establishing Edwards as a trusted expert who pays attention to serious risks to humanity, and then contrasts this with students who are "focused on a purely hypothetical risk". Except the areas Edwards is concerned about ("autonomous weapons that target and kill without human intervention") are also "purely hypothetical", as is anything else that could wipe out humanity.
I read it as an attempt to present the facts accurately, but with a tone that is maybe 40% along the continuum from "unsupportive" to "supportive"? Example word choices and phrasings that read as unsupportive to me:
- "enthralled"
- emphasizing that the outcome is "theoretical"
- the fixed-pie framing of "prioritize the fight against rogue AI over other threats"
- emphasizing Karnofsky's conflicts of interest in response to a blog post that predates those conflicts
- bringing up the Bostrom controversy, which isn't really relevant to the article
- "dorm-room musings accepted at face value in the forums"

But it does end on a positive note, with Luby (the alternative expert) coming around, Edwards in between, and an official class on the topic at Stanford.
Overall, instead of thinking of the article as trying to be supportive or not, I think it's mostly trying to promote controversy?