The article is here (note that the Washington Post is paywalled[1]). The headline[2] is "How elite schools like Stanford became fixated on the AI apocalypse," subtitled "A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project." It's by Nitasha Tiku. 

Notes on the article: 

  • The article centers on how concerns about existential risk from AI have become a bigger topic of discussion in some communities, especially on campuses. The main example is Stanford. 
  • It also talks about: 
    • EA (including recent scandals)
    • Funding for work on alignment and AI safety field-building (particularly for university groups and fellowships)
    • Whether or not extinction/existential risk from AI is plausible in the near future (sort of in passing)
  • It features comments from: 
    • Paul Edwards, a Stanford University fellow who, in the article's words, has "spent decades studying nuclear war and climate change" and "considers himself 'an apocalypse guy.'" He developed a freshman course on human extinction and generally focuses on pandemics, climate change, nuclear winter, and advanced AI. (He's also a faculty co-director of SERI.)
    • Gabriel Mukobi, a Stanford graduate who organized a campus AI safety group
    • And in brief: 
      • Timnit Gebru (very briefly)
      • Steve Luby, an epidemiologist and professor of medicine and infectious disease who is Edwards's teaching partner for the class on human extinction and the other faculty co-director of SERI (very briefly)
      • Open Philanthropy spokesperson Mike Levine (pretty briefly)

I expect that some folks on the Forum might have reactions to the article — I might share some in the comments later, but I just want to remind people about the Forum norms of civility.

  1. ^

    Up to some number of free articles per month

  2. ^

    My understanding is that journalists don't generally choose their headlines. Someone should correct me in the comments if this is wrong! 

Comments

I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.

 

A few related thoughts we wanted to share:

  • The figure “nearly half a billion dollars” accurately describes our total spending in AI over the last eight years, if you think of EA and existential risk work as being somewhat but not entirely focused on AI — which seems fair. However, only a small fraction of that funding (under 5%) went toward student-oriented activities like groups and courses. 
  • The article cites the figure “as much as $80,000 a year” for what student leaders might receive. This figure is prorated: a student who takes a gap year to work full-time on organizing, in an expensive city, might get up to $80,000, but most of our grants to organizers are for much lower amounts. 
  • Likewise, while the figure “up to $100,000” is mentioned for group expenses, nearly all of the groups we fund have much lower budgets.
    • This rundown, shared with us by one American organizer, is a good example of a typical budget: ~$2800 for food at events, ~$1500 for books and other reading material, ~$200 for digital services (e.g. Zoom), and ~$3000 for the group’s annual retreat.
  • Regarding the idea, mentioned in the story, that AI safety is a distraction from present-day harms like algorithmic bias:
    • As Mike said in the piece, we think present-day harms deserve a robust response.
    • But just as concerns about catastrophic climate change scenarios like large-scale sea level rise are not seen as distractions from present-day climate harms like adverse weather events, we don’t think concerns about catastrophic AI harms distract from concerns about present-day harms. 
    • In fact, they can be mutually reinforcing. Harms like algorithmic bias are caused in part by the difficulty of getting AI systems to behave as their designers intend, which is the same thing that could lead to more extreme harms. Some of the same guardrails may work for everything on that continuum. In that sense, researchers and advocates working on AI safety and AI ethics are pulling in the same direction: toward policies and guardrails that protect society from these novel and growing threats.

 

We also want to express that we are very excited by the work of groups and organizers we’ve funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it’s encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, like from a future pandemic). These are sorely neglected areas, and we hope that today’s undergraduates and graduate students will become tomorrow’s researchers, governance experts, and advocates for safer systems.

For a few examples of what students and academics in the article are working on, we recommend:

I ask ChatGPT to analyse the text's sentiment. 
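(For anyone who wants to reproduce this programmatically rather than through the ChatGPT web interface, here is a minimal sketch using the OpenAI Python client. The model name, file name, and exact prompt wording are placeholders, not necessarily what I actually used.)

```python
# Minimal sketch: ask a chat model for the sentiment of an article.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in the
# environment, and the article text has been saved locally as article.txt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("article.txt", encoding="utf-8") as f:
    article_text = f.read()

question = "What is the text's sentiment re effective altruism?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat-completions model works
    messages=[{"role": "user", "content": f"{question}\n\n{article_text}"}],
)

print(response.choices[0].message.content)
```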

What is the text's sentiment re effective altruism?

The sentiment towards effective altruism (EA) in the provided text is primarily skeptical and critical. The text associates effective altruism with the AI safety movement, describing them as intertwined. It suggests that EA has been recruiting ambitious young people on elite college campuses, but questions the movement's focus, methods, and associations.

Here are a few points from the text that highlight the skeptical sentiment toward effective altruism:

  1. The text mentions that EA initially focused on global poverty but then shifted its focus to rogue AI, with the implication that this shift may not be entirely grounded in current, practical concerns. This suggests a lack of consistency in the movement's goals.
  2. The text notes that extreme practitioners within EA began promoting an idea called “longtermism,” which is portrayed as prioritizing the lives of hypothetical future beings over present-day suffering. This is presented as a rather abstract and potentially disconnected concern.
  3. The text mentions that EA is associated with controversial figures and ideas, including a scandal involving one of its largest donors, Bankman-Fried, and a backlash against Nick Bostrom, an influential figure in EA, for controversial past writings. This suggests that the movement is connected to individuals and ideas that have faced criticism.
  4. The phrase “EA Kool-Aid” is used in the text, a colloquial term that implies that EA might be likened to a cult where followers are expected to accept beliefs without critical examination.

Overall, the sentiment toward effective altruism in the text is that it is a movement with questionable focus, controversial associations, and methods that may be seen as cult-like or ungrounded.

What is the text's sentiment re the field of AI safety? 

The sentiment towards the field of AI safety in the provided text is largely mixed with a tinge of skepticism. While the text acknowledges the importance and legitimacy of AI safety as a field, it appears to criticize its association with effective altruism and the extreme scenarios some proponents focus on.

Here are some points from the text that illustrate the sentiment towards AI safety:

  1. The text acknowledges that AI safety is a “legitimate field”, indicating that there is recognition of its relevance and importance.
  2. The description of the “superintelligent AI uprising” being an “extremely improbable scenario” reflects skepticism towards certain extreme concerns within the field. It suggests that there may be an exaggerated focus on unlikely outcomes.
  3. The association of AI safety with effective altruism in the text might imply that some aspects of AI safety are criticized for being too abstract or speculative, as effective altruism is portrayed with skepticism, particularly when it shifted focus to long-term, more abstract concerns like rogue AI.
  4. The text's mention of AI safety researchers not taking themselves too seriously suggests an underlying criticism or questioning of the magnitude of certain claims or concerns within the field.

In summary, the text portrays AI safety as an important and legitimate field, but questions the focus on highly speculative scenarios and criticizes its close association with the effective altruism movement, which is portrayed skeptically. The text also seems to suggest a need for a more grounded and realistic approach within the AI safety community.

Do a sentiment analysis of the text

The text provided can be broken down into various elements in terms of sentiment:

Neutral-Positive sentiment towards Paul Edwards and Steve Luby: The text introduces Paul Edwards and Steve Luby in a neutral light and showcases their dedication and positive intentions in educating young minds about potential threats to humanity.

Neutral sentiment regarding Artificial Intelligence (AI) risks: The text discusses AI risks and how the concept has gained traction over time. It doesn't take a side, rather presents the evolving perception of AI.

Positive sentiment towards efforts to counter rogue AI: The text illustrates a positive sentiment towards various organizations, philanthropists, and labs (such as DeepMind, OpenAI, and Anthropic) that are taking steps to prevent AI-related catastrophes by promoting AI safety and developing benign AI systems.

Negative sentiment towards the AI safety movement by critics: The text highlights criticism of the AI safety movement as unscientific and potentially distracting from other important issues. The movement is presented as potentially exaggerative and is likened to a religion by critics.

Positive sentiment towards student involvement in AI safety: The article shows positive sentiment towards the involvement of students, especially Gabriel Mukobi, in AI safety, and how they are becoming increasingly aware and motivated to act against the potential existential risks associated with AI.

Negative sentiment regarding controversial figures and ideas associated with the AI safety movement: The text shows a negative sentiment towards certain figures and ideas associated with the AI safety movement, including Effective Altruism (EA) and past controversial statements by Nick Bostrom.

Neutral-Positive sentiment towards the increasing acknowledgment of AI safety: Towards the end, the article discusses how AI safety is becoming a topic of growing interest and concern among academia and industry. This is presented in a neutral to positive light, indicating the recognition of AI safety as an area that requires attention.

Mixed sentiment among educators: The article ends with mixed sentiments among educators like Luby, who seems to be in agreement with the concerns, and Edwards, who acknowledges AI safety but still sees other issues such as climate change as more immediate threats.

In summary, the text provides a balanced view, containing neutral, positive, and negative sentiments regarding the topic of AI safety and its growing prominence in academia and the tech industry. There is a general positive sentiment towards proactive measures and student involvement, but also a significant negative sentiment related to the criticisms and controversial aspects of the AI safety movement.

Minor note, but

My understanding is that journalists don't generally choose their headlines. Someone should correct me in the comments if this is wrong! 

I think this is true, but also a distraction. When somebody critiques inaccurate/clickbaity/bombastic/generally bad headlines, what they usually care about is the low quality of the headline qua headline, not the internals of how some news corporation allocates their job duties.

Seems "within tolerance". Like I guess I would nitpick some stuff, but does it seem egregiously unfair? No. 

And in terms of tone, it's pretty supportive.

in terms of tone, it's pretty supportive

That's not my read? It starts by establishing Edwards as a trusted expert who pays attention to serious risks to humanity, and then contrasts this with students who are "focused on a purely hypothetical risk". Except the areas Edwards is concerned about ("autonomous weapons that target and kill without human intervention") are also "purely hypothetical", as is anything else that could wipe out humanity.

I read it as an attempt to present the facts accurately but with a tone that is maybe 40% along the continuum from "unsupportive" to "supportive"? Example word choices and phrasings that read as unsupportive to me: "enthralled", emphasizing that the outcome is "theoretical", the fixed-pie framing of "prioritize the fight against rogue AI over other threats", emphasizing Karnofsky's conflicts of interest in response to a blog post that pre-dates those conflicts, bringing up the Bostrom controversy that isn't really relevant to the article, and "dorm-room musings accepted at face value in the forums". But it does end on a positive note, with Luby (the alternative expert) coming around, Edwards in between, and an official class on it at Stanford.

Overall, instead of thinking of the article as trying to be supportive or not, I think it's mostly trying to promote controversy?
