jskatt

Pursuing an undergraduate degree
144 · Joined May 2021

Comments (72)

> AI Watch attempted a headcount of AI Safety researchers, which found 160 notable researchers who have worked on AI Safety.

Where did you find the "160 notable researchers" part?

I last checked the AI Watch database a few weeks ago and it seemed very bad. For example, it was missing Mark Xu, John Wentworth, Rebecca Gorman, Vivek Hebbar, and Quintin Pope, many of whom are MATS mentors! It also appears to be missing Conjecture, ARC, Apart Research, and GovAI as far as I can tell.

Given these flaws, it's strange that the AI Watch database is part of the most visible AI safety intro material. The official EA intro (Ben Todd's comment suggests he wrote this?) says there are ~300 people working on AI safety; it cites a few sources, including AI Watch. The 80k problem profile also estimates ~300; as far as I can tell, its only source is AI Watch.

Worth linking this video from Bentham's Bulldog, which critiques Sabine's video. It's nearly 90 minutes and sometimes optimizes for making fun of Sabine's video instead of giving a fair response, but it does have some good responses.

Will MacAskill wrote a Twitter thread about agreements + disagreements with Elon after Elon recommended WWOTF and said "this is a close match for my philosophy."

  1. What does it mean to buy "end time"? If an action results in a world with longer timelines, then what does it mean to say that the additional time comes "later"?
  2. What is "serial alignment research"? I'm struggling to distinguish it from "alignment research."
  3. Can you clarify the culture change you want to see? Is the idea that we should think of buying time as "better" than "(traditional) technical AI safety research and (traditional) community-building"?

Less-important comments:

  • Can you be more specific about "ODA+"? Does it include Meta and Google Brain, or only the most safety-conscious labs?
  • I'm confused why field building isn't listed as one of the key benefits other than buying time. Publishing more work like the goal misgeneralization paper would make field building more successful.
  • In the graph ("Figure 1"), I'm parsing "technical alignment" as "technical research on the core challenges of alignment that avoids shortening timelines," because the impact of talented researchers depends heavily on the projects they pursue.
  • PaLM would have been developed just as quickly if they had chosen not to publish the PaLM paper. (The sentence about PaLM is confusing.)

Your impact is $x$ if each of the $n$ alignment researchers contributes exactly $x$ alignment work per unit of time.
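To spell out the toy model I'm reading into this (my own notation; $n$ and $x$ are my labels, not necessarily the post's):

$$\text{total alignment work per unit of time} = n \cdot x, \qquad \text{your marginal contribution} = x$$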

Can you be more specific about "the bottleneck for buying time is social influence"?

Totally agree that intent alignment does basically nothing to solve misuse risks. To weigh the importance of misuse risks, we should consider (a) how quickly we go from AI to AGI, (b) whether the first group to deploy AGI will use it to prevent other groups from developing AGI, (c) how quickly we go from AGI to superintelligence, (d) how widely accessible AI will be to the public as it develops, (e) the destructive power of AI misuse at various stages of AI capability, etc.

> I increasingly get the sense that AI alignment as a field is defining itself so narrowly...

Paul Christiano's 2019 EAG-SF talk highlights that there are many other important subproblems within "make AI go well" besides intent alignment. Of course, Paul doesn't speak for "AI alignment as a field."

> If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, you might have a very tough time succeeding in these career paths as outlined by 80,000 Hours.

As a counterpoint, I think some of the most impactful roles are extremely neglected, so much so that even 80k might not have an article about them. A few AI safety field building projects come to mind when I backchain on preventing an AI catastrophe. And I think these projects require specialized skillsets more than they require the general competence that gets someone into Google / top half of Oxford.
