Buck

CEO @ Redwood Research
6594 karma · Joined · Working (6-15 years) · Berkeley, CA, USA

Comments (328)

As it says in the subtitle of the graph, it's the length of task at which models have a 50% success rate.

Buck
50% agree

I think increasing the value of good futures is probably higher in importance, but much less tractable.

I think you're maybe overstating how much more promising grad students are than undergrads for short-term technical impact. Historically, people without much experience in AI safety have often produced some of the best work. And it sounds like you're mostly optimizing for people who can be in a position to make big contributions within two years; I think that undergrads will often look more promising than grad students given that time window.

I agree with you that people seem to somewhat overrate getting jobs in AI companies.

However, I do think there's good work to do inside AI companies. Currently, a lot of the quality-adjusted safety research happens inside AI companies. And see here for my rough argument that it's valuable to have safety-minded people inside AI companies at the point where they develop catastrophically dangerous AI.

Buck

Tentative implications:

  • People outside of labs are less likely to have access to the very best models and will have less awareness of where the state of the art is.
  • Warning shots are somewhat less likely as highly-advanced models may never be deployed externally.
  • We should expect to know less about where we’re at in terms of AI progress.
  • Working at labs is perhaps more important than ever to improve safety and researchers outside of labs may have little ability to contribute meaningfully.
  • Whistleblowing and reporting requirements could become more important as without them government would have little ability to regulate frontier AI.
  • Any regulation based solely on deployment (which has been quite common) should be adjusted to take into account that the most dangerous models may be used internally long before they're deployed. 

For what it's worth, I think that the last year was an update against many of these claims. Open source models currently seem to be closer to state of the art than they did a year ago or two years ago. Currently, researchers at labs seem mostly in worse positions to do research than researchers outside labs.

I very much agree that regulations should cover internal deployment, though, and I've been discussing risks from internal deployment for years.

Buck

Well known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.

Buck

When we were talking about this in 2012 we called it the "poor meat-eater problem", which I think is clearer.

I think this is a very good use of time and encourage people to do it.
