Buck

CEO @ Redwood Research
7063 karma · Working (6–15 years) · Berkeley, CA, USA

Comments (351)

I agree the survey underestimates the variance in AI timelines and risk.

The AI Futures Project, which is known for AI 2027, had super broad timelines to artificial superintelligence (ASI) as of January 26. The difference between the 10th and 90th percentiles was 168 years for Daniel Kokotajlo (2027 to 2195), and 137 years for Eli Lifland (2028 to 2165).

Note that that is a different kind of distribution (one person's beliefs) than the one reported here (many people's medians).

@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We'd love to hear topics that you'd like us to discuss. (The questions people proposed last time are here, for reference.) We're most likely to discuss issues related to AI, but a broad set of topics other than "preventing AI takeover" are on topic. E.g. last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic scale wild animal suffering.

There are many other risk assessment techniques out there; for reference, ISO 31010 lists 30 of them (see here), and even that list is far from exhaustive.

Almost nothing on the list you've linked is an alternative approach to the same problem safety cases try to solve. E.g. "brainstorming" is obviously not a competitor to safety cases. And safety cases are not even an item in that list!

I think EAs put way too much effort into thinking about safety cases compared to thinking about reducing risks on the margin in cases where risk is much higher (and willingness-to-pay for safety is much lower), because it seems unlikely that willingness-to-pay will be high enough that we'll have low risk at the relevant point. See e.g. here.


There's a social and professional community of Bay Area EAs who work on issues related to transformative AI. People in this cluster tend to have median timelines to transformative AI of 5 to 15 years, tend to think that AI takeover is 5–70% likely, and tend to think that we should be fairly cosmopolitan in our altruism.

People in this cluster mostly don't post on the EA Forum for a variety of reasons:

  • Many users here don't seem very well-informed.
  • Lots of users here disagree with me on some of the opinions about AI that I stated above. Obviously it's totally reasonable for people to disagree on those points, at least before they've heard the arguments. But it usually doesn't feel worth my time to argue about those points. I want to spend much more of my time discussing the implications of these basic beliefs than arguing about their probabilities. LessWrong is a much better place for this.
  • The culture here seems pretty toxic. I don't really feel welcome here. I expect people to treat me with hostility as a result of being moderately influential as an AI safety researcher and executive.

To be clear, I think it's a shame that the EA Forum isn't a better place for people like me to post and comment.

You can check for yourself that Bay Area EAs don't really want to post here by looking up prominent Bay Area EAs and noting that they commented here much more several years ago than they do today.

Anecdotally, the EA forum skews [...] more Bay Area.

For what it's worth, this is not my impression at all. Bay Area EAs (e.g. me) mostly consider the EA Forum to be very unrepresentative of their perspective, to the extent that it's very rarely worthwhile to post here (which is why they often post on LessWrong instead).

This is not an obscure topic. It's been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.

I like Claude's response a lot more than you do. I'm not sure why. I agree that it's a lot less informative than your response.

(The post including "This demographic has historically been disconnected from social impact" made me much less inclined to want this person to stick around.)

I feel like Claude's answer is totally fine. The original question seemed to me consistent with the asker having read literally nothing on this topic before asking; I think the content Claude provided adds value given that.

I'm glad to hear you are inspired by EA's utilitarian approach to maximizing social impact; I too am inspired by it and I have very much appreciated being involved with EA for the last decade.

I think you should probably ask questions as basic as this to AIs before asking people to talk to you about them. Here's what Claude responded with.

The observation about EA's demographic skew is accurate and widely acknowledged within the community. A few points worth making:

On the historical pattern: The claim that white, male, tech-focused demographics are "historically disconnected from social impact" isn't quite right - these demographics have been heavily involved in philanthropy and social reform movements throughout history (from industrialist philanthropy to the civil rights movement's diverse coalition). But the observation that EA specifically has a particular demographic concentration is valid.

Why this pattern exists: Several factors likely contribute:

  • EA grew out of academic philosophy and rationalist communities that had their own demographic patterns
  • The movement's early focus areas (AI safety, global poverty, animal welfare) and analytical approach appealed to certain demographics more than others
  • Network effects and social clustering naturally amplified initial patterns
  • Geographic concentration in places like the Bay Area and Oxford

On diversity efforts: EA organizations have made various attempts to broaden participation, though with mixed results. There are efforts around:

  • Outreach to different universities and regions
  • Scholarships and programs aimed at underrepresented groups
  • Discussion of how framing and culture might inadvertently exclude some people

The harder question: There's ongoing debate about whether demographic diversity is primarily valuable instrumentally (does it improve EA's thinking and impact?) or intrinsically (is it important regardless of instrumental benefits?). Different people in EA would answer this differently, and it connects to deeper questions about EA's core commitments and priorities.

Worth noting that some core EA principles (like cause impartiality and willingness to update beliefs based on evidence) might themselves be culturally specific in ways the movement doesn't always recognize.

I think that this post summarizes Will's position extremely inaccurately and unfairly.
