@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We'd love to hear topics that you'd like us to discuss. (The questions people proposed last time are here, for reference.) We're most likely to discuss issues related to AI, but a broad set of topics other than "preventing AI takeover" are on topic. E.g. last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic-scale wild animal suffering.
There are many other risk assessment techniques out there; for reference, ISO 31010 lists 30 of them (see here), and that list is far from exhaustive.
Almost nothing on the list you've linked is an alternative approach to the same problem safety cases try to solve. E.g. "brainstorming" is obviously not a competitor to safety cases. And safety cases are not even an item in that list!
I think EAs put way too much effort into thinking about safety cases compared to thinking about reducing risks on the margin in cases where risk is much higher (and willingness-to-pay for safety is much lower), because it seems unlikely that willingness-to-pay will be high enough that we'll have low risk at the relevant point. See e.g. here.
There's a social and professional community of Bay Area EAs who work on issues related to transformative AI. People in this cluster tend to have median timelines to transformative AI of 5 to 15 years, tend to think that AI takeover is 5-70% likely, and tend to think that we should be fairly cosmopolitan in our altruism.
People in this cluster mostly don't post on the EA Forum for a variety of reasons:
To be clear, I think it's a shame that the EA Forum isn't a better place for people like me to post and comment.
You can check for yourself that Bay Area EAs don't really want to post here by looking up examples of prominent Bay Area EAs and noting that they commented here much more several years ago than they do today.
Anecdotally, the EA forum skews [...] more Bay Area.
For what it's worth, this is not my impression at all. Bay Area EAs (e.g. me) mostly consider the EA Forum to be very unrepresentative of their perspective, to the extent that it's very rarely worthwhile to post here (which is why they often post on LessWrong instead).
This is not an obscure topic. It's been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.
I like Claude's response a lot more than you do. I'm not sure why. I agree that it's a lot less informative than your response.
(The post including "This demographic has historically been disconnected from social impact" made me much less inclined to want this person to stick around.)
I'm glad to hear you are inspired by EA's utilitarian approach to maximizing social impact; I too am inspired by it and I have very much appreciated being involved with EA for the last decade.
I think you should probably ask questions as basic as this to AIs before asking people to talk to you about them. Here's what Claude responded with.
The observation about EA's demographic skew is accurate and widely acknowledged within the community. A few points worth making:
On the historical pattern: The claim that white, male, tech-focused demographics are "historically disconnected from social impact" isn't quite right - these demographics have been heavily involved in philanthropy and social reform movements throughout history (from industrialist philanthropy to the civil rights movement's diverse coalition). But the observation that EA specifically has a particular demographic concentration is valid.
Why this pattern exists: Several factors likely contribute:
- EA grew out of academic philosophy and rationalist communities that had their own demographic patterns
- The movement's early focus areas (AI safety, global poverty, animal welfare) and analytical approach appealed to certain demographics more than others
- Network effects and social clustering naturally amplified initial patterns
- Geographic concentration in places like the Bay Area and Oxford
On diversity efforts: EA organizations have made various attempts to broaden participation, though with mixed results. There are efforts around:
- Outreach to different universities and regions
- Scholarships and programs aimed at underrepresented groups
- Discussion of how framing and culture might inadvertently exclude some people
The harder question: There's ongoing debate about whether demographic diversity is primarily valuable instrumentally (does it improve EA's thinking and impact?) or intrinsically (is it important regardless of instrumental benefits?). Different people in EA would answer this differently, and it connects to deeper questions about EA's core commitments and priorities.
Worth noting that some core EA principles (like cause impartiality and willingness to update beliefs based on evidence) might themselves be culturally specific in ways the movement doesn't always recognize.
Note that that is a different kind of distribution (one person's beliefs) than the one reported here (many people's medians).