I'm interested in how many 2021 dollars you think it would be rational for EA to be willing to trade (or perhaps the equivalent in human capital) against 0.01% (1 basis point) of existential risk.
This question is potentially extremely decision-relevant for EA orgs doing prioritization, like Rethink Priorities. For example, suppose we assign $X to preventing 0.01% of existential risk, and we take Toby Ord's figures on existential risk (pg. 167, The Precipice) at face value. Then we should not prioritize asteroid risk (~1/1,000,000 risk this century) if all realistic interventions we can think of cost >>1% of $X, nor prioritize climate change (~1/1,000 risk this century) if realistic interventions cost >>$10X, at least on direct longtermist grounds (though there might still be neartermist or instrumental reasons for doing...
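To make the arithmetic explicit, here is a minimal sketch in Python. It assumes Ord's risk estimates and a willingness to pay of $X per basis point of existential risk averted; the function name and the $1B value for X are purely illustrative (X is the unknown this question is asking about), not figures from any source.

```python
# Minimal sketch: converting a per-century risk estimate into an upper bound
# on rational spending, given $X per basis point of x-risk averted.
# Assumptions: Ord's risk figures; X = $1B is a placeholder, not a real estimate.

BASIS_POINT = 0.0001  # 0.01% of existential risk

def max_rational_spend(risk_this_century: float, dollars_per_basis_point: float) -> float:
    """Upper bound on what it is worth spending to fully eliminate a given risk."""
    basis_points_of_risk = risk_this_century / BASIS_POINT
    return basis_points_of_risk * dollars_per_basis_point

X = 1e9  # hypothetical: $1B per basis point
print(max_rational_spend(1e-6, X))  # asteroid risk (~1/1,000,000): $10M, i.e. 1% of X
print(max_rational_spend(1e-3, X))  # climate change (~1/1,000): $10B, i.e. 10X
```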
The classic definition comes from Bostrom:
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
But this definition, while poetic and gesturing at something real, is more than a bit vague, and many people are unhappy with it, judging from the long chain of clarifying questions in my linked question. So I'm interested in proposed alternatives that the EA community and/or leading longtermist or x-risk researchers may wish to adopt instead.
Alternative definitions should ideally be precise, clear, unambiguous, and hopefully not too long.
Inspired by Yonatan's post here.
I'm very much early-career myself (finished undergrad in 2019). I've interned at Google and Uber (both South Bay), worked at Citadel (Chicago), and am currently at Scale AI (San Francisco). My EA experience includes facilitating UCLA EA's Arete Fellowship (Fall '20), facilitating Stanford's AI Safety reading group (Fall '20), and being a Tianxia Fellow (2021). My mentorship experience includes at least six 1:1 college mentees and five years of sporadic 1:1 tutoring.
I enjoy mentoring, and meeting more EAs in this way seems like fun!
If you're new to the EA Forum, consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you have something to share that doesn't feel like a full post, add it here!
(You can also create a Shortform post.)
Open threads are also a place to share good news, big or small. See this post for ideas.
This is a linkpost for https://sashachapin.substack.com/p/your-intelligent-conscientious-in
I really liked the piece. It resonated with my experiences in EA. I don't know that I agree with the mechanisms Sasha proposes, but I buy a lot of the observations they're meant to explain.
I asked Sasha for his permission to post this (and heavily quote it). He said that he hopes it comes off as more than a criticism of EA/rationality specifically--it's more a "general nerd social patterns" thing. I only quoted parts very related to EA, which doesn't help assuage his worry :(
There's more behind the link :)
So, I’ve noticed that a significant number of my friends in the Rationalist and Effective Altruist communities seem to stumble into pits of despair, generally when they structure their lives too rigidly around...
This post discusses multiple issues relating to the way the EA movement is perceived (ranging from common misconceptions to unjustified strong opinions against EA) and suggests alternatives to the ways we describe EA.
Since I don’t have the resources to quantify this problem, I rely on my personal experience as a community builder, and that of many other community builders, and explain the rationale behind my suggestions.
Around 2013, several mass media articles about EA (1, 2, 3) - specifically about Earning To Give - were published. These articles clearly missed most of the nuances behind Earning To Give and heavily misrepresented the idea.
In light of such events, the EA movement at that time faced a critical question:
Should we stay away from mass media?
The answer the...
TL;DR: Please comment with pain points you have or know about that might be solved by software developers.
Have a low bar: If it's related to EA or LessWrong, and someone would probably pay $100 to solve it, please write it. For example, maybe there's an annoying task you'd like to automate? Or a Twitter bot you wish existed?
Why I'm asking: I suspect there are existing needs in our community, but no easy way to surface them. I hope that commenting here will be easy and inviting enough to bridge some of that gap. On the other side, I think there are software developers who might help.
Inspiration: Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits by Ozzie Gooen, EA Communication Project Ideas by Ben West.