We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.

When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed). 

However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own. 

If there are a lot of comments, consider sorting by “New” and interacting with comments that haven’t been voted or commented on yet.
Also, perhaps don’t vote karma below zero for low-effort submissions; we don’t want to discourage low-effort takes on the banner.

  1. ^

     ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.

  2. ^

     ‘our’ and 'we' = earth-originating intelligent life (i.e. we aren’t just talking about humans because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing)

  3. ^

     Through means other than extinction risk reduction.  

Comments (142)


David Hammerle
1
0
0
57% agree

Yes, because it seems like extinction or near-extinction is a major possibility.

Peter Wildeford
16
5
1
43% disagree

I don't think this is as clear of a dichotomy as people think it is. A lot of global catastrophic risk doesn't come from literal extinction because human extinction is very hard. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.

Matt Boyd
3
1
0
79% agree

More tractable, necessary precondition 

MarkLee
6
2
0
57% disagree

Avoiding extinction seems bad to me if we never get our shit together morally-civilizationally, and good otherwise. Conditional on survival, I'm not sure of the likelihood of us getting our shit together to be a force of good in the universe, mostly because I'm uncertain how AGI will play out.

Hawk.Yang 🔸
6
1
2
36% agree

I think "increasing value of futures where we survive" is broad enough that plenty of non-EA stuff like just foreign aid or governance reform stuff generally would count and X-Risk stuff is very specific and niche.

9
Maxime Riché 🔸
There is a misunderstanding: "Increasing the value of futures where we survive" is an X-risk reduction intervention. See the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ which clarifies that the debate is between Extinction-Risks and Alignment-Risks (AKA increasing the value of the future), both of which are X-risks. The debate is not between X-risks and Alignment-Risks. One of the most impactful ways to "increase the value of futures where we survive" is to work on AI governance and technical AI alignment.

Reducing x risk is much less tractable than EAs think

Flo 🔸
1
0
0
29% agree

It's easy to be wrong about which futures are more valuable, but it's pretty clear that extinction closes off all future options of creating value.

Jelle Donders
3
0
0
14% disagree

I'm leaning to disagree because existential risks are a lot broader than extinction risk.

If the question replaced 'extinction' with 'existential' and 'survive' with 'thrive' (retain most value of the future), I would lean towards agree!

zdgroff
10
2
1
36% disagree

The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., Spreading happiness to the stars seems little harder than just spreading), but the historical/contemporary record is ambiguous.

The value of improving the future seems more robustly positive if it is tractable. I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk satisfies this goal as well as the x-risk goal for reasons Will MacAskill gives in What We Owe the Future. Understanding, developing direct interventions for, and designing political processes for digital minds seem like plausible candidates. Some work on how to design democratic institutions in the age of AI also seems plausibly tractable enough to compete with extinction risk.

Owen Cotton-Barratt
10
1
2
86% disagree

To some extent I reject the question as not-super-action-guiding (I think that a lot of work people do has impacts on both things).

But taking it at face value, I think that AI x-risk is almost all about increasing the value of futures where "we" survive (even if all the humans die), and deserves most attention. Literal extinction of earth-originating intelligence is mostly a risk from future war, which I do think deserves some real attention, but isn't the main priority right now.

2
Robi Rahman
Hope I'm not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
4
Toby Tremlett🔹
I think Owen is voting correctly, Robi - he disagrees that there should be more work on extinction reduction before there is more work on improving the value of the future. (To complicate this, he understands working on AI x-risk as mostly about increasing the value of the future, because, in his view, it isn't likely to lead to extinction.) Apologies if the "agree"/"disagree" labelling is unclear - we're thinking of ways to make it more parsable.
2
Robi Rahman
Ah yes I get it now. Thanks!
2
Toby Tremlett🔹
No worries!
2
Owen Cotton-Barratt
This is right. But to add even more complication:

  • I think most AI x-risk (in expectation) doesn't lead to human extinction, but a noticeable fraction does
  • But a lot even of the fraction that leads to human extinction seems to me like it probably doesn't count as "extinction" by the standards of this question, since it still has the earth-originating intelligence which can go out and do stuff in the universe
  • However, I sort of expect people to naturally count this as "extinction"? Since it wasn't cruxy for my rough overall position, I didn't resolve this last question before voting, although maybe it would get me to tweak my position a little.

Most people have a strong drive to perpetuate humanity. What makes EAs special is that EAs also care about others' suffering. So EAs should focus on trying to make sure the future isn't full of suffering.

I'm a negative utilitarian, meaning I believe reducing suffering is the only thing that truly matters. Other things are important only in so far as they help to reduce suffering. I'm open to debate on this ethical view.

Given this premise:

  • There's nothing bad about extinction itself.
  • There is, however, a lot of bad if we live in a future full of suffering.

Therefore, we should focus on increasing the quality of life of living beings, rather than simply prolonging our existence.

Bella
9
5
0
29% agree

I agreed, but mostly because of my unfortunately-dim view of the tractability of work increasing the value of futures where we survive.

2
Bella
By the way, it looks like there might be some problem with the Forum UI here, as this post has some text suggesting that, since writing this comment, I changed my mind from "29% agree" to "14% agree." But I haven't intentionally changed my vote on the top banner, or changed my mind.
2
Toby Tremlett🔹
That's odd - to me I'm still just seeing your first vote. I'll send this to Will to check out. 
2
Will Howard🔹
Thanks for flagging this @Bella! There was a bug which made it update everyone's comment whenever anyone changed their vote 🤦‍♂️. This is fixed now
2
NickLaing
Me too Bella, see my comment above
WilliamKiely
8
2
1
57% agree

Footnote 2 completely changes the meaning of the statement from common-sense interpretations of it. It makes it so that, e.g., a future scenario in which AI takes over and causes existential catastrophe and the extinction of biological humans this century does not count as extinction, so long as the AI continues to exist. As such, I chose to ignore it with my "fairly strongly agree" answer.

2
Toby Tremlett🔹
Thanks - yep, I think this is becoming a bit of an issue (it came up a couple of times in the symposium as well). I might edit the footnote to clarify - worlds with morally valuable digital minds should be included as a non-extinction scenario, but worlds where an AI which could be called "intelligent life" but isn't conscious/morally valuable takes over and humans become extinct should count as an extinction scenario.
2
Owen Cotton-Barratt
Ughh ... baking judgements about what's morally valuable into the question somehow doesn't seem ideal. Like I think it's an OK way to go for moral ~realists, but among anti-realists you might have people persistently disagreeing about what counts as extinction. Also like: what if you have a world which is like the one you describe as an extinction scenario, but there's a small amount of moral value in some subcomponent of that AI system. Does that mean it no longer counts as an extinction scenario? I'd kind of propose instead using the typology Will proposed here, and making the debate between (1) + (4) on the one hand vs (2) + (3) on the other.

I don't think it's valuable to ensure future moral patients exist for their own sake, and extinction risk reduction only really seems to expectably benefit humans who would otherwise die in an extinction event, who would be in the billions. An astronomical number of future moral patients could have welfare at stake if we don't go extinct, so I'd prioritize them on the basis of their numbers.

See this comment and thread.

2
MichaelStJules
When I edited this comment, it removed my vote percentage from it.

I genuinely have no idea. 

Tax Geek
2
0
0
86% disagree

I think humans will go extinct at some point, so reducing extinction risk just kicks the can down the road. 

On a selfish level, I don't want humans to go extinct anytime soon. But on an impartial level, I don't really care whether humans go extinct, say, 500 years from now vs 600. I don't subscribe to the Total View of population ethics, so I don't place moral value on the "possible lives that could have existed" in those extra 100 years.

2
David Mathers🔸
The total view is not the only view on which future good lives starting has moral value. You can also think that if you believe in (amongst other things):

  • Maximizing average utility across all people who ever live, in which case future people coming into existence is good if their level of well-being is above the mean level of well-being of the people before them.
  • A view on which adding happy lives gets less and less valuable the more happy people have lived, but never reaches zero. (Possibly helpful with avoiding the repugnant conclusion.)
  • A view like the previous one on which both the total amount of utility and how fairly it is distributed matter, so that more utility is always in itself better, and so adding happy people is always intrinsically good in itself, but a population with less total utility but a fairer distribution of utility can sometimes be better than a population with more utility, less fairly distributed.

This isn't just nitpicking: the total view is extreme in various ways that the mere idea that happy people coming into existence is good is not.

Also, even if you reject the view that creating happy people is intrinsically valuable, you might want to ensure there are happy people in the future just to satisfy the preferences of current people, most of whom probably have at least some desire for happy descendants of at least one of their family/culture/humanity as a whole, although it is true that this won't get you the view that preventing extinction is astronomically valuable.
1
Tax Geek
Thanks for the considered response. You're right that the Total View is not the only view on which future good lives have moral value (though that does seem to be the main one bandied about). Perhaps I should have written "I don't subscribe to the idea that adding happy people is intrinsically good in itself" as I think that better reflects my position — I subscribe to the Person-Affecting View (PAV).

The reason I prefer the PAV is not because of the repugnant conclusion (which I don't actually find "repugnant") but more the problem of existence comparativism — I don't think that, for a given person, existing can be better or worse than not existing.

Given my PAV, I agree with your last point that there is some moral value to ensuring happy people in the future, if that would satisfy the preferences of current people. But in my experience, most people seem to have very weak preferences for the continued existence of "humanity" as a whole. Most people seem very concerned about the immediate impacts on those within their moral circle (i.e. themselves and their children, maybe grandchildren), but not that much beyond that. So on that basis, I don't think reducing extinction risk will beat out increasing the value of futures where we survive.

To be clear, I don't have an objection to the extinction risk work EA endorses that is robustly good on a variety of worldviews (e.g. preventing all-out nuclear war is great on the PAV, too). But I don't have a problem with humans or digital minds going extinct per se. For example, if humans went extinct because of declining fertility rates (which I don't think is likely), I wouldn't see that as a big moral catastrophe that requires intervention.
2
David Mathers🔸
"I don't think that, for a given person, existing can be better or worse than not existing. "  Presumably even given this, you wouldn't create a person who would spending their entire life in terrible agony, begging for death. If that can be a bad thing to do even though existing can't be worse than not existing, then why can't it be a good thing to create happy people, even though existing can't be better than not existing? 
1
Tax Geek
No, but I think the reason many people, including myself, have a strong procreation asymmetry is that we recognise that, in real life, two things are separate: (1) creating a person; (2) making that person happy. I disagree that (1) alone is good. At best, it is neutral. I agree that (2) is good.

If I were to create a child and abandon it, I do not think that is better than not creating the child in the first place. That is true even if the child ends up being happy, for whatever reason (e.g. it ends up being adopted by someone who ends up being a great parent).

In contrast, it is indeed possible to create a child who would spend their entire life in agony. In fact, if I created a child and did nothing more, that child's life would likely be miserable and short. So I see any asymmetric preference to avoid creating unhappy lives, without wanting to create happy lives, as entirely reasonable.

Moreover, I do not think moral realism is correct and see different views of population ethics as being subjective. They depend on each person's intrinsic values. And no intrinsic values are logical. Logic can help you find ways to achieve your intrinsic values. But it cannot tell you what your intrinsic values should be. Logic is a powerful tool, but it has limits. I think it is important to recognise where logic can help—and where it can't.
Charlie Harrison
7
6
0
64% disagree

If we treat digital minds like current animal livestock, the expected value of the future could be really bad. 

Mjreard
6
1
0
21% agree

I think people overrate how predictable the effect of our actions on the future will be (even though they rate it very low in absolute terms); extinction seems like one of the very few (only?) things whose effects will endure throughout a big part of the future. I still buy the theory that going from 0% to 1% of possible value is as valuable as going from 98% to 99%; this is just about tractability.

Derek Shiller
6
0
2
93% disagree

People in general, and not just longtermist altruists, have reason to be concerned with extinction. It may turn out not to be a problem or not be solvable and so the marginal impact seems questionable here. In contrast, few people are thinking about how to navigate our way to a worthwhile future. There are many places where thoughtful people might influence decisions that effectively lock us into a trajectory.

2
OllieBase
This might be true on the kinds of scales EAs are thinking about (potentially enormous value, long time horizons), but is it not the case that many people want to steer humanity in a better direction? E.g. the Left, environmentalists, libertarians, ... ~all political movements? I worry EAs think of this as some unique and obscure thing to think about, when it isn't. (On the other hand, people neglect small probabilities of disastrous outcomes.)
2
Derek Shiller
Lots of people think about how to improve the future in very traditional ways. Assuming the world keeps operating under the laws it has been for the past 50 years, how do we steer it in a better direction? I suppose I was thinking of this in terms of taking radical changes from technology development seriously, but not in the sense of long timelines or weird sources of value. Far fewer people are thinking about how to navigate a time when AGI becomes commonplace than are thinking about how to get to that place, even though there might not be a huge window of time between them.
quinn
3
1
0
50% ➔ 57% agree

I roughly feel more comfortable passing the responsibility onto wiser successors. I still like the "positive vs negative longtermism" framework; I think positive longtermism (increasing the value of futures where we survive) risks value lock-in too much. Negative longtermism is a clear-cut responsibility with no real downside, unless you're presented with a really tortured example about spending currently existing lives to buy future lives or something.

Buck
5
0
1
50% agree

I think increasing the value of good futures is probably higher importance, but much less tractable

3
Maxime Riché 🔸
I am curious about the lower tractability. Do you think that changing the moral values/goals of the ASIs Humanity would create is not a tractable way to influence the value of the future?  If yes, is that because we are not able to change them, or because we don't know which moral values to input, or something else?  In the second case, what about inputting the goal of figuring out which goals to pursue ("long reflection")?
Pascal Costa
*2
1
0
43% ➔ 7% disagree

Intuitively, I don't see the point of perpetuating humanity if it means lives full of suffering.
After reading arguments on the other side, I feel much more uncertain.
Indeed, it will be hard to fix value issues without any humans (based on the fact that we are the only species that thinks about moral issues).

1
JoA🔸
If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (though of course he's far from alone in having argued this; it's just the first post that comes to mind).

Two major reasons/considerations:
1. I'm unconvinced of the tractability of non-extinction-risk-reducing longtermist interventions.
2. Perhaps this is self-defeating, but I feel uncomfortable substantively shaping the future in ways that aren't merely making sure it exists. Visions of the future that I would have found unobjectionable a century ago would probably seem bad to me today. In short, this consideration is basically "moral uncertainty". I think extinction-risk reduction is, though not recommended on every moral framework, at least recommended on most. I haven't seen other ideas for shaping the future which are as widely recommended.

4
Maxime Riché 🔸
I am curious about (1)  Do you think that changing the moral values/goals of the ASIs Humanity would create is not a tractable way to influence the value of the future?  If yes, is that because we are not able to change them, or because we don't know which moral values to input, or something else?  In the second case, what about inputting the goal of figuring out which goals to pursue ("long reflection")?
7
Toby Tremlett🔹
I think yes, and for all the reasons. I'm a bit sceptical that we can change the values ASIs will have - we don't understand present models that well, and there are good reasons not to treat how a model outputs text as representative of its goals (it could be hallucinating, it could be deceptive, its outputs might just not be isomorphic to a reward structure). And even if we could, I don't know of any non-controversial value to instill in the ASI that isn't just included in basic attempts to control the ASI (which I'd be doing mostly for extinction-related reasons).
Nicholas Decker
1
0
0
21% ➔ 29% disagree

Tepidly disagree — I think the technological developments, like AI, which would raise the spectre of extinction are far more contingent than we would like to believe.

Nonexistence is preferable to intense suffering, and I think there are enough S-risks associated with the array of possible futures ahead of us that we should prioritize reducing S-risks over X-risks, except when reducing X-risks is instrumental to reducing S-risks. So to be specific, I would only agree with this to the extent that "value" == lack of suffering -- I do not think we should build for the utopia that might not come to pass because we wipe ourselves out first, just that it is vastly more important to prevent dystopia 

Maxtandy
1
0
0
79% disagree

Essentially the Brian Kateman view: civilisation's valence seems massively negative due to farmed animal suffering. This is only getting worse despite people being able to change right now. There's a very significant chance that people will continue to prefer animal meat, even if cultured meat is competitive on price etc. "Astronomical suffering" is a real concern.

Leo Wu
1
0
0
14% disagree

I have to push back against the premise, as the dichotomy seems a bit too forced. There are different ways to ensure survival, such as billionaires building survival shelters and hoarding resources vs. collective international efforts to solve our biggest problems. Working on better futures also usually involves creating more resilient institutions that will be better suited to preventing extinction; we don't just magically pop into a future.

We can adjust the risk per unit of reward or the reward per unit of risk. 

In the absence of credible, near-term, high-likelihood existential risks and in the absence of being path-locked on an existential trajectory, I would rather adjust the reward per unit of risk. 

I also suspect that the most desirable paths to improving the value of futures where we survive will come with a host of advancements that allow us to more effectively combat risks anyway. Yes, I'm sure there are some really dumb ways to improve the value of futures, such that we're ... (read more)

OscarD🔸
4
1
0
29% disagree

I think there are a lot of thorny definitional issues here that make this set of issues not boil down that nicely to a 1D spectrum. But overall extinction prevention will likely have a far broader coalition supporting it, while making the future large and amazing is far less popular since most people aren't very ambitious with respect to spreading flourishing through the universe, but I tentatively am.

Aidan Alexander
5
3
2
79% disagree

Far from convinced that continued existence at currently likely wellbeing levels is a good thing

Linch
3
0
0
29% agree

mostly because of tractability rather than any other reason

John Salter
4
1
8
29% ➔ 71% disagree

The far future, on our current trajectory, seems net negative on average. Reducing extinction risk just multiplies its negative EV. 

4
Mo Putera
Have you written elsewhere on why you think the far future seems net negative on average on our current trajectory?
1
Greg_Colbourn ⏸️
On this view, why not work to increase extinction risk? (It would be odd if doing nothing was the best course of action when the stakes are so high either way.)
2
John Salter
It'd be hard to do without breaking a lot of good heuristics (e.g. don't lie, don't kill people)
4
MichaelStJules
You could defend the idea that extinction risk reduction is net negative or highly ambiguous in value, even just within EA and adjacent communities. Convincing people to not work on things that are net negative by your lights seems not to break good heuristics or norms.
JoA🔸
4
3
0
43% disagree

This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete. 

It seems to me for now that existential risk reduction is likely to be negative, as both human and AI-controlled futures could contain immense orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood... (read more)

Darren McKee
2
0
0
0% agree

Question seems like a false dichotomy.
For example, democracy promotion sure seems super important right now, and it would help both described causes (and it isn't clear which would be helped more).

MatthewDahlhausen
2
0
0
64% disagree

The salient question for me is how much does reducing extinction risk change the long run experience of moral patients? One argument is that meaningfully reducing risk would require substantial coordination, and that coordination is likely to result in better worlds. I think it is as or more likely that reducing extinction risk can result in some worlds where most moral patients are used as means without regard to their suffering.

I think an AI aligned to roughly to the output of all current human coordination would be net-negative. I would shift to thinkin... (read more)

I think that without knowing people's assessment of extinction risk (e.g. chance of extinction over the next 5, 10, 20, 50, 100 years)[1], the answers here don't provide a lot of information value. 

I think a lot of people on the disagree side would change their mind if they believed (as I do) that there is a >50% chance of extinction in the next 5 years (absent further intervention).

Would be good if there was a short survey to establish such background assumptions to people's votes.

  1. ^

    And their assessment of the chance that AI successors will be mora

... (read more)

I'm optimistic about the very best value-increasing research/interventions. But in terms of what would actually be done at the margin, most work that people would do for "value-increasing" reasons would be confused/doomed, I expect (and this is less true for AI safety).

niplav
3
1
0
71% ➔ 64% agree

Under moral uncertainty, many moral perspectives care much more about averting downsides than producing upsides.

Additionally, tractability is probably higher for extinction-level threats, since they are "absorptive"; decreasing the chance we end up in one gives humanity and its descendants the ability to do whatever they figure out is best.

Finally, there is a meaningful sense in which working on improving the future is plagued by questions about moral progress and lock-in of values, and my intuition is that most interventions that take moral progress serious... (read more)

Matthew_Barnett
*3
1
1
71% disagree

In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.

0
Pablo
Extremely unlikely to happen... when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.
2
Matthew_Barnett
In my comment I later specified "in [the] next century" though it's quite understandable if you missed that. I agree that eventual extinction of Earth-originating intelligent life (including AIs) is likely; however, I don't currently see a plausible mechanism for this to occur over time horizons that are brief by cosmological standards. (I just edited the original comment to make this slightly clearer.)
4
Pablo
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about "premature" extinction). On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973th century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. Then the question becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed "over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to prevent from destruction?
2
Matthew_Barnett
I tentatively agree with your statement that,

That said, I still suspect the absolute probability of total extinction of intelligent life during the 21st century is very low. To be more precise, I'd put this probability at around 1% (to be clear: I recognize other people may not agree that this credence should count as "extremely low" or "very low" in this context). To justify this statement, I would highlight several key factors:

1. Throughout hundreds of millions of years, complex life has demonstrated remarkable resilience. Since the first vertebrates colonized land during the late Devonian period (approximately 375–360 million years ago), no extinction event has ever eradicated all species capable of complex cognition. Even after the most catastrophic mass extinctions, such as the end-Permian extinction and the K-Pg extinction, vertebrates rebounded. Not only did they recover, but they also surpassed their previous levels of ecological dominance and cognitive complexity, as seen in the increasing brain size and adaptability of various species over time.

2. Unlike non-intelligent organisms, intelligent life—starting with humans—possesses advanced planning abilities and an exceptional capacity to adapt to changing environments. Humans have successfully settled in nearly every climate and terrestrial habitat on Earth, from tropical jungles to arid deserts and even Antarctica. This extreme adaptability suggests that intelligent life is less vulnerable to complete extinction compared to other complex life forms.

3. As human civilization has advanced, our species has become increasingly robust against most types of extinction events rather than more fragile. Technological progress has expanded our ability to mitigate threats, whether they come from natural disasters or disease. Our massive global population further reduces the likelihood that any single event could exterminate every last human, while our growing capacity to detect and neutralize threats makes us
[anonymous]
3
0
1
79% disagree

It's a tractability issue. In order for these interventions to be worth funding, they should reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.

jboscw
3
1
0
93% agree

Very worried about AI risk, think short timelines are plausible

Parker_Whitfill
2
1
0
29% disagree

Roughly buy that there is more "alpha" in making the future better because most people are not longtermist but most people do want to avoid extinction. 

Matrice Jacobine
3
1
0
1
36% ➔ 29% disagree

Recent advances in LLMs have led me to update toward believing that we live in the world where alignment is easy (i.e. CEV naturally emerge from LLMs, and future AI agents will be based on understanding and following natural language commands by default), but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock in humanity in a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold war mongering, and an increase in military conflicts including wars of aggression, isn't conducive to robust multilateral governance).

Tejas Subramaniam
3
1
0
21% ➔ 7% agree

I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence drives technological innovation to solve a lot of world problems (e.g., helping create cell-based meat eventually), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures—they could just as easily contain ... (read more)

I think human extinction (from ASI) is highly likely to happen, and soon, unless we stop it from being built[1]

See my comments in the Symposium for further discussion

  1. ^

    And that the ASI that wipes us out won't matter morally (to address footnote 2 on the statement)

By reducing the chances of our extinction, we could also address other threats and priorities, such as virus control, nuclear weapons, animal preservation and welfare, and more sustainable ways of living that have less impact on our habitat. We need to take care of the home we have today.

Warren H
1
0
0
43% disagree

Survival doesn't in and of itself amount to a meaningful life. Generally, there ought to be a sweet spot between the two. If taken to the extreme, all resources that do not contribute to the subsistence of life are wasted. I don't agree that we ought to live that way, and I think most people would support that conclusion.

Andreas Jessen🔸
1
0
0
7% ➔ 29% disagree

This is just an intuition of mine, and not thoroughly researched, but it seems to me that if we consider all sentient beings, there are many possible futures in which the average well-being would be below neutral, and some of them, especially for non-human animals, would be quite easily preventable. This leads me to believe that marginal resources are currently better invested in preventing future suffering than in reducing the risk of extinction.

mlovic
1
0
0
64% agree

I think we have relatively more leverage over the probability of near-term extinction than over the value of the entire post-counterfactual-extinction future.

poat
1
0
0
57% disagree

p(doom in 20 years) ~= 0.005

Alex Catalán Flores
1
0
0
50% disagree

I'm not convinced that a marginal resource, especially a funder, can move the needle on existential risk to a degree greater than or equivalent to the positive change that same resource would have on reducing suffering today.

cb
1
0
0
57% agree

Tractability + something-like-epistemic-humility feel like cruxes for me; I'm surprised they haven't been discussed much. Preventing extinction is good by most lights, specific interventions to improve the future are much less clearly good, and I feel much more confused about what would have lasting effects.

trevor1
1
0
0
0% agree

I haven't read enough about this yet, and I need to shrink the gap between me and others who've read a lot about this by, like, 3 OOMs or something.

bn__
1
0
0
29% disagree

Preserving option-value probably isn't enough, because we may fail to consider better futures if we aren't actively thinking about how to realise them.

I really don't have a strong view, but I find myself sympathetic to the idea that the world is not great and is getting worse (if you consider non-human animals).

Broadly Agree

Although I might have misunderstood and missed the point of this entire debate, so correct me if that is the case

I just don't believe changing the future trajectory is tractable in, say, 50-100 years from now, in areas like politics, economics, AI welfare, etc. I think it's a pipe dream. We cannot predict technological, political and economic changes even in the medium-term future. These changes may well quickly render our current efforts meaningless in 10-20 years. I think the effect of work we do now which is future-focused diminishes in value... (read more)

Robi Rahman
2
0
0
93% agree

On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.

(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)

Manuel Allgaier
2
0
0
43% disagree

Depends, if x-risk is small (<5%) and if we expect outsized impact on the future (preventing negative value lock-ins), then the latter seems more important. I'm very unsure about both. 

OllieBase
2
0
0
36% agree

It seems plausible to me we might be approaching a "time of perils" where total x-risk is unacceptably high and will continue to be as we develop powerful AI systems, but might decrease later since we can use AI systems to tackle x-risk (though that seems hard and risky in its own myriad ways).

Broadly think we should still prioritise avoiding catastrophes in this phase, and bet on being able to steer later but low confidence.

Ozzie Gooen
2
0
0
29% agree

I have mixed feelings here. But one major practical worry I have about "increasing the value of futures" is that a lot of that looks fairly zero-sum to me. And I'm scared of getting other communities to think this way. 

If we can capture 5% more of the universe for utilitarian aims, for example, that's 5% less from others. 

I think it makes sense for a lot of this to be studied in private, but am less sure about highly public work.

finm
2
0
0
64% disagree

Partly this is because I think “extinction” as defined here is very unlikely (<<1%) to happen this century, which upper bounds the scale of the area. I think most “existential risk” work is not squarely targeted at avoiding literal extinction of all Earth-originating life.

Angelina Li
2
0
0
14% disagree

I feel extremely unsure about this one. I'm voting slightly against purely from the perspective of, "wow, there are projects in that direction that feel super neglected".

Nathan Young
2
0
0
21% agree

I dunno, by how much? Seems contingent on lots of factors. 

If we avoid extinction, plenty of people will have the time to take care of humanity's future. I'll leave it to them. Both topics have a lot of common ground anyway, like "not messing up the biosphere" or "keeping control of ASI".

1
Maxime Riché 🔸
Intelligent life extinction could be prevented by creating a misaligned AI locking-in bad moral values, no? Maybe see the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ
Tom Gardiner
2
2
1
57% disagree

The long-running future seems like it could well be unacceptably awful. From the perspective of a battery hen, it would seem much better that its distant ancestors were pushed out of an ecological niche before humanity domesticated them. Throwing all our effort into X-risk mitigation without really tackling S-risks in a world of increasing lock-in across domains seems deeply unwise to me.

I find it very difficult to determine whether the future will be net-negative or net-positive (when considering humans, factory-farmed animals, wild animals, and possibly artificial sentience). 
This makes it very hard to know whether work on extinction reduction is likely to be positive or not.
I prefer to work on things that aim to move the sign towards "net-positive".

JackM
2
0
1
36% disagree

This is a question I could easily change my mind on.

The experience of digital minds seems to dominate far future calculations. We can get a lot of value from this, a lot of disvalue, or anything in between.

If we go extinct then we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It’s hard to say if we are on track to creating them to flourish or suffer - I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which ... (read more)

1
Dylan Richardson
I agree about digital minds dominating far future calculations; but I don't think your expectation that it is equally likely that we create suffering minds is reasonable. Why should we think suffering to be specially likely? "Using" them means suffering? Why? Wouldn't maximal usefulness entail, if any experience at all, one of utter bliss at being useful?  Also, the pleasure/suffering asymmetry is certainly a thing in humans (and I assume other animals), but pleasure does dominate, at least moment-to-moment. Insofar as wild animal welfare is plausibly net-negative, it's because of end-of-life moments and parasitism, which I don't see a digital analog for. So we have a biological anchor that should incline us toward the view utility dominates.  Moral circle expanding should also update us slightly against "reducing extinction risk being close to zero". And maybe, by sheer accident, we create digital minds that are absolutely ecstatic! 
2
JackM
Well the closest analogue we have today is factory farmed animals. We use them in a way that causes tremendous suffering. We don't really mean to cause the suffering, but it's a by product of how we use them. And another, perhaps even better, analogue is slavery. Maybe we'll end up essentially enslaving digital minds because it's useful to do so - if we were to give them too much freedom they wouldn't as effectively do what we want them to do. Creating digital minds just so that they can live good lives is a possibility, but I'd imagine if you would ask someone on the street if we should do this, they'd look at you like you were crazy. Again, I'm not sure how things will pan out, and I would welcome strong arguments that suffering is unlikely, but it's something that does worry me.
1
Dylan Richardson
That's true - but the difference is that both animals and slaves are sub-optimal; even our modern, highly domesticated food stock doesn't thrive in dense factory farm conditions, nor willingly walks into the abattoir. And an ideal slave wouldn't really be a slave, but a willing and dedicated automaton. By contrast, we are discussing optimized machines - less optimized would mean less work being done, more resource use and less corporate profit. So we should expect more ideal digital servants (if we have them at all). A need to "enslave" them suggests that they are flawed in some way. The dictates of evolution and nature need not apply here.  To be clear, I'm not entirely dismissing the possibility of tormented digital minds, just the notion that they are equally plausible.
2
JackM
You’re basically saying happier machines will be more productive and so we are likely to make them to be happy? Firstly we don’t necessarily understand consciousness enough to know if we are making them happy, or even if they are conscious. Also, I’m not so sure if happier means more productive. More computing power, better algorithms and more data will mean more productive. I’m open to hearing arguments why this would also mean the machine is more likely to be happy.  Maybe the causality goes the other way - more productive means more happy. If machines achieve their goals they get more satisfaction. Then maybe happiness just depends on how easy the goals we give it is. If we set AI on an intractable problem and it never fulfills it maybe it will suffer. But if AIs are constantly achieving things they will be happy.  I’m not saying you’re wrong just that it seems there’s a lot we still don’t know and the link between optimization and happiness isn’t straightforward to me.
Seth Herd
1
0
1
43% agree

We do not have adequate help with AGI x-risk, and the societal issues demand many skillsets that alignment workers typically lack. Surviving AGI and avoiding s-risk far outweigh all other concerns by any reasonable utilitarian logic. 

Arturo Macias
2
1
0
79% agree

Intelligence is the only chance of some redemption for the massive suffering probably associated with the emergence of consciousness.

This is the age of danger: we are the first species on Earth that has figured out morality, so we shall survive at almost all cost.

Alistair Stewart
1
1
1
93% disagree

Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness because it is more morally important to mitigate suffering than to create happiness.

3
Greg_Colbourn ⏸️
Wait, how does your 93% disagree tie in with your support for PauseAI?
3
Alistair Stewart
  • I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including but not exclusively s-risk) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI
  • However, couching demands for an AGI moratorium in terms of "reducing x-risk" rather than "reducing suffering" seems
    • More robust to the kind of backfire risk that suffering-focused people at e.g. CLR are worried about
    • More effective in communicating catastrophic AI risk to the public
Trym Braathen🔸
1
1
1
29% disagree

What is the point of securing a future for humanity if that future is net-negative?

2
Greg_Colbourn ⏸️
Even if it seems net-negative now, we don't know that it always will be (and we can work to make it net-positive!). Also, on this view, why not work to increase our chance of extinction?
Nicholas Kees Dupuis
1
1
0
71% disagree

Survival feels like a very low bar to me. 

Survival could mean the permanent perpetuation of extreme suffering, human disempowerment, or any number of losses of our civilization's potential.

Tom Billington
1
0
1
43% disagree

Have some level of scepticism that the total "util count" from life on earth will be net-positive. I'm also in general wary of impact that is too long-term speculative. 

It seems to me that extinction is the ultimate form of lock-in, while surviving provides more opportunities to increase the value of the future. This moves me very far toward Agree. It seems possible, however, that there could be futures that rely on actions today that are so much better than the alternatives that it could be worth rolling worse dice, or futures so bad that extinction could be preferable, so this brings me back a bit from very high Agree.

On the margin: I think we are not currently well-equipped to determine whether actions are or aren't i... (read more)

Stephen McAleese
1
0
0
43% agree

I think avoiding existential risk is the most important thing. As long as we can do that and don't have some kind of lock in, then we'll have time to think about and optimize the value of the future.

1
Maxime Riché 🔸
Right. How can we prevent a misaligned AI from locking in bad values?  A misaligned AI surviving takeover counts as "no extinction", see the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ
Jim Chapman
1
0
0
14% disagree

I lean toward more work on improving conditions if we survive, but noting that you have to survive to benefit.

JonathanSalter
1
0
0
57% agree

Given cluelessness, pulling off a trajectory shift seems way less robust and less likely to succeed.

We could devote 100% of currently available resources to existential risk reduction, live in austerity, and never be finished ensuring our own survival. However, if we increase the value of futures where we survive, we will develop more and more resources that can then be put toward existential risk reduction. People will be not only happier, but also more capable and skilled, when we create a world where people can thrive rather than just survive. The highest-quality futures are the most robust.

1
Dylan Richardson
Edit: I misinterpreted the prompt initially (I think you did too); "value of futures where we survive" is meant specifically as "long-run futures, past transformative AI", not just all futures including the short term. So digital minds, suffering risk, etc. Pretty confusing!

This argument seems pretty representative here, so I'll just note that it is only sensible under two assumptions:

1. Transformative AI isn't coming soon - say, not within ~20 years.
2. If we are assuming a substantial amount of short-term value is in indirect preparation for TAI, this excludes many interventions which primarily have immediate returns, with possible long-term returns accruing past the time window. So malaria nets? No. Most animal welfare interventions? No. YIMBYism in Silicon Valley? Maybe yes. High-skilled immigration? Maybe yes. Political campaigns? Yes.

Of course, we could just say either that we actually aren't all that confident about TAI, or that we are, but immediate welfare concerns simply outweigh marginal preparation or risk reduction. So either reject something above, or simply go all in on principle toward portfolio diversification. But both give me some pause.
Nithin Ravi🔸
1
0
0
50% disagree

Not sure that existence is net positive in expectation, but improving futures seems more robust!

tylermjohn
1
0
0
36% disagree

I'm compressing two major dimensions into one here:

  • Expected value (I think EV of maxevas is vastly higher than EV of maxipok)
  • Robustness (the case for maxevas is highly empirically theory laden and values dependent)
3
Robi Rahman
What is maxevas? Couldn't find anything relevant by googling.
Cameron Holmes
1
0
0
71% ➔ 50% disagree

AI NotKillEveryoneism is the first order approximation of x-risk work.
 

I think we probably will manage to make enough AI alignment progress to avoid extinction. AI capabilities advancement seems to be on a relatively good path (less foomy), and AI safety work is starting to make real progress on avoiding the worst outcomes (although a new RL paradigm or illegible/unfaithful CoT could make this more scary).

Yet gradual disempowerment risks seem extremely hard to mitigate, very important and pretty neglected. The AI Alignment/Safety bar for good outcomes c... (read more)

3
Greg_Colbourn ⏸️
What makes you think this? Every technique there is is statistical in nature (due to the nature of the deep learning paradigm); none are even approaching 3 9s of safety, and we need something like 13 9s if we are going to survive more than a few years of ASI. I also don't see how it's less foomy. SWE-bench and ML researcher automation are still improving - what happens when the models are drop-in replacements for top researchers? What is the eventual end result after total disempowerment? Extinction, right?
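As a rough illustration of how those "nines" compound (a sketch, assuming a constant, independent failure probability $p$ per high-stakes event, where an "event" could be a year of deployment or an individual ASI decision; the unit is an assumption, not something specified above):

$$P(\text{survive } n \text{ events}) = (1-p)^n \approx e^{-np}$$

With 3 nines ($p = 10^{-3}$), survival odds fall to roughly $e^{-1} \approx 37\%$ after only $n = 1{,}000$ events; keeping survival above $\sim 90\%$ across $n \sim 10^{12}$ events would require $p \lesssim 10^{-13}$, i.e. roughly 13 nines.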
1
Cameron Holmes
The gap between weak AGI and strong AGI/ASI timeline predictions seems to have ticked up a bit. It doesn't seem like the intra-token reasoning/capabilities is scaling as hard as I'd previously feared. The models themselves are not getting so scarily capable and agentic in each forward pass, instead we are increasingly eliciting those capabilities/agency in context with the models remaining myopic and largely indifferent.  If the new paradigm holds with a significant focus on scaling inference it seems to both be less aggressive (in terms of scaling intelligence) and more conducive to 'passing' safety. The current paradigm likely places a much lower burden on hard interpretability than I expected ~1 year ago, it feels much more like a verification problem than a full solve. With current rates of interpretability progress (and AI accelerating safety ~inline with capabilities) we could actually be able to verify that a CoT is faithful and legible and that might be ~sufficient. Agreed, I still think there's a reasonable chance that ML research does fall within the set of capabilities that quickly reach superhuman levels and foom is still on the cards, also more RL in general is just inherently quite scary.  The 9s of safety makes sense from a control perspective but I think there's another angle, which is the possibility of a model that is aligned-enough to actually not want to pursue human extinction. Potentially, but I think there's still room for scenarios where humans are broadly disempowered yet not extinct - worlds where we get a passing grade on safety. Where we effectively avoid strongly-agentic systems and achieve sufficient alignment such that human lives are valued, but fall short of the full fidelity necessary for a flourishing future. Still this point has updated me slightly, I've reduced my disagreement. My model looks something like this: There are a bunch of increasingly hard questions on the Alignment Test. We need to get enough of the core que
2
Greg_Colbourn ⏸️
I think the bonus/extra credit questions are part of the main test - if you don't get them right everyone still dies, but maybe a bit more slowly. All the doom flows through the cracks of imperfect alignment/control. And we can asymptote toward, but never reach, existential safety[1].

  1. ^

     Of course this applies to all other x-risks too. It's just that ASI x-risk is very near term and acute (in absolute terms, and relative to all the others), and we aren't even starting in earnest with the asymptoting yet (and likely won't if we don't get a Pause).
1
Cameron Holmes
Digital sentience could also dominate this equation.

A higher value future reduces the chances of extinction.  If people value life, they will figure out how to keep it.

2
Greg_Colbourn ⏸️
Does this change if our chance of extinction in the next few years is high? (Which I think it is, from AI).
1
Nick Kautz
I see AI increasing quality of life and extending lifespan.  It's primarily a problem solving tool that exceeds human ability.  Thus many human problems will be solved.  Drugs that cure diseases can be discovered.  It will propose solutions for complex socio-economic issues.  The progression of Humanity has always been driven by problem solving and finding solutions.  AI increases that ability and the rate at which it happens.  With intelligence comes reverence for life and increased awareness, altruism.
2
Greg_Colbourn ⏸️
This isn't always true - see, in humans, intelligent sociopaths and mass murderers. It's unlikely to be true with AI either, unless moral realism is true AND the AI discovers the true morality of the universe AND said morality is compatible with human flourishing. See: Orthogonality Thesis.
Dylan Richardson
*1
0
0
36% ➔ 7% disagree

I misinterpreted the prompt initially. The answer is much more ambiguous to me now, especially due to the overlap between x-risk interventions and "increasing the value of futures where we survive" ones.

I'm not even sure what the latter look like, to be honest - but I am inclined to think significant value lies in marginal actions now which affect it, even if I'm not sure what they are.

X-risks seem much more "either this is a world in which we go extinct" or a "world with no real extinction risk". It's one or the other, but many interventions hinge on the si... (read more)

KonradKozaczek
1
0
0
14% disagree

It seems even more important to avoid futures full of extreme suffering than to avoid extinction.

JohnSMill
1
0
0
43% agree

I buy into MacAskill's argument that the 20th-21st centuries appear to be an era of heightened existential risk, and that if we can survive the development of nuclear, AI, and engineered-biology technologies, there will be more time in the future to increase the value of futures where we survive.

Jonas Hallgren
1
0
0
21% agree

First and foremost, I'm low confidence here. 

I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.

What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regard to AI we think of humans going extinct, but I believe that to be shorthand for wise, compassionate decision-making (at least in the EA sphere).

Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our ... (read more)

1
Christopher Clay
I've heard this argument before, but I find it uncompelling on tractability grounds. If we don't go extinct, it's likely to be a silent victory; most humans on the planet won't even realise it happened. Individual humans working on x-risk reduction will probably only impact the morals of the people around them.
Joseph_Chu
1
0
0
7% disagree

Extinction being bad assumes that our existence in the future is a net positive. There's the possibility for existence to be net negative, in which case extinction is more like a zero point.

On the one hand, negativity bias means that, all other things being equal, suffering tends to outweigh equal happiness. On the other hand, there's a kind of progress bias whereby sentient actors in the world tend to seek happiness and avoid suffering, gradually making the world better.

Thus, if you're at all optimistic that progress is possible, you'd probably assume that ... (read more)

Patrick Hoang
1
0
0
50% ➔ 57% disagree

I think the most likely outcome is not necessarily extinction (I estimate <10% due to AI) but rather unfulfilled potential. This may be humans simply losing control over the future and becoming mere spectators, with AI not being morally significant in some way.

With long timelines and less than 10% probability: my hot take is that these are co-dependent - prioritizing only extinction is not feasible. Additionally, does only one human surviving while all others die count as non-extinction? What if only a group of humans survives? How should that group be selected? It could dangerously and quickly fall back into fascism. It would likely only benefit the group of people currently facing little to no suffering risk, which unfortunately correlates with the wealthiest group. When we are "dimension-reducing" the human race to one single point, we i... (read more)

Scott Smith 🔸
1
1
0
86% agree

We can't work on increasing value if we are dead.

[comment deleted]
4
0
0