We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also, perhaps don’t downvote low-effort submissions below zero karma; we don’t want to discourage low-effort takes on the banner.
1. ^ ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
2. ^ ‘our’ and 'we' = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. ^ Through means other than extinction risk reduction.
I don't think this is as clear a dichotomy as people think it is. A lot of global catastrophic risk doesn't come from literal extinction, because human extinction is very hard to cause. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.
To some extent I reject the question as not-super-action-guiding (I think that a lot of work people do has impacts on both things).
But taking it at face value, I think that AI x-risk is almost all about increasing the value of futures where "we" survive (even if all the humans die), and deserves most attention. Literal extinction of earth-originating intelligence is mostly a risk from future war, which I do think deserves some real attention, but isn't the main priority right now.
The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., Spreading happiness to the stars seems little harder than just spreading), but the historical/contemporary record is ambiguous.
The value of improving the future seems more robustly positive if it is tractable. I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk work satisfies this goal as well as the x-risk goal, for reasons Will MacAskill gives in What We Owe the Future. Understanding, developing direct interventions for, and designing political processes for digital minds seem like plausible candidates. Some work on how to design democratic institutions in the age of AI also seems plausibly tractable enough to compete with extinction risk work.
Most people have a strong drive to perpetuate humanity. What makes EAs special is that EAs also care about others' suffering. So EAs should focus on trying to make sure the future isn't full of suffering.
A couple of things:
Even for non-negative utilitarians, I think the marginal value of working on reducing extinction risks on its own is much less than is currently generally believed.
One crux is whether we assume that the future is likely to be high-value as it is. A core claim of those who think working on extinction risks is the most important is that we are very likely to have a high-value future. For many reasons, I am skeptical of this claim. While one may argue that we are making progress in including more beings in our moral circles, we've arguably …
I agreed, but mostly because of my unfortunately-dim view of the tractability of work increasing the value of futures where we survive.
Avoiding extinction seems bad to me if we never get our shit together morally-civilizationally, and good otherwise. Conditional on survival, I'm not sure of the likelihood of us getting our shit together to be a force of good in the universe, mostly because I'm uncertain how AGI will play out.
I don't think it's valuable to ensure future moral patients exist for their own sake, and extinction risk reduction only really seems to expectably benefit humans who would otherwise die in an extinction event, who would be in the billions. An astronomical number of future moral patients could have welfare at stake if we don't go extinct, so I'd prioritize them on the basis of their numbers.
See this comment and thread.
Footnote 2 completely changes the meaning of the statement from common sense interpretations of the statement. It makes it so that e.g. a future scenario in which AI takes over and causes existential catastrophe and the extinction of biological humans this century does not count as extinction, so long as the AI continues to exist. As such, I chose to ignore it with my "fairly strongly agree" answer.
I genuinely have no idea.
I think "increasing the value of futures where we survive" is broad enough that plenty of non-EA work, like foreign aid or governance reform generally, would count, whereas x-risk work is very specific and niche.
If we treat digital minds like current animal livestock, the expected value of the future could be really bad.
I think increasing the value of good futures is probably more important, but much less tractable.
Extinction is not so bad in comparison to futures where the world possibly has lots of intense suffering. We should focus resources on preventing suffering and ensuring wellbeing.
I think people overrate how predictable the effect of our actions on the future will be (even though they rate it very low in absolute terms); extinction seems like one of the very few (only?) things whose effects seem likely to endure throughout a big part of the future. I still buy the theory that going from 0% to 1% of possible value is as valuable as going from 98% to 99%; it's just about tractability.
People in general, and not just longtermist altruists, have reason to be concerned with extinction. It may turn out not to be a problem or not be solvable and so the marginal impact seems questionable here. In contrast, few people are thinking about how to navigate our way to a worthwhile future. There are many places where thoughtful people might influence decisions that effectively lock us into a trajectory.
I'm leaning to disagree because existential risks are a lot broader than extinction risk.
If the question replaced 'extinction' with 'existential' and 'survive' with 'thrive' (retain most value of the future), I would lean towards agree!
I'm a negative utilitarian, meaning I believe reducing suffering is the only thing that truly matters. Other things are important only in so far as they help to reduce suffering. I'm open to debate on this ethical view.
Given this premise, we should focus on increasing the quality of life of living beings, rather than simply prolonging our existence.
Two major reasons/considerations:
1. I'm unconvinced of the tractability of non-extinction-risk-reducing longtermist interventions.
2. Perhaps this is self-defeating, but I feel uncomfortable substantively shaping the future in ways that aren't merely making sure it exists. Visions of the future that I would have found unobjectionable a century ago would probably seem bad to me today. In short, this consideration is basically "moral uncertainty". I think extinction-risk reduction is, though not recommended on every moral framework, at least recommended on most. I haven't seen other ideas for shaping the future which are as widely recommended.
If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for "futures where we survive."
See my post here arguing against that tractability.
Overall I care more about preventing the worst scenarios than promoting the very best. While I am worried about scenarios worse than extinction, and most of my ambivalence comes from the possibility of these, I would count extinction as a scenario that I care about substantially more than bringing about very positive futures.
While there's less work on improving the longer-term future, I also find what work there is not that promising compared to the work on preventing extinction; and the longer we survive, the more likely I find it that we are ab…
Far from convinced that continued existence at currently likely wellbeing levels is a good thing
[36% ➔ 29% disagree] Recent advances in LLMs have led me to update toward believing that we live in a world where alignment is easy (i.e. CEV naturally emerges from LLMs, and future AI agents will be based on understanding and following natural language commands by default), but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock humanity into a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold war mongering, and an increase in military conflicts including wars of aggression, isn't conducive to robust multilateral governance).
[29% ➔ 71% disagree] The far future, on our current trajectory, seems net negative on average. Reducing extinction risk just multiplies its negative EV.
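A minimal sketch of the expected-value arithmetic implicit in this kind of argument (an illustrative decomposition under the assumption that extinction contributes zero value, not something the commenter wrote):

$$\mathbb{E}[V] = p_{\text{survive}} \cdot \mathbb{E}[V \mid \text{survive}] + (1 - p_{\text{survive}}) \cdot 0$$

If $\mathbb{E}[V \mid \text{survive}] < 0$, then raising $p_{\text{survive}}$ lowers $\mathbb{E}[V]$, so the sign of the conditional value of the future determines whether extinction risk reduction helps or harms in expectation.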
More tractable, necessary precondition
I am skeptical that we are able to reduce our chance of extinction at all, but I am confident we can reduce suffering while we are here
I think humans will go extinct at some point, so reducing extinction risk just kicks the can down the road.
On a selfish level, I don't want humans to go extinct anytime soon. But on an impartial level, I don't really care whether humans go extinct, say, 500 years from now vs 600. I don't subscribe to the Total View of population ethics, so I don't place moral value on the "possible lives that could have existed" in those extra 100 years.
I think there are a lot of thorny definitional issues here that make this set of issues not boil down nicely to a 1D spectrum. But overall, extinction prevention will likely have a far broader coalition supporting it, while making the future large and amazing is far less popular, since most people aren't very ambitious about spreading flourishing through the universe (though I tentatively am).
This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete.
It seems to me for now that existential risk reduction is likely to be negative, as both human and AI-controlled futures could contain immense orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood…
[50% ➔ 57% agree] I roughly feel more comfortable passing the responsibility on to wiser successors. I still like the "positive vs negative longtermism" framework; I think positive longtermism (increasing the value of futures where we survive) risks value lock-in too much. Negative longtermism is a clear-cut responsibility with no real downside, unless you're presented with a really tortured example about spending currently existing lives to buy future lives or something.
In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.
mostly because of tractability rather than any other reason
It's a tractability issue. In order for these interventions to be worth funding, they should reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.
Very worried about AI risk, think short timelines are plausible
[43% ➔ 7% disagree] Intuitively, I don't see the point of perpetuating humanity if it's with a life full of suffering.
After reading arguments on the other side, feel much more uncertain.
Indeed, it will be hard to fix value issues without any humans (based on the fact that we are the only species that thinks about moral issues).
[21% ➔ 7% agree] I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence drives technological innovation to solve a lot of world problems (e.g., helping create cell-based meat eventually), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures: they could just as easily contain …
Nonexistence is preferable to intense suffering, and I think there are enough S-risks associated with the array of possible futures ahead of us that we should prioritize reducing S-risks over X-risks, except when reducing X-risks is instrumental to reducing S-risks. So, to be specific, I would only agree with this to the extent that "value" == lack of suffering; I do not think we should build for the utopia that might not come to pass because we wipe ourselves out first, just that it is vastly more important to prevent dystopia.
I'm optimistic about the very best value-increasing research/interventions. But in terms of what would actually be done at the margin, most work that people would do for "value-increasing" reasons would be confused/doomed, I expect (and this is less true for AI safety).
Survival feels like a very low bar to me.
Survival could mean the permanent perpetuation of extreme suffering, human disempowerment, or any number of losses of our civilization's potential.
[71% ➔ 64% agree] Under moral uncertainty, many moral perspectives care much more about averting downsides than producing upsides.
Additionally, tractability is probably higher for extinction-level threats, since they are "absorptive": decreasing the chance we end up in one gives humanity and its descendants the ability to do whatever they figure out is best.
Finally, there is a meaningful sense in which working on improving the future is plagued by questions about moral progress and lock-in of values, and my intuition is that most interventions that take moral progress serious…
[29% ➔ 7% disagree] I roughly buy that there is more "alpha" in making the future better, because most people are not longtermist but most people do want to avoid extinction.
Averse to uncertainty (but, oh well, can that be defined?)
Yes, because it seems like extinction or near-extinction is a major possibility.
If we go extinct, it doesn't matter how much value we get. We don't exist to appreciate it.
If we don't go extinct, it probably means we have enough "value" (we survived, which means we had and have food and shelter), and probably we can have mathematical proofs of how to make AI agents safe. Then, once the main extinction event (probably an agentic AI explosion) is in the past, we can work on increasing the value of futures.
I'm anti-natalist and don't believe that human extinction is necessarily a 'bad' thing. I believe it's more important to make people happier than to make more happy people.
Reducing x-risk is much less tractable than EAs think
Naively, I am assuming that extinction risks are relatively high (1/6 was Ord's take in The Precipice?), and if extinction happens, then there are no futures and zero value.
I think there are already tons of really smart people working full time on making the futures where we survive more valuable. I think the likeliest existential and catastrophic risks are relatively neglected
I prefer work on extinction risk. Preventing near-term extinction risks seems more tractable than improving the value of the long-term future in other ways since short-term consequences are generally more predictable than long-term consequences.
The world is pretty good, and the future can be better, if we get the chance to make it so.
The big reason I lean towards disagreeing nowadays is that I've come to expect the AI control/alignment problem to be much less neglected and less important to solve, and more generally I've come to doubt the assumption that worlds in which we survive are worlds in which we achieve very large value (under my own value set), such that reducing existential risk is automatically good.
Our lightcone is an enormous endowment. We get to have a lot of computation, in a universe with simple physics. What these resources are spent on matters a lot.
If we get AI right (create a CEV-aligned ASI), we get most of the utility out of these resources automatically (almost tautologically; see CEV: to the extent that, after considering all the arguments and reflecting, we think we ought to value something, this is what CEV points to as an optimization target). If it takes us a long time to get AI right, we lose a literal galaxy of resources every year, but…
neglectedness of the latter outweighing increased tractability of the former
X-risk reduction (especially alignment) is highly neglected, and it's less clear how our actions can impact the value of the future. However, I think the impact of both is very uncertain, and I still think working on s-risk reduction and longtermist animal work is of high impact.
I'm working on trying to describe the environmental future we want to have, so I'll have written up a better answer soon.™️
We know our past and how species have gone extinct. We see the present. It's the future that is uncertain. If we keep looking forward and forget the pot of burning-hot tea about to spill over the whole table, we will get burned, and badly too, and have a whole-ass burnt-up body for the future.
alien counterfactuals
I think it's widely accepted on the forum that current society is net QALY/util negative. It's also not entirely clear that this value will become net positive, and I think many of the methods for making society net positive would also reduce existential risk (encouraging greater compassion, which also reduces the risk of wars/other dangerous competitive dynamics).
It's easy to be wrong about what futures are more valuable, but it's pretty clear that extinction closes off all future options for creating value.
It seems even more important to avoid futures full of extreme suffering than to avoid extinction.
Morally ambivalent about extinction, though reducing the chances of extinction could plausibly also have positive flow-through for s-risks.
I really don't have a strong view, but I find myself sympathetic to the idea that the world is not great and is getting worse (if you consider non-human animals).
Question seems like a false dichotomy.
For example, democracy promotion sure seems super important right now, and that would help both described causes (and it isn't clear which would be helped more).
I think human extinction (from ASI) is highly likely to happen, and soon, unless we stop it from being built[1]
See my comments in the Symposium for further discussion
And that the ASI that wipes us out won't matter morally (to address footnote 2 on the statement)
Broadly Agree
Although I might have misunderstood and missed the point of this entire debate, so correct me if that is the case
I just don't believe that changing the future trajectory is tractable in areas like politics, economics, AI welfare, etc., say 50-100 years from now. I think it's a pipe dream. We cannot predict technological, political, and economic changes even in the medium-term future. These changes may well quickly render our current efforts meaningless in 10-20 years. I think the effect of the future-focused work we do now diminishes in value…
On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.
(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)
It depends: if x-risk is small (<5%) and if we expect outsized impact on the future (preventing negative value lock-ins), then the latter seems more important. I'm very unsure about both.
It seems plausible to me that we might be approaching a "time of perils" where total x-risk is unacceptably high and will continue to be as we develop powerful AI systems, but might decrease later, since we can use AI systems to tackle x-risk (though that seems hard and risky in its own myriad ways).
Broadly, I think we should still prioritise avoiding catastrophes in this phase and bet on being able to steer later, but low confidence.
I have mixed feelings here. But one major practical worry I have about "increasing the value of futures" is that a lot of that looks fairly zero-sum to me. And I'm scared of getting other communities to think this way.
If we can capture 5% more of the universe for utilitarian aims, for example, that's 5% less from others.
I think it makes sense for a lot of this to be studied in private, but am less sure about highly public work.
The salient question for me is how much does reducing extinction risk change the long run experience of moral patients? One argument is that meaningfully reducing risk would require substantial coordination, and that coordination is likely to result in better worlds. I think it is as or more likely that reducing extinction risk can result in some worlds where most moral patients are used as means without regard to their suffering.
I think an AI aligned roughly to the output of all current human coordination would be net negative. I would shift to thinkin…
Partly this is because I think “extinction” as defined here is very unlikely (<<1%) to happen this century, which upper bounds the scale of the area. I think most “existential risk” work is not squarely targeted at avoiding literal extinction of all Earth-originating life.
I feel extremely unsure about this one. I'm voting slightly against purely from the perspective of, "wow, there are projects in that direction that feel super neglected".
I dunno, by how much? Seems contingent on lots of factors.
I think that without knowing people's assessment of extinction risk (e.g. chance of extinction over the next 5, 10, 20, 50, 100 years)[1], the answers here don't provide a lot of information value.
I think a lot of people on the disagree side would change their mind if they believed (as I do) that there is a >50% chance of extinction in the next 5 years (absent further intervention).
Would be good if there was a short survey to establish such background assumptions to people's votes.
1. ^ And their assessment of the chance that AI successors will be mora…
The long-run future seems like it could well be unacceptably awful. From the perspective of a battery hen, it would seem much better if its distant ancestors had been pushed out of an ecological niche before humanity domesticated them. Throwing all our effort into X-risk mitigation without really tackling S-risks, in a world of increasing lock-in across domains, seems deeply unwise to me.
I find it very difficult to determine whether the future will be net-negative or net-positive (when considering humans, factory-farmed animals, wild animals, and possibly artificial sentience).
This makes it very hard to know whether work on extinction reduction is likely to be positive or not.
I prefer to work on things that aim to move the sign towards "net-positive".
This is a question I could easily change my mind on.
The experience of digital minds seems to dominate far future calculations. We can get a lot of value from this, a lot of disvalue, or anything in between.
If we go extinct then we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It’s hard to say if we are on track to creating them to flourish or suffer - I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which …
I'd rather have a shitty human society than none at all. A little is better than a lot of progress.
If we avoid extinction, plenty of people will have the time to take care of humanity's future. I'll leave it to them. Both topics have a lot of common ground anyway, like "not messing up the biosphere" or "keeping control of ASI".
Intelligence is the only chance of some redemption for the massive suffering probably associated with the emergence of consciousness.
This is the age of danger: we are the first species on Earth that has figured out morality, so we shall survive at almost all costs.
What is the point of securing a future for humanity if that future is net-negative?
Working on valuable futures is only worth it if we are not extinct.
It seems to me that extinction is the ultimate form of lock-in, while surviving provides more opportunities to increase the value of the future. This moves me very far toward Agree. It seems possible, however, that there could be futures that rely on actions today and that are so much better than the alternatives that it could be worth rolling worse dice, or futures so bad that extinction could be preferable, so this brings me back a bit from very high Agree.
On the margin: I think we are not currently well-equipped to determine whether actions are or aren't i…
By reducing the chances of our extinction, we could also address other threats, such as virus control, nuclear weapons, animal preservation & welfare, and more sustainable ways of living that have less impact on our habitat. We need to take care of the home we have today.
I think avoiding existential risk is the most important thing. As long as we can do that and don't have some kind of lock in, then we'll have time to think about and optimize the value of the future.
Survival doesn't in and of itself amount to a meaningful life. Generally, there ought to be a sweet spot between the two. If taken to the extreme, all resources that do not contribute to the subsistence of life are wasted. I don't agree that we ought to live that way, and I think most people would support that conclusion.
[7% ➔ 29% disagree] This is just an intuition of mine, and not thoroughly researched, but it seems to me that if we consider all sentient beings, there are many possible futures in which the average well-being would be below neutral, and some of them, especially for non-human animals, would be quite easily preventable. This leads me to believe that marginal resources are currently better invested in preventing future suffering than in reducing the risk of extinction.
I lean toward more work on improving conditions if we survive, but noting that you have to survive to benefit.
[21% ➔ 29% disagree] Tepidly disagree: I think the technological developments, like AI, which would raise the spectre of extinction are far more contingent than we would like to believe.
Given cluelessness, it seems waaay less robust and likely to succeed in pulling off a trajectory shift
We could devote 100% of currently available resources to existential risk reduction, live in austerity, and never be finished ensuring our own survival. However, if we increase the value of futures where we survive, we will develop more and more resources that can then be put to existential risk reduction. People will be not only happier, but also more capable and skilled, when we create a world where people can thrive rather than just survive. The highest-quality futures are the most robust.
p(doom in 20 years) ~= 0.005
I think we have relatively more leverage over the probability of near-term extinction than over the value of the entire post-counterfactual-extinction future
Not sure that existence is net positive in expectation, but improving futures seems more robust!
I'm not convinced that a marginal resource, especially a funder, can move the needle on existential risk to a degree greater than or equivalent to the positive change that same resource would have on reducing suffering today.
I'm compressing two major dimensions into one here:
Tractability + something-like-epistemic-humility feel like cruxes for me, I'm surprised they haven't been discussed much; preventing extinction is good by most lights, specific interventions to improve the future are much less clearly good, and I feel much more confused about what would have lasting effects.
Essentially the Brian Kateman view: civilisation's valence seems massively negative due to farmed animal suffering. This is only getting worse despite people being able to change right now. There's a very significant chance that people will continue to prefer animal meat, even if cultured meat is competitive on price etc. "Astronomical suffering" is a real concern.
[71% ➔ 50% disagree] AI NotKillEveryoneism is the first-order approximation of x-risk work.
I think we probably will manage to make enough AI alignment progress to avoid extinction. AI capabilities advancement seems to be on a relatively good path (less foomy) and AI Safety work is starting to make real progress for avoiding the worst outcomes (although a new RL paradigm, illegible/unfaithful CoT could make this more scary).
Yet gradual disempowerment risks seem extremely hard to mitigate, very important, and pretty neglected. The AI Alignment/Safety bar for good outcomes c…
A higher value future reduces the chances of extinction. If people value life, they will figure out how to keep it.
[36% ➔ 7% disagree] I misinterpreted the prompt initially. The answer is much more ambiguous to me now, especially due to the overlap between x-risk interventions and "increasing the value of futures where we survive" ones.
I'm not even sure what the latter look like, to be honest, but I am inclined to think significant value lies in marginal actions now which affect it, even if I'm not sure what they are.
X-risks seem much more like either "this is a world in which we go extinct" or "a world with no real extinction risk". It's one or the other, but many interventions hinge on the si…
I have to push back against the premise, as the dichotomy seems a bit too forced. There are different ways to ensure survival, such as billionaires building survival shelters and hoarding resources vs. collective international efforts to solve our biggest problems. Working on better futures also usually involves creating more resilient institutions that will be better suited to preventing extinction; we don't just magically pop into a future.
We can adjust the risk per unit of reward or the reward per unit of risk.
In the absence of credible, near-term, high-likelihood existential risks and in the absence of being path-locked on an existential trajectory, I would rather adjust the reward per unit of risk.
I also suspect that the most desirable paths to improving the value of futures where we survive will come with a host of advancements that allow us to more effectively combat risks anyway. Yes, I'm sure there are some really dumb ways to improve the value of futures, such that we're …
I buy into MacAskill's argument that the 20th-21st centuries appear to be an era of heightened existential risk, and that if we can survive the development of nuclear, AI, and engineered-biology technologies, there will be more time in the future to increase the value of the futures where we survive.
First and foremost, I'm low confidence here.
I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.
What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regard to AI, we think of humans going extinct, but I believe that to be shorthand for wise, compassionate decision making (at least in the EA sphere).
Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our …
Extinction being bad assumes that our existence in the future is a net positive. There's the possibility for existence to be net negative, in which case extinction is more like a zero point.
On the one hand, negativity bias means that all other things being equal, suffering tends to outweigh equal happiness. On the other hand, there's a kind of progress bias where sentient actors in the world tend to seek happiness and avoid suffering and gradually make the world better.
Thus, if you're at all optimistic that progress is possible, you'd probably assume that …
I haven't read enough about this yet, and I need to shrink the gap between me and others who've read a lot about this by, like, 3 OOMs or something.
[50% ➔ 57% disagree] I think the most likely outcome is not necessarily extinction (I estimate <10% due to AI) but rather unfulfilled potential. This may be humans simply losing control over the future and becoming mere spectators, and AI not being morally significant in some way.
Preserving option-value probably isn't enough, because we may fail to consider better futures if we aren't actively thinking about how to realise them.
With long timelines and less than 10% probability: my hot take is that these are co-dependent; prioritizing only extinction is not feasible. Additionally, does only one human existing while all others die count as non-extinction? What about only a group of humans surviving? How should this group be selected? It could dangerously and quickly fall back into fascism. It would likely only benefit the group of people with currently low to no suffering risks, which unfortunately correlates with the wealthiest group. When we are "dimension-reducing" the human race to one single point, we i…
We can't work on increasing value if we are dead.
We do not have adequate help with AGI x-risk, and the societal issues demand many skillsets that alignment workers typically lack. Surviving AGI and avoiding s-risk far outweigh all other concerns by any reasonable utilitarian logic.
Have some level of scepticism that the total "util count" from life on Earth will be net positive. I'm also in general wary of impact that is too long-term and speculative.
Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness because it is more morally important to mitigate suffering than to create happiness.
I trust my kids and grandkids to solve their own problems in the future.
I don't trust our generation to make sure our kids and grandkids survive.
Avoiding extinction is the urgent priority; all else can wait. (And, life is already getting better at a rapid rate for the vast majority of the world's people. We don't face any urgent or likely extinction risks other than technologies of our own making.)
I don't believe that, on the margin or otherwise, we as an evolving species have ever, since the ancient hominids, searched for value as an alternative to avoiding extinction. Can we believe that Nutcracker Man, sitting on the plains cracking away at grass seeds, was contemplating that maybe extinction would be better than eating seeds for eight hours a day? Or that the first bipedal hominids thought, you know what, this whole knee-and-back-pain thing is just not worth the bother, maybe we should just give up?
No, I believe that even on the margin we care about a better future. It is not just about reducing the chance of extinction; to endure is an essential part of human existence.