We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.

When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed). 

However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own. 

If there are a lot of comments, consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also, perhaps don’t downvote low-effort submissions below zero karma; we don’t want to discourage low-effort takes on the banner.

  1. ‘On the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.

  2. ‘Our’ and 'we' = Earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).

  3. Through means other than extinction risk reduction.

Comments (86)
Some comments are truncated due to high volume.
Linch (29% agree)

Mostly because of tractability, more than any other reason.

Peter Wildeford (43% disagree)

I don't think this is as clear a dichotomy as people think it is. A lot of global catastrophic risk doesn't come from literal extinction, because human extinction is very hard. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.

I think that without knowing people's assessment of extinction risk (e.g. chance of extinction over the next 5, 10, 20, 50, 100 years)[1], the answers here don't provide a lot of information value. 

I think a lot of people on the disagree side would change their mind if they believed (as I do) that there is a >50% chance of extinction in the next 5 years (absent further intervention).

It would be good if there were a short survey to establish the background assumptions behind people's votes.

  1. And their assessment of the chance that AI successors will be mora

... (read more)
zdgroff (36% disagree)

The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., Spreading happiness to the stars seems little harder than just spreading), but the historical/contemporary record is ambiguous.

The value of improving the future seems more robustly positive if it is tractable. I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk satisfies this goal as well as the x-risk goal for reasons Will MacAskill gives... (read more)

Survival feels like a very low bar to me. 

Survival could mean the permanent perpetuation of extreme suffering, human disempowerment, or any number of losses of our civilization's potential.

Jim Chapman (14% disagree)

I lean toward more work on improving conditions if we survive, but noting that you have to survive to benefit.

Bella (29% agree)

I agreed, but mostly because of my unfortunately-dim view of the tractability of work increasing the value of futures where we survive.

Bella
By the way, it looks like there might be some problem with the Forum UI here, as this post has some text suggesting that, since writing this comment, I changed my mind from "29% agree" to "14% agree." But I haven't intentionally changed my vote on the top banner, or changed my mind.
Toby Tremlett🔹
That's odd - to me I'm still just seeing your first vote. I'll send this to Will to check out. 
Will Howard🔹
Thanks for flagging this @Bella! There was a bug which made it update everyone's comment whenever anyone changed their vote 🤦‍♂️. This is fixed now
NickLaing
Me too, Bella; see my comment above.
WilliamKiely (57% agree)

Footnote 2 completely changes the meaning of the statement relative to common-sense interpretations. It makes it so that, e.g., a future scenario in which AI takes over, causes an existential catastrophe, and drives biological humans extinct this century does not count as extinction, so long as the AI continues to exist. As such, I chose to ignore it with my "fairly strongly agree" answer.

Toby Tremlett🔹
Thanks - yep, I think this is becoming a bit of an issue (it came up a couple of times in the symposium as well). I might edit the footnote to clarify - worlds with morally valuable digital minds should be included as a non-extinction scenario, but worlds where an AI which could be called "intelligent life" but isn't conscious/morally valuable takes over and humans become extinct should count as an extinction scenario.
Owen Cotton-Barratt
Ughh ... baking judgements about what's morally valuable into the question somehow doesn't seem ideal. I think it's an OK way to go for moral ~realists, but among anti-realists you might have people persistently disagreeing about what counts as extinction. Also: what if you have a world which is like the one you describe as an extinction scenario, but there's a small amount of moral value in some subcomponent of that AI system? Does that mean it no longer counts as an extinction scenario? I'd kind of propose instead using the typology Will proposed here, and making the debate between (1) + (4) on the one hand vs (2) + (3) on the other.

To some extent I reject the question as not-super-action-guiding (I think that a lot of work people do has impacts on both things).

But taking it at face value, I think that AI x-risk is almost all about increasing the value of futures where "we" survive (even if all the humans die), and deserves most attention. Literal extinction of earth-originating intelligence is mostly a risk from future war, which I do think deserves some real attention, but isn't the main priority right now.

Robi Rahman
Hope I'm not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
Toby Tremlett🔹
I think Owen is voting correctly, Robi - he disagrees that there should be more work on extinction reduction before there is more work on improving the value of the future. (To complicate this, he understands working on AI x-risk as mostly being about increasing the value of the future, because, in his view, it isn't likely to lead to extinction.) Apologies if the "agree"/"disagree" labelling is unclear - we're thinking of ways to make it more parsable.
Robi Rahman
Ah yes I get it now. Thanks!
Toby Tremlett🔹
No worries!
Owen Cotton-Barratt
This is right. But to add even more complication:
* I think most AI x-risk (in expectation) doesn't lead to human extinction, but a noticeable fraction does.
* But a lot even of the fraction that leads to human extinction seems to me like it probably doesn't count as "extinction" by the standards of this question, since it still has the earth-originating intelligence which can go out and do stuff in the universe.
* However, I sort of expect people to naturally count this as "extinction"?

Since it wasn't cruxy for my rough overall position, I didn't resolve this last question before voting, although maybe it would get me to tweak my position a little.
OscarD🔸 (29% disagree)

I think there are a lot of thorny definitional issues here that make this set of issues not boil down that nicely to a 1D spectrum. But overall, extinction prevention will likely have a far broader coalition supporting it, while making the future large and amazing is far less popular, since most people aren't very ambitious with respect to spreading flourishing through the universe; I tentatively am.

Given cluelessness, pulling off a trajectory shift seems way less robust and less likely to succeed.

Parker_Whitfill (29% disagree)

I roughly buy that there is more "alpha" in making the future better, because most people are not longtermist, but most people do want to avoid extinction.

Nithin Ravi🔸 (50% disagree)

Not sure that existence is net positive in expectation, but improving futures seems more robust!

Tom Billington (43% disagree)

I have some level of scepticism that the total "util count" from life on Earth will be net positive. I'm also generally wary of impact that is too long-term and speculative.

Joseph_Chu (7% disagree)

Extinction being bad assumes that our existence in the future is a net positive. There's the possibility for existence to be net negative, in which case extinction is more like a zero point.

On the one hand, negativity bias means that all other things being equal, suffering tends to outweigh equal happiness. On the other hand, there's a kind of progress bias where sentient actors in the world tend to seek happiness and avoid suffering and gradually make the world better.

Thus, if you're at all optimistic that progress is possible, you'd probably assume that ... (read more)

I don't think it's valuable to ensure future moral patients exist for their own sake, and extinction risk reduction only really seems to expectably benefit humans who would otherwise die in an extinction event, who would be in the billions. An astronomical number of future moral patients could have welfare at stake if we don't go extinct, so I'd prioritize them on the basis of their numbers.

See this comment and thread.

Charlie Harrison (64% disagree)

If we treat digital minds like current animal livestock, the expected value of the future could be really bad. 

Derek Shiller (93% disagree)

People in general, and not just longtermist altruists, have reason to be concerned with extinction. It may turn out not to be a problem or not be solvable and so the marginal impact seems questionable here. In contrast, few people are thinking about how to navigate our way to a worthwhile future. There are many places where thoughtful people might influence decisions that effectively lock us into a trajectory.

OllieBase
This might be true on the kinds of scales EAs are thinking about (potentially enormous value, long time horizons), but is it not the case that many people want to steer humanity in a better direction? E.g. the Left, environmentalists, libertarians, ... ~all political movements? I worry EAs think of this as some unique and obscure thing to think about, when it isn't. (On the other hand, people neglect small probabilities of disastrous outcomes.)
Derek Shiller
Lots of people think about how to improve the future in very traditional ways. Assuming the world keeps operating under the laws it has been for the past 50 years, how do we steer it in a better direction? I suppose I was thinking of this in terms of taking radical changes from technology development seriously, but not in the sense of long timelines or weird sources of value. Far fewer people are thinking about how to navigate a time when AGI becomes commonplace than are thinking about how to get to that place, even though there might not be a huge window of time between them.

Two major reasons/considerations:
1. I'm unconvinced of the tractability of non-extinction-risk-reducing longtermist interventions.
2. Perhaps this is self-defeating, but I feel uncomfortable substantively shaping the future in ways that aren't merely making sure it exists. Visions of the future that I would have found unobjectionable a century ago would probably seem bad to me today. In short, this consideration is basically "moral uncertainty". I think extinction-risk reduction is, though not recommended on every moral framework, at least recommended on most. I haven't seen other ideas for shaping the future which are as widely recommended.

Most people have a strong drive to perpetuate humanity. What makes EAs special is that EAs also care about others' suffering. So EAs should focus on trying to make sure the future isn't full of suffering.

Mjreard (21% agree)

I think people overrate how predictable the effects of our actions on the future will be (even though they rate it very low in absolute terms); extinction seems like one of the very few (only?) things whose effects will endure throughout a big part of the future. I still buy the theory that 0-1% of possible value is equally valuable to 98-99%; it's just about tractability.

niplav (71% ➔ 64% agree)

Under moral uncertainty, many moral perspectives care much more about averting downsides than producing upsides.

Additionally, tractability is probably higher for extinction-level threats, since they are "absorptive"; decreasing the chance we end up in one gives humanity and their descendants ability to do whatever they figure out is best.

Finally, there is a meaningful sense in which working on improving the future is plagued by questions about moral progress and lock-in of values, and my intuition is that most interventions that take moral progress serious... (read more)

I genuinely have no idea. 

JoA🔸 (43% disagree)

This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete. 

It seems to me for now that existential risk reduction is likely to be negative, as both human and AI-controlled futures could contain immense orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood... (read more)

Buck (50% agree)

I think increasing the value of good futures is probably of higher importance, but much less tractable.

jboscw (93% agree)

Very worried about AI risk; I think short timelines are plausible.

Aidan Alexander (79% disagree)

I'm far from convinced that continued existence at currently likely wellbeing levels is a good thing.

Tejas Subramaniam (21% ➔ 7% agree)

I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence drives technological innovation to solve a lot of world problems (e.g., helping create cell-based meat eventually), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures—they could just as easily contain ... (read more)

I think human extinction (from ASI) is highly likely to happen, and soon, unless we stop it from being built.[1]

See my comments in the Symposium for further discussion.

  1. And that the ASI that wipes us out won't matter morally (to address footnote 2 on the statement).

I really don't have a strong view, but I find myself sympathetic to the idea that the world is not great and is getting worse (if you consider non-human animals).

Broadly agree.

Although I might have misunderstood and missed the point of this entire debate, so correct me if that is the case.

I just don't believe changing the future trajectory is tractable in areas like politics, economics, AI welfare etc., say 50-100 years from now. I think it's a pipe dream. We cannot predict technological, political, and economic changes even in the medium-term future. These changes may well quickly render our current efforts meaningless in 10-20 years. I think the effect of future-focused work we do now diminishes in value... (read more)

Robi Rahman (93% agree)

On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.

(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)

Manuel Allgaier (43% disagree)

It depends: if x-risk is small (<5%) and we expect outsized impact on the future (preventing negative value lock-ins), then the latter seems more important. I'm very unsure about both.

OllieBase (36% agree)

It seems plausible to me that we might be approaching a "time of perils" where total x-risk is unacceptably high and will continue to be as we develop powerful AI systems, but it might decrease later, since we can use AI systems to tackle x-risk (though that seems hard and risky in its own myriad ways).

I broadly think we should still prioritise avoiding catastrophes in this phase, and bet on being able to steer later, but this is low confidence.

Ozzie Gooen (29% agree)

I have mixed feelings here. But one major practical worry I have about "increasing the value of futures" is that a lot of that looks fairly zero-sum to me. And I'm scared of getting other communities to think this way. 

If we can capture 5% more of the universe for utilitarian aims, for example, that's 5% less from others. 

I think it makes sense for a lot of this to be studied in private, but am less sure about highly public work.

Matthew_Barnett (71% disagree)

In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.

Pablo
Extremely unlikely to happen... when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.
Matthew_Barnett
In my comment I later specified "in [the] next century" though it's quite understandable if you missed that. I agree that eventual extinction of Earth-originating intelligent life (including AIs) is likely; however, I don't currently see a plausible mechanism for this to occur over time horizons that are brief by cosmological standards. (I just edited the original comment to make this slightly clearer.)
Pablo
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about "premature" extinction).

On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. Then the question becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review).

Do you deny that this technology will be developed "over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to prevent destruction?
Matthew_Barnett
I tentatively agree with your statement. That said, I still suspect the absolute probability of total extinction of intelligent life during the 21st century is very low. To be more precise, I'd put this probability at around 1% (to be clear: I recognize other people may not agree that this credence should count as "extremely low" or "very low" in this context). To justify this statement, I would highlight several key factors:

1. Throughout hundreds of millions of years, complex life has demonstrated remarkable resilience. Since the first vertebrates colonized land during the late Devonian period (approximately 375–360 million years ago), no extinction event has ever eradicated all species capable of complex cognition. Even after the most catastrophic mass extinctions, such as the end-Permian extinction and the K-Pg extinction, vertebrates rebounded. Not only did they recover, but they also surpassed their previous levels of ecological dominance and cognitive complexity, as seen in the increasing brain size and adaptability of various species over time.

2. Unlike non-intelligent organisms, intelligent life, starting with humans, possesses advanced planning abilities and an exceptional capacity to adapt to changing environments. Humans have successfully settled in nearly every climate and terrestrial habitat on Earth, from tropical jungles to arid deserts and even Antarctica. This extreme adaptability suggests that intelligent life is less vulnerable to complete extinction compared to other complex life forms.

3. As human civilization has advanced, our species has become increasingly robust against most types of extinction events rather than more fragile. Technological progress has expanded our ability to mitigate threats, whether they come from natural disasters or disease. Our massive global population further reduces the likelihood that any single event could exterminate every last human, while our growing capacity to detect and neutralize threats makes us
finm (64% disagree)

Partly this is because I think “extinction” as defined here is very unlikely (<<1%) to happen this century, which upper bounds the scale of the area. I think most “existential risk” work is not squarely targeted at avoiding literal extinction of all Earth-originating life.

Angelina Li (14% disagree)

I feel extremely unsure about this one. I'm voting slightly against purely from the perspective of, "wow, there are projects in that direction that feel super neglected".

I'm optimistic about the very best value-increasing research/interventions. But in terms of what would actually be done at the margin, most work that people would do for "value-increasing" reasons would be confused/doomed, I expect (and this is less true for AI safety).

Nathan Young (21% agree)

I dunno, by how much? Seems contingent on lots of factors. 

Matrice Jacobine (36% ➔ 29% disagree)

Recent advances in LLMs have led me to update toward believing that we live in the world where alignment is easy (i.e. CEV naturally emerge from LLMs, and future AI agents will be based on understanding and following natural language commands by default), but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock in humanity in a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold war mongering, and an increase in military conflicts including wars of aggression, isn't conducive to robust multilateral governance).

John Salter (29% disagree)

The far future, on our current trajectory, seems net negative on average. Reducing extinction risk just multiplies its negative EV. 

Tom Gardiner (57% disagree)

The long-running future seems like it could well be unacceptably awful. From the perspective of a battery hen, it would seem much better that its distant ancestors were pushed out of an ecological niche before humanity domesticated them. Throwing all our effort into x-risk mitigation without really tackling s-risks, in a world of increasing lock-in across domains, seems deeply unwise to me.

I think avoiding existential risk is the most important thing. As long as we can do that and don't have some kind of lock-in, then we'll have time to think about and optimize the value of the future.

Cameron Holmes (71% ➔ 64% disagree)

AI NotKillEveryoneism is the first-order approximation of x-risk work.
 

I think we probably will manage to make enough AI alignment progress to avoid extinction. AI capabilities advancement seems to be on a relatively good path (less foomy), and AI Safety work is starting to make real progress on avoiding the worst outcomes (although a new RL paradigm, or illegible/unfaithful CoT, could make this more scary).

Yet gradual disempowerment risks seem extremely hard to mitigate, very important and pretty neglected. The AI Alignment/Safety bar for good outcomes c... (read more)

Greg_Colbourn ⏸️
What makes you think this? Every technique we have is statistical in nature (due to the nature of the deep learning paradigm), and none are even approaching 3 9s of safety, and we need something like 13 9s if we are going to survive more than a few years of ASI. I also don't see how it's less foomy. SWE-bench and ML researcher automation are still improving - what happens when the models are drop-in replacements for top researchers? What is the eventual end result after total disempowerment? Extinction, right?
Cameron Holmes
Digital sentience could also dominate this equation.
Dylan Richardson (36% ➔ 29% disagree)

While I don't entirely disregard x-risks, I have been unimpressed by the tractability of most interventions, except perhaps for biosecurity ones.

The prevalent notion of "solving" the alignment problem as though it's a particularly hard math problem strikes me as overrepresented, which entails neglecting other, more indirect safety measures, like stable, transparent and trustworthy institutions, whether political or geopolitical (a US-China war means what for AI?).

Relatedly, the harm aversion/moral purity signaling around working in AI companies (espe... (read more)

JohnSMill (43% agree)

I buy into MacAskill's argument that the 20th-21st centuries appear to be an era of heightened existential risk, and that if we can survive the development of nuclear, AI, and engineered-biology technologies, there will be more time in the future to increase the value of futures where we survive.

JackM (36% disagree)

This is a question I could easily change my mind on.

The experience of digital minds seems to dominate far future calculations. We can get a lot of value from this, a lot of disvalue, or anything in between.

If we go extinct then we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It’s hard to say if we are on track to creating them to flourish or suffer - I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which ... (read more)

lilly (79% disagree)

It's a tractability issue. In order for these interventions to be worth funding, they should reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.

I find it very difficult to determine whether the future will be net-negative or net-positive (when considering humans, factory-farmed animals, wild animals, and possibly artificial sentience). 
This makes it very hard to know whether work on extinction reduction is likely to be positive or not.
I prefer to work on things that aim to move the sign towards "net-positive".

Intelligence is the only chance of some redemption for the massive suffering probably associated with the emergence of consciousness.

This is the age of danger; we are the first species on Earth that has figured out morality, so we shall survive at almost all costs.

What is the point of securing a future for humanity if that future is net-negative?

Alistair Stewart (93% disagree)

Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness because it is more morally important to mitigate suffering than to create happiness.

Greg_Colbourn ⏸️
Wait, how does your 93% disagree tie in with your support for PauseAI?

We could devote 100% of currently available resources to existential risk reduction, live in austerity, and never be finished ensuring our own survival. However, if we increase the value of futures where we survive, we will develop more and more resources that can then be put toward existential risk reduction. People will be not only happier, but also more capable and skilled, when we create a world where people can thrive rather than just survive. The highest-quality futures are the most robust.

tylermjohn (36% disagree)

I'm compressing two major dimensions into one here:

  • Expected value (I think EV of maxevas is vastly higher than EV of maxipok)
  • Robustness (the case for maxevas is highly empirically theory-laden and values-dependent)
Robi Rahman
What is maxevas? Couldn't find anything relevant by googling.

A higher-value future reduces the chances of extinction. If people value life, they will figure out how to keep it.

Greg_Colbourn ⏸️
Does this change if our chance of extinction in the next few years is high? (Which I think it is, from AI).
KonradKozaczek (14% disagree)

It seems even more important to avoid futures full of extreme suffering than to avoid extinction.

First and foremost, I'm low confidence here. 

I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.

What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regard to AI, we think of humans going extinct, but I believe that to be shorthand for wise, compassionate decision-making (at least in the EA sphere).

Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our ... (read more)

Christopher Clay
I've heard this argument before, but I find it uncompelling on tractability grounds. If we don't go extinct, it's likely to be a silent victory; most humans on the planet won't even realise it happened. Individual humans working on x-risk reduction will probably only impact the morals of people around them.
Patrick Hoang (50% ➔ 57% disagree)

I think the most likely outcome is not necessarily extinction (I estimate <10% due to AI) but rather an unfulfilled potential. This may be humans simply losing control over the future and becoming mere spectators and AI not being morally significant in some way.

With long timelines and less than 10% probability: my hot take is that these are co-dependent - prioritizing only extinction is not feasible. Additionally, does a scenario where only one human exists while all others die count as non-extinction? What if only a group of humans survives? How should this group be selected? It could dangerously/quickly fall back to fascism. It would likely only benefit the group of people with currently low to no suffering risks, which unfortunately correlates with the wealthiest group. When we are "dimension-reducing" the human race to one single point, we i... (read more)

We can't work on increasing value if we are dead.

If we avoid extinction, plenty of people will have the time to take care of humanity's future. I'll leave it to them. Both topics have a lot of common ground anyway, like "not messing up the biosphere" or "keeping control of ASI".

[comment deleted]