Existential Choices Debate Week
March 17 - 24

Debate week: "On the margin, it is better to work on reducing the chance of our extinction, than increasing the value of futures where we survive".

Quick takes

Counting people is hard. Here are some readings I've come across recently on this, collected in one place for my own edification:

1. Oliver Kim's How Much Should We Trust Developing Country GDP? is full of sobering quotes. Here's one: "Hollowed out by years of state neglect, African statistical agencies are now often unable to conduct basic survey and sampling work... [e.g.] population figures [are] extrapolated from censuses that are decades-old". The GDP anecdotes are even more heartbreaking.

2. Have we vastly underestimated the total number of people on Earth? Quote: "Josias Láng-Ritter and his colleagues at Aalto University, Finland, were working to understand the extent to which dam construction projects caused people to be resettled, but while estimating populations, they kept getting vastly different numbers to official statistics. To investigate, they used data on 307 dam projects in 35 countries, including China, Brazil, Australia and Poland, all completed between 1980 and 2010, taking the number of people reported as resettled in each case as the population in that area prior to displacement. They then cross-checked these numbers against five major population datasets that break down areas into a grid of squares and estimate the number of people living in each square to arrive at totals... According to their analysis, the most accurate estimates undercounted the real number of people by 53 per cent on average, while the worst was 84 per cent out."

3. David Nash's Nigeria's Missing 50 Million People argues that (quote) "Nigeria's official population (~220-230 million) may be significantly inflated and could be closer to 170-180 million (another article claims 120 million) likely driven by political and financial incentives for states". The comments are insightful too, e.g. David's comment that Uganda and Burkina Faso have the opposite problem ("in Burkina Faso the issue was that GDP per capita numbers were calculated from industrial output divided by po
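The cross-check in (2) is easy to sketch. Below is a minimal, hypothetical illustration, not the authors' actual data or code: the figures are invented, and "undercount" is assumed here to mean the gridded estimate's shortfall as a share of the reported resettlement count, which is treated as the true pre-displacement population of the area.

```python
# Hypothetical sketch of the dam-resettlement cross-check described above.
# All numbers are invented for illustration; they are not from Láng-Ritter et al.,
# and the undercount definition is an assumption.

# (reported people resettled, gridded-dataset population estimate for the same area)
dam_projects = [
    (12_000, 6_500),
    (45_000, 20_000),
    (3_200, 1_900),
]

def undercount_pct(reported: int, estimated: int) -> float:
    """Shortfall of the gridded estimate as a share of the reported (assumed true) count."""
    return 100 * (reported - estimated) / reported

per_project = [undercount_pct(r, e) for r, e in dam_projects]
print([f"{p:.0f}%" for p in per_project])                              # ['46%', '56%', '41%']
print(f"mean undercount: {sum(per_project) / len(per_project):.0f}%")  # mean undercount: 47%
```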
I want to see more discussion on how EA can better diversify and have strategically-chosen distance from OP/GV. One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this).

A big challenge is that it seems very awkward for someone to talk and work on this issue if one is employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear strong links (board members, and employees often go between these orgs). In addition, it clearly seems like OP/GV wants a certain separation to help from their side. The close link means that problems with EA often spill over to the reputation of OP/GV.

I'd love to see some other EA donors and community members step up here. I think it's kind of damning how little EA money comes from community members or sources other than OP right now. Long-term this seems pretty unhealthy.

One proposal is to have some "mini-CEA" that's non-large-donor funded. This group's main job would be to understand and act on EA interests that organizations funded by large donors would have trouble with.

I know Oliver Habryka has said that he thinks it would be good for the EA Forum to also be pulled away from large donors. This seems good to me, though likely expensive (I believe this team is sizable). Another task here is to have more non-large-donor funding for CEA.

For large donors, one way of dealing with potential conflicts of interest would be doing funding in large blocks, like a 4-year contribution. But I realize that OP might sensibly be reluctant to do this at this point.

Also, related - I'd really hope that the EA Infrastructure Fund could help here, but I don't know if this is possible for them. I'm dramatically more excited about large long-term projects on making EA more community-driven and independent, and/or well-m
Clarifying "Extinction"

I expect this debate week to get tripped up a lot by the term "extinction". So here I'm going to distinguish:

* Human extinction — the population of Homo sapiens, or members of the human lineage (including descendant species, post-humans, and human uploads), goes to 0.
* Total extinction — the population of Earth-originating intelligent life goes to 0.

Human extinction doesn't entail total extinction. Human extinction is compatible with: (i) AI taking over and creating a civilisation for as long as it can; (ii) non-human biological life evolving higher intelligence and building a (say) Gorilla sapiens civilisation.

The debate week prompt refers to total extinction. I think this is conceptually cleanest. But it'll trip people up as it means that most work on AI safety and alignment is about "increasing the value of futures where we survive" and not about "reducing the chance of our extinction" — which is very different than how AI takeover risk has been traditionally presented. I.e. you could be strongly in favour of "increasing value of futures in which we survive" and by that mean that the most important thing is to prevent the extinction of Homo sapiens at the hands of superintelligence.

In fact, because most work on AI safety and alignment is about "increasing the value of futures where we survive", I expect there won't be that many people who properly understand the prompt and vote "yes". So I think we might want to make things more fine-grained. Here are four different activities you could do (not exhaustive):

1. Ensure there's a future for Earth-originating intelligent life at all.
2. Make human-controlled futures better.
3. Make AI-controlled futures better.
4. Make human-controlled futures more likely.

For short, I'll call these activities:

1. Future at all.
2. Better human futures.
3. Better AI futures.
4. More human futures.

I expect a lot more interesting disagreement over which of (1)-(4) is highest-priority
I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions. There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines. And there is a lot of communicative downside risk when publicly discussing trade-offs between extinction and other risks / foregone benefits, apart from appearing out of touch with >95% of people trying to do good in the world ("academic" in the bad sense of the word). I have the impression we have not learned from the communicative mistakes of 2022, in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options. Is this question really worth discussing and publicly highlighting when getting more buy-in to existential risk prevention work, broadly construed, would be extremely desirable and would naturally, in the main, both reduce extinction risk and increase the quality of futures where we survive?
Random thought: does the idea of an explosive takeoff of intelligence assume that the alignment problem is solvable? If the alignment problem isn't solvable, then an AGI, in creating ASI, would face the same dilemma as humans: the ASI wouldn't necessarily have the same goals, would disempower the AGI, instrumental convergence, all the usual stuff. I suppose one counter-argument is that the AGI rationally shouldn't create ASI, for these reasons, but, similar to humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn't create ASI will be left behind, etc.