I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than for other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.
Yes, this seems right.
As a semi-tangential observation: your comment made me better appreciate a...
I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.
I would like someone to write a post expanding on X-risks to all life vs. to humans. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.
If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risks, one could see it as a gradation of risks that push things back an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phen...
Thanks for the clarification.
Yes, I agree that we should consider the long-term effects of each intervention when comparing them. I focused on the short-term effects of hastening AI progress because it is those effects that are normally cited as the relevant justification in EA/utilitarian discussions of that intervention. For instance, those are the effects that Bostrom considers in ‘Astronomical waste’. Conceivably, there is a separate argument that appeals to the beneficial long-term effects of AI capability acceleration. I haven’t considered this argument because I haven’t seen many people make it, so I assume that accelerationist types tend to believe that the short-term effects dominate.
I was trying to hint at prima facie plausible ways in which the present generation can increase the value of the long-term future by more than one part in billions, rather than “assume” that this is the case, though of course I never gave anything resembling a rigorous argument.
I do agree that the “washing out” hypothesis is a reasonable default and that one needs a positive reason for expecting our present actions to persist into the long-term. One seemingly plausible mechanism is influencing how a transformative technology unfolds: it seems that the firs...
I think it remains the case that the value of accelerating AI progress is tiny relative to other apparently available interventions, such as ensuring that AIs are sentient or improving their expected well-being conditional on their being sentient. The case for focusing on how a transformative technology unfolds, rather than on when it unfolds,[1] seems robust to a relatively wide range of technologies and assumptions. Still, this seems worth further investigation.
Indeed, it seems that when the transformation unfolds is primarily important because of
Recently, an OP grantee spent 5 hours of staff time opening this Chase Platinum Business Checking account which swept into this JPMorgan US Treasury Plus Money Market Fund.
I tried to open a Chase Platinum Business Checking account for a 501(c)(3) that I created recently, but it appears that they are not available for nonprofits. If one selects “Corporation” in the dropdown menu on the left, one is forced to select either “S-Corporation” or “C-Corporation” in the dropdown menu on the right, neither of which, I believe, is appropriate for a nonprof...
Great post. A few months ago I wrote a private comment that makes a very similar point but frames it somewhat differently; I share it below in case it is of any interest.
...Victoria Krakovna usefully defines the outer and inner alignment problems in terms of different “levels of specification”: the outer alignment problem is the problem of aligning the ideal specification (the goals of the designer) with the design specification (the goal implemented in the system), while the inner alignment problem is the problem of aligning this design specification with the re
Retaliation is bad.
People seem to be using “retaliation” in two different senses: (1) punishing someone merely in response to their having previously acted against the retaliator’s interests, and (2) defecting against someone who has previously defected in a social interaction analogous to a prisoner’s dilemma, or in a social context in which there is a reasonable expectation of reciprocity. I agree that retaliation is bad in the first sense, but Will appears to be using ‘retaliation’ in the second sense, and I do not agree that retaliation is bad in this ...
Humanity’s chances of realizing its potential are substantially lower when there are only a few thousand humans around, because the species will remain vulnerable for a considerable time before it fully recovers. The relevant question is not whether the most severe current risks will be as serious in this scenario, because (1) other risks will then be much more pressing and (2) what matters is not the risk survivors of such a catastrophe face at any given time, but the cumulative risk to which the species is exposed until it bounces back.
Two straightforward ways (more have been discussed in the relevant literature) are by making humanity more vulnerable to other threats and by pushing humanity back behind the Great Filter (about whose location we should be pretty uncertain).
The relevant comparison in this context is not with human extinction but with an existential catastrophe. A virus that killed everyone except humans in extremely remote locations might well destroy humanity’s long-term potential. It is not plausible—at least not for the reasons provided—that “GCR's may be many, many orders of magnitude more likely than” existential catastrophes, on reasonable interpretations of “many, many”.
(Separately, the catastrophe may involve a process that intelligently optimizes for human extinction, by either humans or non-human agents, so I also think that the claim as stated is false.)
Scott Alexander's writing style is also worthy of broader emulation among EAs
I very much agree. Here’s a post by Scott with nonfiction writing advice.
The Summary of posts in the sequence alone was super useful. Perhaps the RP team would like to include it, or a revised version of it, in the sequence introduction?
Cool. I'd be interested in tentatively providing this search for free on EA News via the OpenAI API, depending on monthly costs. Do you know how to implement it?
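As a sketch of what such a search could look like: the standard approach is to embed both the indexed articles and the user’s query as vectors (e.g. via OpenAI’s embeddings endpoint) and rank articles by cosine similarity. The ranking step below is independent of where the vectors come from; the model name in the comment is just the current default embedding model, and the toy vectors stand in for real embeddings.

```python
# Minimal sketch of embedding-based search for a site like EA News.
# In production the vectors would come from something like:
#   client.embeddings.create(model="text-embedding-3-small", input=texts)
# Here we use toy 2-D vectors purely to illustrate the ranking step.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return [i for i, _ in sorted(scores, key=lambda t: -t[1])]

docs = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
query = [0.9, 0.1]
print(rank_documents(query, docs))  # → [0, 1, 2]
```

Monthly cost would then be driven mainly by how often queries and new articles need embedding, since document embeddings can be computed once and cached.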
The board’s behavior was grossly unprofessional
You had no evidence to justify that claim back when you made it, and as new evidence is released, it looks increasingly likely that the claim was not only unjustified but also wrong (see e.g. this comment by Gwern).
Does anyone know where I can find something like that?
You can take a look at the ‘Further reading’ section of criticism of effective altruism, the articles so tagged, and the other tags starting with “criticism of” in the ‘Related entries’ section.
I'm not sure how to react to all of this, though.
Kudos for being uncertain, given the limited information available.
(Not something one can say about many of the other comments to this post, sadly.)
I still tend to agree the expected value of the future is astronomical (e.g. at least 10^15 lives), but then the question is how easily one can increase it.
If one grants that the time of perils will last at most only a few centuries, after which the per-century x-risk will be low enough to vindicate the hypothesis that the bulk of expected value lies in the long-term (even if one is uncertain about exactly how low it will drop), then deprioritizing longtermist interventions on tractability grounds seems hard to justify, because the concentration of total x-risk in the near-term means it's comparatively much easier to reduce.
Hi Vasco,
I can see the above applying for some definitions of time of perils and technological maturity, but then I think they may be astronomically unlikely.
What do you think about these considerations for expecting the time of perils to be very short in the grand scheme of things? It just doesn't seem like the probability of possible future scenarios decays nearly fast enough to offset their greater value in expectation.
Semi-tangential question: what's the rationale for making the reactions public but the voting (including the agree/disagree voting) anonymous?
Here’s an article by @Brian_Tomasik enumerating, and briefly discussing, what strike me as most of the relevant considerations for and against publishing in academia.
The ‘citability’ consideration also applies to Wikipedia, which requires that all claims be supported by “reliable sources” (and understands that notion quite narrowly). For example, many concepts developed by the rationalist and EA communities cannot be the subject of a Wikipedia article merely because they have not received coverage in academic publications.
I don't understand your underlying model of human psychology. Sam Bankman-Fried was super-rich and powerful, but is now the kind of person no one would touch with the proverbial ten-foot pole. If the claim is that humans tend to like super-rich and powerful people even after they become disgraced, that seems false based on informal evidence.
In any case, from what I know about Bankman-Fried and his actions, the claim that he did not knowingly steal customer money doesn't strike me as obviously false, and it is in line with my sense that much of his behavior is explained by a combination of gross incompetence and pathological delusion.
Effective Altruism News also offers this: just click ‘Run this search externally’ after typing your search query. We currently index ~350 sites.
Thanks for the feedback! Although I am no longer working on this project, I am interested in your thoughts because I am currently developing a website with Spanish translations, which will also feature a system where each tag is also a wiki article and vice versa. I do think that tags and wiki articles have somewhat different functions and integrating them in this way can sometimes create problems. But I'm not sure I agree that the right approach is to map multiple tags onto a single article. In my view, a core function of a Wiki is to provide concise defi...
But if they both use microdooms, they can compare things 1:1 in terms of their effect on the future, without having to flesh out all of the post-AGI cruxes.
I don't think this is the case for all key disagreements, because people can disagree a lot about the duration of the period of heightened existential risk, whereas microdooms are defined as a reduction in total existential risk rather than in terms of per-period risk reduction. So two people can agree that AI safety work aimed at reducing existential risk will decrease risk by a certain amount over a g...
“the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.” (Nick Bostrom (2013) ‘Existential risk prevention as global priority’, Global Policy, vol. 4, no. 1, p. 19.)
Loneliness in a house share situation depends entirely on whether your housemates are good, and there is no guarantee that your co-workers are good housemates just because they are EA.
The most plausible version of this argument is not that someone will be a good housemate just because they are EA. It is that banning or discouraging EA co-living makes it more difficult for people to find any co-living arrangement.
Talking with some physicist friends helped me debunk the many worlds thing Yud has going.
Yudkowsky may be criticized for being overconfident in the many-worlds interpretation, but to feel that you have “debunked” it after talking to some physicist friends shows excessive confidence in the opposite direction. Have you considered how your views about this question would have changed if e.g. David Wallace had been among the physicists you talked to?
Also, my sense is that “Yud” was a nickname popularized by members of the SneerClub subreddit (one of the most i...
I'm very excited to see this.
One minor correction: Toby Ord assigns a 1-in-6 chance of existential catastrophe this century, which isn't equivalent to a 1-in-6 chance of human extinction.
Indeed. And there are other forecasting failures by Mearsheimer, including one in which he himself apparently admits (prior to resolution) that such a failure would constitute a serious blow to his theory. Here’s a relevant passage from a classic textbook on nuclear strategy:[1]
...In an article that gained considerable attention, largely for its resolute refusal to share the general mood of optimism that surrounded the events of 1989, John Mearsheimer assumed that Germany would become a nuclear power. Then, as the Soviet Union collapsed, he explained why it m
In my view, the comment isn't particularly responsive to the post.
Shouldn’t we expect people who believe that a comment isn’t responsive to its parent post to downvote it rather than to disagree-vote it, if they don’t have any substantive disagreements with it?
I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn't we all agree that the EA community should allocate significantly more resources to an area, if by far the most good can be done by this allocation and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
To be clear, I was only trying to describe what I believe is going on here, without necessarily endorsing the relative neglect of this cause area. And it does seem many EA folk consider radical life extension a “speculative” way of improving global health, whether or not they are justified in this belief.
Welcome to the Forum!
I think part of the explanation relates to what you point out at the end: “radical life extension is speculative”. You note that “so is preventing risks from AI”, but preventing such risks seems to be higher impact than extending life. In general, causes vary both in how “speculative” they are and in what their expected impact is, and EA may be seen as an attempt to have the most impact for varying levels of “speculativeness”. One could argue that, while the impact of life extension as a cause is relatively high, its “speculativeness-a...
Having listened to several episodes, I can strongly recommend this podcast. One of the very best.
If "Eugenics-Adjacent" is not Torres but tipped off Torres about the article, that also seems like a good reason for downvoting the post, since it indicates that the username was chosen to cause damage to EA rather than to stimulate an honest discussion.
I can confirm it's open access. I know this because my team is translating it into Spanish and OUP told us so. Our aim is to publish the Spanish translation simultaneously with the English original (the translation will be published online, not in print form).
A recurrent disappointment with EA Forum comments related to the karma system is the frequency with which they assume that a particular explanation of the karma score of a given comment is correct, without even considering alternative explanations. One doesn't have to be very creative or contrarian to think of plausible reasons users may have disagreed-voted with the statement "CEA is widely perceived as grossly ineffective whereas AMF is perceived as highly effective" other than that "CEA staff have so much karma that they can tank any comment with a single downvote".
I'm puzzled by this reply. You acknowledge that you misremembered, but in your edited comment you continue to state that "here's a whole chapter in superintelligence on human intelligence enhancement via genetic selection".
The chapter in question is called "Paths to superintelligence", and is divided into five sections, followed by a summary. The sections are called "Artificial intelligence", "Whole brain emulation", "Biological cognition", "Brain–computer interfaces" and "Networks and organizations". It is evident that the whole chapter is not devoted to ...
Note that this is a different Peter Singer. Here is the entry for the relevant Peter Singer. According to it, Singer had 77 co-authors.
Note that this is true only of some subset of superforecasters. Samotsvety’s forecasters (many of whom are superforecasters) have shorter timelines than both domain experts and general x-risk experts: