I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.
After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.
Every post, comment, or wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
Intuitively, it seems we should respond differently depending on which of these three possibilities is true:
From an act consequentialist perspective, these differences do not matter intrinsically, but they are still instrumentally relevant.[1]
I don't mean to suggest that any one of these possibilities is particularly likely, or that they are all plausible. I haven't followed this incident closely. FWIW, my vague sense is that the Mechanize founders had all expressed skepticism about the standard AI safety arguments for a while, in a way that seems hard to reconcile with (1) or (2).
By the way, the name is ‘Jaime’, not ‘Jamie’. The latter doesn't exist in Spanish and the two are pronounced completely differently (they share one phoneme out of five, when aligned phoneme by phoneme).
(I thought I should mention it since the two names often look indistinguishable in written form to people who are not aware that they differ.)
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about "premature" extinction).
On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. The question then becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed “over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to protect against destruction?
The extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely.
Extremely unlikely to happen... when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.
Here's another summary. I used Gemini 2.0 Flash (via the API), and this prompt:
The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.
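For anyone who wants to reproduce this, here is a minimal sketch of the API call involved, assuming the `google-generativeai` Python package and an API key in a `GEMINI_API_KEY` environment variable; the file of collected comments is just an illustrative placeholder, not part of my actual workflow:

```python
# Minimal sketch: summarize a collection of comments with Gemini 2.0 Flash.
# Assumes `pip install google-generativeai` and a GEMINI_API_KEY env var;
# "habryka_comments.txt" is a hypothetical file holding the pasted comments.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")

prompt = (
    "The following is a series of comments by Habryka, in which he makes a bunch of "
    "criticisms of the effective altruism (EA) movement. Please look at these comments "
    "and provide a summary of Habryka's main criticisms.\n\n"
)

with open("habryka_comments.txt") as f:
    prompt += f.read()

response = model.generate_content(prompt)
print(response.text)
```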
80k has made important contributions to our thinking about career choice, as seen e.g. in their work on replaceability, career capital, personal fit, and the ITN framework. This work does not assume a position on the neartermism vs. longtermism debate, so I think the author’s neartermist sympathies can’t fully explain or justify the omission.
Scott Alexander introduces the ‘noncentral fallacy’ as follows: “X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member.”
This post seems like an archetypal instance of the noncentral fallacy.