Bio

I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.

After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.


Every post, comment, or wiki edit I have authored is hereby licensed under a Creative Commons Attribution 4.0 International License.

Sequences (1)

Future Matters

Comments (1220)

Topic contributions (4125)

Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about “premature” extinction).

On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. Then the question becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed “over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to prevent destruction?

> The extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely.

Extremely unlikely to happen... when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.

Here's another summary. I used Gemini 2.0 Flash (via the API; a sketch of the call appears after the summary below) and this prompt:

The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.

  1. Lack of leadership and accountability: He believes EA leadership is causing harm and lacks mechanisms for correcting course.
  2. Emphasis on PR and narrative control: He condemns EA organizations' risk aversion, guardedness, and attempts to control the narrative around FTX, prioritizing public image over transparency.
  3. Inadequate community health: He laments conformity pressures, fears of reprisal for dissent, and insufficient efforts to cultivate a culture of open disagreement.
  4. Entanglement with FTX: He faults EA leadership, particularly Will MacAskill, for endorsing Sam Bankman-Fried and entangling the movement with FTX despite warnings about SBF's character.
  5. Hero worship and lack of respect for intellectual leaders: He criticizes the hero worship of MacAskill, contrasting it with MacAskill's perceived lack of engagement with other intellectual leaders in the community. He sees this as part of a pattern of MacAskill prioritizing popularity and prestige over community health and epistemic integrity.
  6. Misleading communications and lack of transparency: He criticizes CEA for making inaccurate and misleading statements, for omitting crucial context in communications, and for concealing information about funding decisions.
  7. Scaling too quickly and attracting grifters: He worries that EA's rapid growth and increased funding attract deceptive actors and create perverse incentives.
  8. Overreliance on potentially compromised institutions: He expresses concerns about EA's deep ties to institutions like Oxford University, which may stifle intellectual exploration and operational capacity.
  9. Ignoring internal warnings about FTX: He reveals that he and others warned EA leadership about Sam Bankman-Fried's reputation for dishonesty, but those warnings went unheeded. He suggests he personally observed potentially illegal activities by SBF but chose not to share this information more widely.
  10. Flawed due diligence and poor judgment in grantmaking: He feels EA leadership's due diligence on SBF was inadequate and that they made poor judgments in providing him with substantial resources. He extends this criticism to grantmaking practices more generally.
  11. Unfair distribution of resources: He argues that the current distribution of funds within EA doesn't adequately compensate those doing object-level work and undervalues their contributions relative to donors. He argues for a system that recognizes the implicit tradeoff many have made in pursuing lower-paying EA-aligned careers.
  12. Centralized media policy and negative experiences with journalists: While supporting a less centralized media policy, he also cautions against interacting with journalists, as they frequently misrepresent interviewees and create negative experiences.
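
For anyone who wants to replicate this, here is a minimal sketch of the call, assuming the google-generativeai Python client; the file name and the way the comments were gathered are placeholders rather than exactly what I ran:

```python
# Minimal sketch, assuming the google-generativeai package
# (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # substitute your own key

# Placeholder: the concatenated comments, collected separately.
with open("habryka_comments.txt") as f:
    comments = f.read()

prompt = (
    "The following is a series of comments by Habryka, in which he makes "
    "a bunch of criticisms of the effective altruism (EA) movement. "
    "Please look at these comments and provide a summary of Habryka's "
    "main criticisms.\n\n" + comments
)

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content(prompt)
print(response.text)
```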

80k has made important contributions to our thinking about career choice, as seen e.g. in their work on replaceability, career capital, personal fit, and the ITN framework. This work does not assume a position on the neartermism vs. longtermism debate, so I think the author’s neartermist sympathies can’t fully explain or justify the omission.

Hello. As it happens, right now I'm editing an interview I conducted with @Jaime Sevilla two months ago. Things got delayed for a variety of reasons, but this episode should be out soon.

Answer by Pablo

Methionine restriction has been shown to increase mean and maximum lifespan in various organisms, particularly rodents. Studies show it can increase lifespan by 30-40% in rats and mice, with effect sizes similar to those of calorie restriction. The lower methionine content of plant-based diets should be seen as a plus rather than a minus, I think.

Thanks for the useful exchange.

It may be useful to consider whether you think your comment would pass a reversal test: if the roles were reversed and it were an EA criticizing another movement, with the criticism otherwise comparable (e.g. in tone and content), would you also have expressed a broadly positive opinion about it? If so, that would suggest we are disagreeing about the merits of the letter. If not, it seems the disagreement is primarily about the standards we should adopt when evaluating external criticism.
