
TL;DR: silly criticisms of long-termism. Can you convince me to keep donating to EA funds?

See the positive arguments and introduction below. The following are meant as naive criticisms/questions:

  1.  Utilitarianism criticisms. Even with the newer formulation of long-termism,
    1. Why should I care about people who won't exist? Say we go extinct, then what? (As in the movie "Her", this could be the smart choice.) I'm more on the pro-abortion side of the discussion here. Why should X-risks be costly because of their opportunity cost and not because of the immediate suffering they involve?
    2. Do we care only about humans? Isn't there some silly argument out there like "we should cultivate insect X because there will be a lot of them, and a lot of lives means a lot of happiness"?
    3. It's definitely not clear what is good in the long run (nor what's good now). True or false?
  2. Privilege/anti-capitalism criticisms,
    1. This is more of a feeling. Does long-termism support capitalism? Doesn't the immediate solution of the world's most pressing problems require systemic change? Is long-termism just a discourse that can adapt to the current state of affairs instead of demanding urgent change?
    2. Along the same lines, how are the preferences of non-privileged (e.g. poor) people taken into account when defining what is "effective"?
    3. Does this view incentivize the acceptance of suffering in the present moment and in future present moments?
    4. Is this view trying to resolve the existential crisis that arises when imagining a future with a sustainable but non-growing (e.g. in population) society?
  3. Other
    1. An expected discounted return seems more appropriate, because the probability that our preferences change grows with time. True or false?
    2. How could something that feels this bad (ignoring current suffering to prioritize possible future happiness) be right?
    3. Is this a justification for natural selfishness (e.g. not giving up everything we "can" give up)?

 

Introduction:

Hi everyone! I have read some of the posts on long-termism. Maybe one hour of reading, which is obviously not much given the time devoted and depth achieved by people around here. However, I still feel the idea is terrible, which is why I compiled the naive list above, hoping to have each point refuted. I definitely don't like seeing the funds give money to already-privileged people (arguably more privileged than me), but I trust there is a good reason behind it. Please add an "IMO" before each sentence.

The most compelling arguments in favor of long-termist investing are: 1) it makes sense for good investments to be counter-intuitive (because the best investments are the most neglected, and they wouldn't be neglected if they were intuitive); and 2) long-termism is more about the coincidence between the correctness of globally good actions and the correctness of those actions for the long-term future.

Any TL;DR-style answer will be much appreciated.

Hi - my intuitions fall in the other direction here, so I'm keen to explain why.  Implicit IMOs in front of everything here.

1:
    1.1:  I have a younger brother. My parents could have stopped at one, and my family would broadly still be happy, but my brother is generally happy and leads a good life. Similarly, if they'd had a third child they probably would have been happy and great too, and I would have loved them. All else being equal I wish that youngest sibling could have existed. IMO these two sentiments aren't meaningfully distinct. 

    1.2:  We don't only care about humans.  Sure, the argument for making more humans would apply to insects or something as well. However, most of the things that would kill all the humans would also kill everything else, so for me not letting that happen is still much more of a priority.

    1.3: True on the specifics, false more generally. I don't know exactly what the world should look like, but I'm pretty sure people being happy is good, more people being happy is better, and everything being unrecoverably dead is neutral at most. 

 

2:

    2.1: If we weren't potentially about to all die I'd be more willing to think about this, but we have to survive the next century or two first. Whether capitalism makes things better or worse for now depends much more on whether it makes us more or less likely to all die, than on anything else (again, for now). 

    2.2: I'm pretty sure non-privileged people also want to be alive and happy. 

    2.3: Possibly, and I'm ok with that. I'd rather live a worse life if it means my grandkids are more likely to survive and have happy ones. Although it's definitely better for everyone to be happier now, I feel like it doesn't amount to much if we all die in the next century. 

    2.4: If I can choose between a surviving but stable society, and a growing one, I would choose the growing one. But both are better than an empty rock, so the priority now is not dying either way.

 

3: 

    3.1: I'm pretty sure we'll continue to want to be alive and happy, so false. People can't decide what their preferences are, and work to fulfil them, if they don't exist.

    3.2: Our moral intuitions were built for societies very different from today's. We like sugar and sex because we were supposed to go for fruit and reproduction; our moral intuitions aren't hugely different. IMO this is in a similar category to people caring more about saving one child than eight of them.

    3.3: No. 

I should clarify 3.3. For me, longtermism is partly the acknowledgement of much vaster moral stakes - so long as there are things we can do to help, they're no less important to do than short-termist interventions. (The usual arguments about it not being helpful to demand too much of people still apply, though.)

1.

1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing or decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risks.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.

1.3.: You might want to have a look at cluelessness (the EA Forum and GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're decisively in a bad enough position that we can - with a good conscience - just throw our hands up and conclude there's definitely nothing to be done here.

 

2.

2.1 + 2.2.: I don't really want to write anything on this right now.

2.3.: Definite no. It just argues that trade-offs must be made, and some bads are worse even than current suffering. Or rather: the amount of bad we can avert is greater even than if we focus on current suffering.

2.4: I don't understand what you're getting at.

 

3.

3.1.: I can't parse the question.

3.2.: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated. You can find it by searching her name on the EA Forum.

3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can believe in longtermism + think that you should give away your last penny and work every waking hour in a job you don't like. You can not be a longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Some leanings on this question might correlate with whether you're a longtermist or not, but in principle, this question is orthogonal.

 

Sorry if the tone is brash. If so, that's unintentional - I tend to be really slow otherwise - and I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed.)
