Quick takes


I've seen AI-based animal communication technologies starting to be involved in some EA events / discussions (e.g. https://www.earthspecies.org/ ). I'm worried these initiatives may be actively negative, and I'm wondering if anyone has / will articulate a stronger defense of why they're good?

The high-level argument I've heard is that communicating with animals will make humans be more empathetic towards them. But I don't see why this would be the most likely outcome:

  1. Humans are already fairly empathetic to animals, especially around things that we'd conside
... (read more)

Someone should write a good, linkable online resource describing the concept of the long reflection. It's very strange that there isn't a simple post/webpage that I can link to that gives a good, medium-depth description. 

Currently the best things are probably the EA Forum Topic page, and this list of quotes

There's now also the related concept of viatopia, which is maybe a better concept/term. Not sure what the very best links on that are but this one seems a good starting point.

Here's my current four-point argument for AI risk/danger from misaligned AIs. 

  • We are on the path of creating intelligences capable of being better than humans at almost all economically and militarily relevant tasks.
  • There are strong selection pressures and trends to make these intelligences into goal-seeking minds acting in the real world, rather than disembodied high-IQ pattern-matchers.
  • Unlike traditional software, we have little ability to know or control what these goal-seeking minds will do, only directional input.
  • Minds much better than humans at
... (read more)

I think that your list is really great! As a person who tries to understand misaligned AI better, here are my arguments:

  • The difference between a human and an AGI might be greater than the difference between a human and a mushroom.
  • If the difference is that great, the AGI will probably not see much difference between a cow and a human. The way humans treat other animals, the planet, and each other makes it hard to see how we could possibly create AI alignment that is willing to save creatures like us.
  • If an AGI has self-preservation, we are the only creatures that can
... (read more)
Charlie_Guthmann
Two thoughts here, just thinking about persuasiveness. I'm not quite sure what you mean by normal people, or whether you still want your arguments to be actual arguments rather than pure persuasion-maxing.

  • Show, don't tell, for 1-3.
    • For anyone who hasn't intimately used frontier models but is willing to with an open mind, I'd guess you should just push them to use and actually engage mentally with the models and their thought traces; even better if you can convince them to use something agentic like CC.
  • Ask and/or tell stories for 4.
    • What can history tell us about what happens when a significantly more tech-savvy/powerful nation finds another one? There's no "right" answer here, though the general arc of history is that significantly more powerful nations capture/kill/etc.
    • What would it be like to be a native during the various European conquests in the New World (especially ignoring the effects of smallpox/disease, to the extent you can)? An Incan perspective? Mayan?
    • I especially like Orellana's first expedition down the Amazon. As far as I can tell, Orellana was not especially bloodthirsty and had some interest in/respect for the natives, though he is certainly misaligned with them.
    • Even if Orellana is "less bloodthirsty," you still don't want to be a native on that river. You hear fragmented rumors (trade, disease, violence) with no shared narrative; you don't know what these outsiders want or what their weapons do; you don't know whether letting them land changes the local equilibrium by enabling alliances with your enemies; and you don't know whether the boat carries Orellana or someone worse.
    • Do you trade? Attack? Flee? Coordinate? Any move could be fatal, and the entire situation destabilizes before anyone has to decide "we should exterminate them."
    • For all of these situations you can actually see what happened (approximately), and usually it doesn't end well.
  • Why is AI different? (Not rhetorical, and it gives them space to think.)
Linch
Hmm right now this seems wrong to me, and also not worth going into in an introductory post. Do you have a sense that your view is commonplace? (eg from talking to many people not involved in AI)

I’m pro-nuclear, but the commonly used EA framing of “nuclear is overregulated” seems net negative more often than not. Clearer Thinking’s new nuclear episode is one of the more epistemically rigorous discussions I’ve heard in EA-adjacent spaces (and Founders Pledge has also done nuanced work).

Nuclear is worth pursuing, but we should argue for it clear-eyed.

Benevolent_Rain
Good question. I agree: people in EA who've actually worked on nuclear don't usually claim over-regulation is the only or even dominant driver of the cost/buildout problem. What I'm reacting to is more the "hot take" version that shows up in EA-adjacent podcasts, often as an analogy when people talk about AI policy: "look at nuclear, it got over-regulated and basically died, so don't do that to AI." In that context it's not argued carefully, it's just used as a rhetorical example, and (to me) it's a pretty lossy/misleading compression of what's going on.

So I'm not trying to call out serious nuclear work in EA; I'm mostly sharing the Clearer Thinking episode as a good "orientation reset" because it keeps pointing back to what the binding constraints plausibly are, with regulation as one (maybe not even the main) piece of a complex situation. It's also possible I'm misremembering some of the specific instances, as I haven't kept notes, but I've heard the framing enough that it started to rub me the wrong way.

And I'm genuinely curious where you land on the "regulatory reform is necessary" point: do you think the key thing is removing regulation, changing it, or adding policy/market design (e.g. electricity market reform / stable revenue mechanisms / valuing clean firm power)? I'm currently leaning toward "markets/revenue model is the real lever", but I'm not confident.

One thing I loved reading was a model of Sweden's total system cost with vs without nuclear (incl. stuff like transmission build-out). It suggested fairly similar overall cost in both worlds, but the nuclear-heavy system leaned more on established tech (less batteries, etc.; I don't remember if demand response was included). My read is that the real challenge is: even if total system costs are comparable, how do you actually allocate those costs and rewards in something resembling a market so the "good" system gets built? (Unless you go much more "total state-owned super regulated" and basica
jackva
I agree it's a bit lossy and sometimes reflexive (this is what I meant by relying on libertarian priors), but I am still confused about your argument, because the argument you criticize is a historical one ("nuclear over-regulation killed nuclear"), which is different from "now we need many steps and there are different strategies to make nuclear more competitive again". I think it is basically correct that over-regulation played a huge part in making nuclear uncompetitive, and I don't think that Isabelle or others who know the history of nuclear energy would disagree with that, even if it might be a bit overglossed/stylized (obviously, it is not the only thing).

Ah, now I see, thanks for clarifying. Yes, historically I don't know how much each setback to nuclear mattered. I can see that, e.g., constantly changing regulation during builds (which I think Isabelle actually mentioned) could pose a significant hurdle to continued build-out. Here I would defer to other experts like you and Isabelle.

Porting this over to "we might over-regulate AI too", I am realizing it is actually unclear to me whether people who use the "nuclear is over-regulated" example mean that the literal same "historical" thing could ... (read more)

U.S. Politics should be a main focus of US EAs right now. In the past year alone, every major EA cause area has been greatly hurt or bottlenecked by Trump. $40 billion in global health and international development funds was lost when USAID shut down, which some researchers project could lead to 14 million more deaths by 2030. Trump has signed an Executive Order that aims to block states from creating their own AI regulations, and has allowed our most powerful chips to be exported to China. Trump has withdrawn funding from, and U.S. support for, internatio... (read more)

Mjreard
I agree. Basically anyone not in a politically sensitive role (this category is broader than it might intuitively seem) should be looking to make large donations in this area now, and others should be reaching out to EAs focused on US politics if they feel well equipped to run or contribute to a high-leverage project. Unfortunately there is no AMF/GiveDirectly for politics, and most things you can donate to are very poorly leveraged. Likewise it is hard to both scope a leveraged project and execute well on it. I know of one general exception at the moment, which I'm happy to recommend privately. I'm also happy to speak to anyone who intends to devote considerable money or work resources to this and pass them along to the people doing the best work here, if that makes sense.
Ebenezer Dukakis
On the bright side, we might end up getting an AI pause out of this, if the Netherlands wakes up and decides that it no longer wants to help supply chips for advanced AI which could either be (a) misaligned or (b) controlled by Trump. See previous discussion, protest. I reckon this moment represents a strong opportunity for Dutch EAs concerned with AI risks. Maybe get a TV interview where you explain how ASML is supplying chips to the US, then explain AI risk, etc. In terms of red-teaming my own suggestion, I am somewhat worried about further politicizing the issue of AI / highlighting national rivalries. Seems best to push for symmetric restrictions on China--they are directly supplying materials to Russia for its war in Ukraine, after all. Eliezer Yudkowsky could be an interesting person to contact for red-teaming purposes, since he's strongly in favor of an AI pause, but also seems to resist any "international rivalry" framing of AI risk concerns?

More good news! The Norwegian meat industry has announced that it will stop using fast-growing chicken breeds by the end of 2027. These breeds are a source of immense suffering because of the toll such rapid growth takes on the animals' bodies.

Norway will be the first country to stop using them.

More here: https://animainternational.org/blog/norway-ends-fast-growing-chickens

Reminder: claim tax relief on charitable donations (UK PAYE taxpayers)

If you:

  1. Pay the higher rate of tax in the UK (earn over £50,271 or £43,663 in Scotland)
  2. Don’t fill in a Self Assessment tax return (you pay tax automatically via PAYE)
  3. Made donations that you claimed Gift Aid on in this or any of the previous 4 tax years

You can use this HMRC link to report how much you've donated (excluding Gift Aid) and claim back the difference. More details in this evergreen post.
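To make the arithmetic concrete, here is a rough sketch of how the Gift Aid gross-up works (a minimal illustration assuming 20% basic and 40% higher rates; check current HMRC guidance, and this is not tax advice):

```python
# Gift Aid arithmetic sketch (rates assumed: 20% basic, 40% higher).
BASIC_RATE = 0.20
HIGHER_RATE = 0.40

def gift_aid_relief(net_donation: float, marginal_rate: float = HIGHER_RATE) -> dict:
    """Gross up a net donation and estimate the higher-rate relief reclaimable."""
    gross = net_donation / (1 - BASIC_RATE)  # the charity reclaims basic-rate tax
    donor_reclaims = gross * (marginal_rate - BASIC_RATE)
    return {"charity_receives": gross, "donor_reclaims": donor_reclaims}

# An £80 donation grosses up to £100 for the charity; a 40% taxpayer
# can then reclaim the remaining £20 of tax paid on that gross amount.
print(gift_aid_relief(80.0))
```

So for every £80 given under Gift Aid as a higher-rate taxpayer, the charity receives £100 and you can claim £20 back, which is the difference the HMRC form lets you recover.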

Practical tip

I set my Giving What We Can pledge tracking to run from 6 April to 5 April, wh... (read more)

The European Parliament recently submitted a parliamentary question on wild animal welfare! The question focuses on human-caused wild animal suffering, and such questions generally don't have policy implications, but I was still surprised to see this topic taken up in policy discourse.

https://www.europarl.europa.eu/doceo/document/E-10-2025-004965_EN.html

Are there any signs of governments beginning to do serious planning for the need for Universal Basic Income (UBI) or negative income tax? It feels like there's a real lack of urgency/rigour in policy engagement within government circles. The concept has obviously had its high-level advocates, à la Altman, but it still feels incredibly distant as any form of reality.

Meanwhile, the impact is being seen in job markets right now: in the UK, graduate job openings have plummeted in the last 12 months. People I know are having a hard enough time finding jobs w... (read more)

huw
Hmm, I think that’s not the right framing for this. UBI is just not settled as a universally good idea in academic or political circles (sorry, no definitive citation for this), let alone that there’s an urgent unemployment crisis (the statistic I think you’re citing is for job openings, not actual employment rates) or that such a crisis, if it did exist, has structural causes which could be expected to increase (i.e. it might not be AI, nor should we necessarily expect AI to become orders of magnitude more advanced in the next 5 years; there was plausibly a very different shock to the global economic system beginning around Liberation Day, 2025).

Thanks for your take - I always appreciate slightly less doom and gloom perspectives.

On your point that there's not an imminent unemployment crisis, and that the impacts we are seeing may be due to other factors: firstly, I think it's inevitable that the direct causes of disruption to the labour market will be multifaceted given the current trajectory of global markets (de-coupling, de-globalisation, etc.), whatever happens moving forward. In the UK specifically, part of the issue is that the minimum wage has been increased, making employers less inclined to hire gra... (read more)

Technical Alignment Research Accelerator (TARA) applications close today!

Last chance to apply to join the 14-week, remotely taught, in-person-run program (based on the ARENA curriculum) designed to accelerate APAC talent toward meaningful technical AI safety research.

TARA is built for you to learn around full-time work or study by attending meetings in your home city on Saturdays and doing independent study throughout the week. Finish the program with a project to add to your portfolio, key technical AI safety skills, and connections across APAC.

See this ... (read more)

Super sceptical probably very highly intractable thought that I haven't done any research on: There seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:

  • All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the Anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
  • The Fermi Paradox: there don't seem to be any other civilizations in the observable
... (read more)
Yarrow Bouchard 🔸
All the things you mentioned aren't uniquely evidence for the simulation hypothesis; they are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up with endlessly more.) The fine-tuning argument is a common argument for the existence of a supernatural, personal God, and the appearance of fine-tuning supports this conclusion equally as well as it supports the simulation hypothesis.

Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people's faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn't really there. (I also think it's just not true that this is the most interesting point in history.)

Similarly, the book of Genesis says that God created humans in his image. Maybe he didn't create aliens with high-tech civilizations because he's only interested in beings with high technology made in his image.

It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki, who is neither fully good nor fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.

You might think the simulation hypothesis is preferable because it's a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong: the simulation hypothesis is a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators' reality, maybe the fundamental relationship between consciousne
Joseph_Chu
Strong upvoted, as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.

That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because, in my own opinion, my life doesn't seem that great, but it seems plausible at least?

Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.

Changing the simulation hypothesis from a simulation of a world full of people to a simulation of an individual throws the simulation argument out the window. Here is how Sean Carroll articulates the first three steps of the simulation argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people
... (read more)
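The counting move behind these premises can be made concrete with a toy calculation (my own illustration, not Carroll's; k is an assumed number of ancestor simulations run per unsimulated civilization, each with a comparable population):

```python
# Toy version of the simulation argument's counting step: if every real
# civilization runs k simulations of similar population, a randomly chosen
# observer is simulated with probability k / (k + 1).
def simulated_fraction(k: int) -> float:
    """Fraction of observers who are simulated, given k sims per real civilization."""
    return k / (k + 1)

for k in (1, 10, 1000):
    print(k, simulated_fraction(k))
```

Even modest values of k push the fraction close to 1, which is why step 3 concludes that simulated people would vastly outnumber unsimulated ones.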

I made this simple high-level diagram of critical longtermist "root factors", "ultimate scenarios", and "ultimate outcomes", focusing on the impact of AI during the TAI transition.



This involved some adjustments to standard longtermist language. 
"Accident Risk" -> "AI Takeover
"Misuse Risk" -> "Human-Caused Catastrophe" 
"Systemic Risk" -> This is spit up into a few modules, focusing on "Long-term Lock-in", which I assume is the main threat. 

You can read and interact with it here, where there are (AI-generated) descriptions and pages for t... (read more)

Charlie_Guthmann
Just finding out about this and the crux website. So cool. Would love to see something like this for charity ranking (if it isn't already somewhere on the site).

Don't you need a philosophy-axioms layer between outputs and outcomes? The existential catastrophe definitions seem to assume a lot of things.

Would also need to think harder about why/in what context I'm using this, but "governance" being a subcomponent, when it's arguably more important than / can control literally everything else at the top level, seems wrong.

Good points!

>Would love to see something like this for charity ranking (if it isn't already somewhere on the site). 
I could definitely see this being done in the future.

>Don't you need a philosophy axioms layer between outputs and outcomes?
I'm nervous that this could get overwhelming quickly. I like the idea of starting with things that are clearly decision-relevant to the website's particular audience, then expanding from there. I'm open to ideas on better / more scalable approaches!

>"governance" being a subcomponent when it's arguably... (read more)

According to someone I chatted to at a party (not normally the optimal way to identify top new cause areas!) fungi might be a worrying new source of pandemics because of climate change.

Apparently this is because thermal barriers have historically prevented fungi from infecting humans, but as fungi adapt to higher temperatures, they are becoming better able to overcome those barriers. This article has a bit more on this:

https://theecologist.org/2026/jan/06/age-fungi

Purportedly, this is even more scary than a pathogen you can catch from people, because you can catch th... (read more)

SiobhanBall
Hi Jenny, very interesting, thank you. What was the response of CG to your report, and do you know if they are planning to invest more resources towards this potential cause area? 

I'm not able to comment on CG's reaction to the report, as those discussions are confidential.

What I can say is that they are still exploring this area internally, given that they recently commissioned us to do more work related to fungal diseases (see here).

I’m not aware of any specific grantmaking decisions or commitments at this stage.

SiobhanBall
I was wondering if anyone was going to mention that. There was a lot of media buzz at the time of its airing about whether the events of the show could really happen. This piece by Yale is supposed to sound reassuring, but it just... doesn't. :/
Linch

What are people's favorite arguments/articles/essays trying to lay out the simplest possible case for AI risk/danger?

Every single argument for AI danger/risk/safety I’ve seen seems to overcomplicate things. Either they have too many extraneous details, or they appeal to overly complex analogies, or they seem to spend much of their time responding to insider debates.

I might want to try my hand at writing the simplest possible argument that is still rigorous and clear, without being trapped by common pitfalls. To do that, I want to quickly survey the field so I can learn from the best existing work as well as avoid the mistakes they make.

Jordan Arel
Max Tegmark explains it best, I think. Very clear and compelling, and you don't need any technical background to understand what he's saying. I believe it was his third (or maybe second) appearance on Lex Fridman's podcast where I first heard his strongest arguments, although those episodes are quite long with extraneous content; here is a version that is just the arguments. His solutions are somewhat specific, but overall his explanation is very good, I think:

Quick link-post highlighting Toner quoting Postrel's dynamist rules, plus her commentary. I really like the dynamist rules as part of the vision of the AGI future we should aim for:

“Postrel does describe five characteristics of ‘dynamist rules’:

As an overview, dynamist rules:

  1. Allow individuals (including groups of individuals) to act on their own knowledge.
  2. Apply to simple, generic units and allow them to combine in many different ways.
  3. Permit credible, understandable, enduring, and enforceable commitments.
  4. Protect criticism, competition, and feedback.
  5. Establish
... (read more)

At the NIH, Jay Bhattacharya did a lot to reduce animal experimentation and thus animal suffering. As far as ChatGPT can tell, this seems to have been completely ignored on the Effective Altruism Forum.

Marty Makary's FDA is also taking its own steps to reduce the need for animal testing in FDA approvals.

Is this simply because Effective Altruists don't like the Trump administration, and so can't take the win when MAHA puts contrarians in control of health policy who care more about reducing animal suffering and fight the replication crisis?

I don't think so.

Some less tribalistic hypotheses I can think of:

  • EAs concerned about animal welfare have typically focused on farmed animals, as opposed to animal testing, because of the much larger scale of the suffering.
  • EAs mostly haven't heard of it.
  • Maybe some EAs have heard about it, but they don't think it is worth the effort to write a post about it.

But tribalistic explanations could be a factor too (e.g. MAHA has anti-science vibes, and EAs like to stay on the pro-science side).

(This is probably not the most constructive feedback, but my initia... (read more)

I was a bit worried for the last three weeks that the Forum had gone quiet...

Then I came back after a five-day Ugandan internet blackout to find lots of fantastic front-page posts. Great job, everyone!!!

huw
Well the blackouts are the only way to ensure a free & fair election Nick :)

true man long live the King!

Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).

The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with ... (read more)

This is a solid opportunity for people who already live inside a domain and enjoy synthesis more than the spotlight. The pay reflects an expectation of taste and context, not just surface-level research. Helping shape guest selection and prep indirectly shapes the conversation, which matters given the reach of the podcast. For the right grad student or practitioner, this is leverage and learning at the same time.

Toby Tremlett🔹
+1 I would love an EA to be working on this. 
ASB

I’d be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast, James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space and I’ve been working with the MBDF team for a while now and am impressed by what they’re getting done.

People might be surprised to hear that I put ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially ... (read more)

This role sounds important precisely because the risk is no longer theoretical but also not fully contained. Cutting risk through consensus helps, but it does not replace strong governance and clear red lines. A Deputy Director who understands both the technical details and the incentives of bad actors can close gaps that policy statements cannot. If mirror bacteria still sit close enough to misuse, staffing quality becomes a real safety control, not just an admin decision.

I notice the 'guiding principles' in the introductory essay on effectivealtruism.org have been changed. It used to list: prioritisation, impartial altruism, open truthseeking, and a collaborative spirit. It now lists: scope sensitivity, impartiality, scout mindset, and recognition of trade-offs.  

As far as I'm aware, this change wasn't signalled. I understand lots of work has been recently done to improve the messaging on effectivealtruism.org -- which is great! -- but it feels a bit weird for 'guiding principles' to have been changed without any disc... (read more)


Hi @Agnes Stenlund 🔸 ,

Last week I had a discussion about the core principles with someone at our EA office in Amsterdam. She also liked "collaborative spirit". I remembered this discussion, decided to check again, and saw that you decided to add this to the intro essay. That's great! Shouldn't it then also be added to the "core principles" page? (Or am I overlooking something?)

James Herbert
Thanks for taking the time to provide this context! 
Lorenzo Buonanno🔸
Quick flag that the FAQ right below hasn't been updated.

Not sure how useful this is, and you mentioned you can't speak for the choice of principles, but sharing on a personal note that the collaborative spirit value was one of the things I appreciated most about EA when I first came across it. I think that infighting is a major reason why EA and many similar movements achieve far less than they could. I really like that EA is a place where people with very different beliefs who prioritise very different projects can collaborate productively, and I think it's a major reason for its success. It seems more unique/specific than acknowledging trade-offs, more important to have explicitly written as a core value to prevent the community from drifting away from it, and a great value proposition.

Like James, I also found it weird that what had become a canonical definition of EA was changed without a heads-up to its community. In any case, thank you so much for all your work, and I'm grateful that thanks to you it survives as a paragraph in the essay.