I've seen AI-based animal communication technologies starting to be involved in some EA events / discussions (e.g. https://www.earthspecies.org/ ). I'm worried these initiatives may be actively negative, and I'm wondering if anyone has articulated (or will articulate) a stronger defense of why they're good?
The high-level argument I've heard is that communicating with animals will make humans more empathetic towards them. But I don't see why this would be the most likely outcome:
Someone should write a good, linkable online resource describing the concept of the long reflection. It's very strange that there isn't a simple post/webpage that I can link to that gives a good, medium-depth description.
Currently the best things are probably the EA Forum topic page and this list of quotes.
There's now also the related concept of viatopia, which may be a better concept/term. I'm not sure what the very best links on that are, but this one seems like a good starting point.
Here's my current four-point argument for AI risk/danger from misaligned AIs.
I think your list is really great! As someone trying to understand misaligned AI better, these are my arguments:
I’m pro-nuclear, but the commonly used EA framing of “nuclear is overregulated” seems net negative more often than not. Clearer Thinking’s new nuclear episode is one of the more epistemically rigorous discussions I’ve heard in EA-adjacent spaces (and Founders Pledge has also done nuanced work).
Nuclear is worth pursuing, but we should argue for it with clear eyes.
Ah, now I see - thanks for clarifying. Yes, historically I don't know how much each setback to nuclear mattered. I can see that constantly changing regulation, for example during builds (which I think Isabelle actually mentioned), could pose a significant hurdle for continuing build-out. Here I would defer to other experts like you and Isabelle.
Porting this over to "we might over-regulate AI too", I am realizing it is actually unclear to me whether people who use the "nuclear is over-regulated" example mean the literal same "historical" thing could ...
U.S. politics should be a main focus of U.S. EAs right now. In the past year alone, every major EA cause area has been greatly hurt or bottlenecked by Trump. $40 billion in global health and international development funding was lost when USAID shut down, which some researchers project could lead to 14 million more deaths by 2030. Trump has signed an executive order that aims to block states from creating their own AI regulations, and has allowed our most powerful chips to be exported to China. Trump has withdrawn funding from, and U.S. support for, internatio...
More good news! The Norwegian meat industry has announced that it will stop using fast-growing chicken breeds by the end of 2027. These breeds are a source of immense suffering due to the toll such rapid growth takes on the animals' bodies.
Norway will be the first country to stop using them.
More here: https://animainternational.org/blog/norway-ends-fast-growing-chickens
If you:
You can use this HMRC link to declare how much you've donated (excluding Gift Aid) and claim back the difference. More details in this evergreen post.
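As a rough illustration of the arithmetic (my own sketch, not from the linked post, and not tax advice; it assumes the standard 20% basic and 40% higher rates):

```python
# Hypothetical Gift Aid reclaim arithmetic for a higher-rate (40%) UK taxpayer.
# The rates and figures are assumptions for illustration; check current HMRC
# guidance before relying on this.

net_donation = 800.00                 # what you actually gave the charity
gross_donation = net_donation / 0.80  # charity reclaims basic-rate Gift Aid: £1,000
reclaimable = gross_donation * (0.40 - 0.20)  # the "difference" you can claim: £200

print(f"Gross donation: £{gross_donation:,.2f}")
print(f"Reclaimable higher-rate relief: £{reclaimable:,.2f}")
```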
Practical tip
I set my Giving What We Can pledge tracking to run from 6 April to 5 April, wh...
The European Parliament recently submitted a parliamentary question on wild animal welfare! The question focuses on human-caused wild animal suffering, and such questions generally don't have policy implications - but still, I was surprised to see this topic being taken up in policy discourse.
https://www.europarl.europa.eu/doceo/document/E-10-2025-004965_EN.html
Are there any signs of governments beginning to do serious planning for the need for Universal Basic Income (UBI) or a negative income tax? It feels like there's a real lack of urgency/rigour in policy engagement within government circles. The concept has obviously had its high-level advocates à la Altman, but it still feels incredibly distant as any form of reality.
Meanwhile the impact is being seen in job markets right now - in the UK, graduate job openings have plummeted in the last 12 months. People I know are having a hard enough time finding jobs w...
Thanks for your take - I always appreciate slightly less doom and gloom perspectives.
On your point that there's not an imminent unemployment crisis and that the impacts we are seeing may be due to other factors: firstly, I think it's inevitable that the direct causes of disruption to the labour market are going to be multifaceted, given the current trajectory of global markets (de-coupling, de-globalisation, etc.), whatever happens moving forward. In the UK specifically, part of the issue is that the minimum wage has been increased, making employers less inclined to hire gra...
Technical Alignment Research Accelerator (TARA) applications close today!
Last chance to apply to join the 14-week program (based on the ARENA curriculum), taught remotely and run in person, designed to accelerate APAC talent towards meaningful technical AI safety research.
TARA is built for you to learn around full-time work or study by attending meetings in your home city on Saturdays and doing independent study throughout the week. Finish the program with a project to add to your portfolio, key technical AI safety skills, and connections across APAC.
See this ...
A thought I'm super sceptical of, probably highly intractable, and which I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
Changing the simulation hypothesis from a simulation of a world full of people to a simulation of an individual throws the simulation argument out the window. Here is how Sean Carroll articulates the first three steps of the simulation argument:
I made this simple high-level diagram of critical longtermist "root factors", "ultimate scenarios", and "ultimate outcomes", focusing on the impact of AI during the TAI transition.
This involved some adjustments to standard longtermist language.
"Accident Risk" -> "AI Takeover
"Misuse Risk" -> "Human-Caused Catastrophe"
"Systemic Risk" -> This is spit up into a few modules, focusing on "Long-term Lock-in", which I assume is the main threat.
You can read and interact with it here, where there are (AI-generated) descriptions and pages for t...
Good points!
>Would love to see something like this for charity ranking (if it isn't already somewhere on the site).
I could definitely see this being done in the future.
>Don't you need a philosophy axioms layer between outputs and outcomes?
I'm nervous that this could get overwhelming quickly. I like the idea of starting with things that are clearly decision-relevant to the particular audience the website has, then expanding from there. I'm open to ideas on better / more scalable approaches!
>"governance" being a subcomponent when it's arguably...
According to someone I chatted to at a party (not normally the optimal way to identify top new cause areas!), fungi might be a worrying new source of pandemics because of climate change.
Apparently thermal barriers have historically prevented fungi from infecting humans, but as fungi adapt to higher temperatures, they are becoming better able to overcome those barriers. This article has a bit more on this:
https://theecologist.org/2026/jan/06/age-fungi
Purportedly, this is even scarier than a pathogen you can catch from people, because you can catch th...
I'm not able to comment on CG's reaction to the report, as those discussions are confidential.
What I can say is that they are still exploring this area internally, given that they recently commissioned us to do more work related to fungal diseases (see here).
I’m not aware of any specific grantmaking decisions or commitments at this stage.
What are people's favorite arguments/articles/essays trying to lay out the simplest possible case for AI risk/danger?
Every single argument for AI danger/risk/safety I’ve seen seems to overcomplicate things. Either they have too many extraneous details, or they appeal to overly complex analogies, or they seem to spend much of their time responding to insider debates.
I might want to try my hand at writing the simplest possible argument that is still rigorous and clear, without being trapped by common pitfalls. To do that, I want to quickly survey the field so I can learn from the best existing work as well as avoid the mistakes they make.
Quick link-post highlighting Toner quoting Postrel’s dynamist rules + her commentary. I really like the dynamist rules as a part of the vision of the AGI future we should aim for:
“Postrel does describe five characteristics of ‘dynamist rules’:
...As an overview, dynamist rules:
- Allow individuals (including groups of individuals) to act on their own knowledge.
- Apply to simple, generic units and allow them to combine in many different ways.
- Permit credible, understandable, enduring, and enforceable commitments.
- Protect criticism, competition, and feedback.
- Establish
At the NIH, Jay Bhattacharya did a lot to reduce animal experimentation and thus reduce animal suffering. As far as ChatGPT can tell, this seems to be completely ignored by the Effective Altruism forum.
Marty Makary's FDA is also taking its own steps to reduce the need for animal testing in FDA approvals.
Is this simply because Effective Altruists don't like the Trump administration, so they can't take the win of MAHA bringing contrarians into control of health policy who do things like caring more about reducing animal suffering and fighting the replication crisis?
I don't think so.
Some less tribalistic hypotheses I can think of:
But tribalistic explanations could be a factor too (e.g. MAHA has anti-science vibes, and EAs like to stay on the pro-science side).
(This is probably not the most constructive feedback, but my initia...
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with ...
This is a solid opportunity for people who already live inside a domain and enjoy synthesis more than the spotlight. The pay reflects the expectation of taste and context, not just surface-level research. Helping shape guest selection and prep indirectly shapes the conversation, which matters given the reach of the podcast. For the right grad student or practitioner, this is leverage and learning at the same time.
I’d be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast; James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space, and I’ve been working with the MBDF team for a while now and am impressed by what they’re getting done.
People might be surprised to hear that I put ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially ...
This role sounds important precisely because the risk is no longer theoretical but also not fully contained. Cutting risk through consensus helps, but it does not replace strong governance and clear red lines. A Deputy Director who understands both the technical details and the incentives of bad actors can close gaps that policy statements cannot. If mirror bacteria still sit close enough to misuse, staffing quality becomes a real safety control, not just an admin decision.
I notice the 'guiding principles' in the introductory essay on effectivealtruism.org have been changed. It used to list: prioritisation, impartial altruism, open truthseeking, and a collaborative spirit. It now lists: scope sensitivity, impartiality, scout mindset, and recognition of trade-offs.
As far as I'm aware, this change wasn't signalled. I understand lots of work has been recently done to improve the messaging on effectivealtruism.org -- which is great! -- but it feels a bit weird for 'guiding principles' to have been changed without any disc...
Hi @Agnes Stenlund 🔸,
Last week I had a discussion about the core principles with someone at our EA office in Amsterdam. She also liked “collaborative spirit”. I remembered this discussion, decided to check again, and saw that you decided to add this to the intro essay. That’s great! Shouldn’t it then also be added on the “core principles” page? (Or am I overlooking something?)