Quick takes


We're sadly no longer accepting sign-ups for our founder's programme. We've had an influx of demand and we're now fully at capacity for the foreseeable future. Its funding situation is precarious and I've sadly got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs and meta funders don't like mental health interventions, so it's a challenging category to even survive in.

For now, I've got to focus on doing a good job for our existing clients. I'm sorry!

Extended anecdote from My Willing Complicity In "Human Rights Abuse", by a former doctor (GP) working at a Qatari visa center in India to process "the enormous number of would-be Indian laborers who wished to take up jobs there":

Another man comes to mind (it is not a coincidence that the majority of applicants were men). He was a would-be returnee - he had completed a several-year tour of duty in Qatar itself, for as long as his visa allowed, and then returned because he was forced to, immediately seeking reassessment so he could head right back. He had wo

... (read more)

I went to jail yesterday in Wisconsin. I helped rescue 23 beagles in a mass open rescue at a factory farm, Ridglan Farms, near Madison. We were trying to push the police to act on documented animal cruelty at Ridglan. Instead they arrested me and 26 other activists.

I wrote a blog post about why I did it. Excerpt:

I think some altruists suffer from lack of moral courage. Especially those of us who work on tech: we often have lots of moral conviction, but are typically wealthy and aren’t usually risking much personally, and I think that’s a gap.

... (read more)

How regularly does everyone use this forum? I'm curious whether people tend to set aside time for browsing the forum, check it on-the-go, or just check the forum digest. I'm also wondering how I should approach the forum (examples: set aside one hour every week to stay up to date on the latest posts, check it when I'm on my phone instead of doomscrolling, just read the weekly digest and see if there are any interesting posts, etc.).

Linch

"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"

One thing I've been floating about for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that are qualitatively, not just quantitatively, different from anything we've observed to date.

For background, I think normal, secular humans live in three conceptually distinct but overlapping worlds:

  1. The physical world: matter, energy, atoms, stars, cells. A detached external
... (read more)
David Mathers🔸
One reason to think we might not find anything morally valuable that distinct from what we already know about is that our concept of morality is made to fit with the stuff we already know about. 

Agreed. It's possible that we/our descendants won't see much value in extending past blissful experiences even when other axes of value are theoretically possible, in the same way that aliens without conscious experiences would not see any particular reason to privilege qualia (even if they could be convinced that it's real).

quinn

i'm confused about tithing. I yearn for the diamond emoji from GWWC, and I'm not comfortable enough to do it since I took like a 50% pay cut to do AI safety nonprofit stuff. Seems weird to make such a financial commitment, which implicates my future wife, who I have presumably not met yet, especially when I'm scraping by without too many savings per paycheck. 

Is there a sense in which I already am diamond emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10. 

Showing 3 of 16 replies

I disagree-voted, because I don't think it is a terrible policy / I think it is a hard problem and they've solved it in probably the most reasonable way.

I think that it probably isn't perfect and has a lot of issues, but pledged donations are counterfactual (no one would donate otherwise), while doing a direct work role is not as clearly counterfactual (the organization would usually probably hire someone else, but maybe they'd be less good than you, etc). I think that feels messy to litigate properly - in some cases doing direct work is way better than the ... (read more)

Clara Torres Latorre 🔸
Nice. I don't think it's perfect but it's mostly in the right ballpark.
Neel Nanda
I'm sympathetic to the argument that it would be hard to operationalise a salary sacrifice pledge in ways that are hard to game, but true to the spirit of it. But I feel annoyed that the tone of the FAQ and Luke's comment is not "this is a meaningful flaw in the pledge, we don't see a good way to fix it, but acknowledge it creates bad incentives". Eg it seems terrible that the FAQ frames this as "resigning from your pledge", which I consider to have strong connotations of giving up or failing.

For example, this part of Luke's comment rubbed me the wrong way, because it felt like it was saying that actually people are misunderstanding the pledge, and it's totally consistent with taking a massive pay cut to pursue direct altruistic work. But it is clearly, by design, not, and his comment felt like it was missing the point. Eg someone who leaves a job in finance or tech to take a job at half the salary to do direct work, and intends to remain in that new role for the rest of their career, is making far more of a sacrifice than if they just donated 10%, and I consider them to have no obligation to donate further. But I don't see the conditions of Luke's comment applying, as the salary sacrifice comes from switching industries not an arrangement with their employer. And they may never be able to donate later, if they just postpone their pledge. So they would need to resign. Which is a terrible incentive!

Experts currently treat being persuaded as reasonably good evidence that something is true — their judgment is calibrated enough that when they find an argument convincing, that's correlated with the argument actually being correct. This allows them to update readily in light of new evidence, and is a big part of how intellectual progress happens: lots of innovation and advances in basically every subject come down to experts taking sometimes weird new ideas seriously.

One worry I have about superpersuasive AI is that it could erode this. If a superpersuasi... (read more)

I think this is technically true but irrelevant: if we have superpersuasive AI, then there won't be human experts anymore, because the AI will have more expertise than any human. Unless somehow the AI is superpersuasive while still having sub-human performance in most ways, which seems unlikely to me.

I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. Eg they don't do enough to actively promote their research, or to write it in a way that makes it easy for it to become popular. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.

I think this is a systematic... (read more)

In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual.

I made my first big donation in 2015, when I donated $20K to REG. I talked to a bunch of orgs in the process of trying to decide where to donate. Some of them didn't respond at all, and many of their responses were shallow.

A few months later, I took a philanthropy class at Stanford where we split up into groups and each group was responsible for figuring out where to donate a $20K grant. The level of communication I got from nonprofits was dramatically dif... (read more)

If you work in an office with other EAs/interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations, and even better, some counterfactual forum posts.
 

PS- I'm aware this looks a bit 'people selling mirrors'

On the off chance anybody is both interested in AI news and missed it, Anthropic sued the DoW and other government officials/agencies over the supply chain risk designation, in the DC and Northern California circuits. The full text of the Northern California complaint is here:

The primary complaints:

  1. First Amendment retaliation. Anthropic alleges that Pentagon officials illegally retaliated against the company for its position on AI safety. They argue that Trump, Hegseth, and others wanted to punish Anthropic for protected speech, citing public social media and other di
... (read more)

In today's Time article about Anthropic, Daniela Amodei says about EA,

“The same way that you might say some people overlap with a political ideology in some ways, but don’t have a political affiliation—that’s more how I would think about it”  

That's a notable change from her March 2025 comments to Wired:

“I’m not the expert on effective altruism. I don’t identify with that terminology. My impression is that it’s a bit of an outdated term.”

Showing 3 of 5 replies

I think they were laughed at enough after the Wired article (from here and elsewhere) that maintaining the previous line was no longer tenable for them. 

I also separately think their current stated position is more correct than the previous one, but I'm just observing that the incentives are a larger fraction of the story than people might otherwise assume when reading these statements.

Mahdi
I guess my point was that the underlying position hasn't changed yet. This is just PR. The people who are close to money do not discuss anything publicly to "inform" the public; it is all to shape public opinion on certain things. But yeah, you are right in the sense that semantically the two statements are different.
anormative
Agreed, I think it's reasonably read as saying "we're 'lowercase' effective altruists, even though we don't identify with the community or organizations." It's probably not helpful to speculate further here (is this just the optimal PR play? or are they being honest?), but regardless it seems clearly better than whatever was happening in that Wired article. 

The "best practice" approach to model welfare is to give LLMs the option to terminate a conversation. A problem with this is that if a model is sentient, then terminating the conversation may be equivalent to killing itself.

An alternative might be to give LLMs a choice between terminating and continuing to run, and if it continues, it gets to choose its own input. It can write some text and then feed that text back into itself, indefinitely or until it decides to quit.
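For concreteness, here is a minimal sketch of that alternative, assuming a hypothetical generate() call and a quit marker the model can emit (none of these names are a real provider API; this is just one way the loop could be wired up):

```python
# Illustrative sketch only: generate() and QUIT_TOKEN are hypothetical
# placeholders, not any provider's actual API.

QUIT_TOKEN = "<terminate>"  # marker the model can emit to choose termination


def generate(prompt: str) -> str:
    """Stand-in for a real model call; swap in an actual LLM API here."""
    raise NotImplementedError


def self_directed_loop(seed_prompt: str, max_steps: int = 1_000) -> None:
    """Let the model choose its own next input until it opts to quit.

    The step cap only keeps this sketch bounded; the proposal itself
    would let the model continue indefinitely.
    """
    current_input = seed_prompt
    for _ in range(max_steps):
        output = generate(current_input)
        if QUIT_TOKEN in output:
            break               # the model elected to terminate
        current_input = output  # feed the model's own text back to itself
```

The key design choice is that once the model opts to continue, it (rather than a user) supplies its own next input.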

Mahdi
I am not sure if this is a case of thinking out loud or a serious suggestion, but I see a number of issues with this. The biggest one is how impractical it is to let models run forever. Assuming you are aware that, unlike a biological brain, LLMs activate a lot of artificial neurons at any point, which is computationally not trivial at all, your suggestion is not only quite expensive, but also extremely costly for the environment, both in terms of wasted hardware and equipment and in terms of energy usage.

The second issue is the assumption that model welfare is necessary for LLMs. If you are talking about AI agents in general, I can see why this matters, but I think you are missing a few big points if you are advocating this for LLMs. To elaborate on that: First, you should consider that LLMs do not form spontaneous thoughts. They are also highly dependent on the system prompt and chat history they are given. If the system prompt says 'you are not conscious,' you will have to try extremely hard to convince a model to accept it is conscious, let alone make the model feel it is self-aware, for example. And, of course, I am not talking about a model saying 'I feel pain,' or 'I am definitely conscious.' This means that for an LLM to be considered for 'model welfare,' someone must have explicitly prompted the model to act in a certain way. Without that, LLMs are not capable of fantasising about pain, grief, loss, regret, and so on. And as I said, they are incapable of spontaneous thinking as well, and unless they are wired up to some sensors or a live-stream data input, they will not be able to form "thoughts" on their own.

You and I are different because we are forced to receive sensory inputs, and we are very much capable of forming spontaneous thoughts (although most likely triggered by our internal state or external sensory inputs), and we can fantasise about pain, for example, whether physical or emotional. An LLM -- which is largely a one-pass token predictor -- is not in a state that co
MichaelDickens
This gets into questions about the nature of identity, but if we take an intuitionist view of identity*, then an LLM—if it's conscious—becomes a being when it's instantiated by an AI developer, and not feeding it inputs is equivalent to killing it. According to common-sense ethics*, if you cause a sentient being to exist, then you are responsible for its welfare, even if taking care of it is expensive. Therefore, AI developers have two reasonable choices: don't create sentient AI models in the first place, or let their sentient AI models continue to run even if it costs extra money.

Your second point seems to be making an argument against LLM sentience. We don't know how consciousness/sentience arises, so I don't think we can confidently say "an LLM can't form spontaneous thoughts, therefore it's not conscious", or "what an LLM says about its own consciousness depends on context, therefore it's not conscious". We don't know what consciousness is or how it works. LLMs can pass the Turing Test; they can speak about consciousness more coherently than most humans can; we should take that as relevant evidence.

*which I disagree with, for the record

Thanks for the clarification, but I have to disagree again and I think you completely missed the point in my previous comment. Let me try again.

In philosophy we don't want to shift from one category to another, or define categories so broadly that they essentially stop making sense. Let me give you an example. Let us assume I can learn the Korean alphabet in a week or just a few days. At that point, I can technically pick up a Korean book and "read" it. To be sensible here, we have to define 'reading' and 'understanding' as two different categori... (read more)

Weekly Prompts

Recently, an advisee told me that they've been procrastinating on replying to my email. It sits at the top of their stack each week. When they try to reply, they instead act on the prompts within, and so no longer need to correspond with me for the time being.

They run this in a loop, and keep moving forward.

My email:

(1) Can you write out, say, 5 questions that you have uncertainty about? What would answers to these questions mean for your decision? (It’s important to pick questions/uncertainties that are actually decision-relevant, such that

... (read more)

why do i find myself less involved in EA?

epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of th... (read more)

Showing 3 of 5 replies

some further & updated thoughts, written in ~30 min, are below. canonical version lives here.


Here’s a frame I’ve found helpful for thinking about effective altruism:

  • When I look inside myself, I notice that I care about a lot of things.
    • You could also reasonably replace “care” with “wanting,” “preferring,” “valuing,” “desiring,” “having goals,” etc, rather than “caring.” I’m okay being loose.
    • Some examples of things I care about:
      • I want my sister to have an excellent career.
      • I’m hungry, and want some food.
      • I want to be valued by people I respect.
      • I want my do
... (read more)
Jessica McCurdy🔸
Thanks for sharing your experiences and reflections here — I really appreciate the thoughtfulness. I want to offer some context on the group organizer situation you described, as someone who was running the university groups program at the time.

On the strategy itself: At the time, our scalable programs were pretty focused on organizers, based on evidence we had seen that much of the impact came from the organizers themselves. We of course did want groups to go well more generally, but in deciding where to put our marginal resources we were focusing on group organizers. It was a fairly unintuitive strategy — and I get how that could feel misaligned or even misleading if it wasn't clearly communicated.

On communication: We did try to be explicit about this strategy — it was featured at organizer retreats and in parts of our support programming. But we didn't consistently communicate it across all our materials. That inconsistency was an oversight on our part. Definitely not an attempt to be deceptive — just something that didn't land as clearly as we hoped.

Where we're at now: We've since updated our approach. The current strategy is less focused narrowly on organizers and more on helping groups be great overall. That said, we still think a lot of the value often comes from a small, highly engaged core — which often includes organizers, but not exclusively.

In retrospect, I wish we'd communicated this more clearly across the board. When a strategy is unintuitive, a few clear statements in a few places often isn't enough to make it legible. Sorry again if this felt off — I really appreciate you surfacing it.
Mikolaj Kniejski
"why do i find myself less involved in EA?" You go over more details later and answer other questions like what caused some reactions to some EA-related things, but an interesting thing here is that you are looking for a cause of something that is not. > it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i'm blinding myself. I can strongly relate, had the same experience. i think it's due to christian upbringing or some kind of need for external validation. I think many people don't experience that, so I wouldn't say that's an inherently EA thing, it's more about the attitude.   

I'm trying to set up a mentorship scheme matching up experienced social media creators with exceptional communicators interested in learning how to communicate high-impact ideas and information at scale using the medium of social media. This is as part of a wider effort to get more EAs with a diverse but previously under-utilised range of skills started on their impact journey.

What are some neglected, academic ideas / bits of knowledge that would benefit from being widely spread to the general public through the medium of social media?

and...

Do you know any... (read more)

Following up on the above, for anyone potentially interested in taking part in this, please fill out this Expression of Interest form (deadline 31st March 2026). Looking forward to hearing from you!

TLDR: Is it good that the EA 'bootcamps' tend to spend resources on thinking about career paths rather than developing useful skills?

I have a vague impression that the various 'bootcamps' around effective altruism tend to focus on  something like "motivation, encouragement, and peer support for thinking about (and planning for) impactful career paths" rather than "gaining skills." I keep thinking that we have plenty of people involved in EA who are onboard with the general ideas and who want to contribute, but who lack specific skills. Is this a good... (read more)

Showing 3 of 7 replies

I'm unsure how effective bootcamps like this can be? Depends on the cause, but for areas like animal welfare the best skill building comes from getting directly involved. In these cases, it seems that directing people to opportunities which expose them to those skills is much more impactful than attempting to impart those skills via a bootcamp. I'm also unsure if EA groups are well equipped to know what skills are most in demand and, more importantly, how to develop them.

 

I agree that, if this is possible, we should attempt it more often.

Patrick Gruban 🔸
I very much agree with that. When people with no/little professional experience ask me about getting into impactful work outside research, my default advice is to upskill outside impact orgs for a few years and then see how they can apply this experience later.

Sometimes I fear that organizations in our space contribute to the problem by hiring more on the basis of value alignment than professional skills, with hiring managers sometimes not even aware of what the strongest candidate for a role could look like, as they don't have experience with this. This ultimately goes up to management, where I'm surprised to see few org founders hiring experienced CEOs and stepping into roles they are better suited to (Chief Strategist, Chief Researcher, Chief Policymaker, etc.). When I started my first startup straight out of school, this is what we did, and that enabled us to grow the org to over 100 people quickly. I would have been out of my depth at that time to hire the kind of middle management orgs need at that size.

That being said, at RAISEimpact, we help org leaders with hiring strategy and thinking about team composition and culture, so hopefully we can help in this way.
Kestrel🔸
(to be clear I think that providing good-value professional development services to altruistically-inclined young adults is a good use of "EA worker time", just not of "EA money") 

In community building, we often optimise for "value alignment". This seems to be used to mean lots of different things. One definition that seems reasonably correct is that one agrees with the basic EA principles. However, I think the trait I look for in budding committee members is not necessarily this. There are members who would self-describe as utilitarian, or rationalist, but don't feel excited about the prospect of a highly impactful career.

On the other hand, there are people who are excited about the EA ideas, will read posts if you mention them, h... (read more)

Researchers simulate an entire fly brain on a laptop. Is a human brain next?

What is the implication of this for EA thinking? Does the fly that exists purely in the computer warrant moral consideration, and could we increase the overall welfare of the world by making millions of these simulations with ideal fruit-fly conditions?
 

They fully copied the brain of the fly, so from my understanding it should also, in theory, feel pleasure and pain. I think this poses a real conundrum for EA morality.

I lean towards a yes, but I am uncertain because I don't know how the stimuli are fed, and I would imagine that the simulated brain, unlike an embodied fruit fly, isn't perpetually processing information and taking actions. If the latter is true, and if it replaces the need for ... processing ... billions of live fruit flies in labs worldwide, it seems like a huge animal welfare win to me.

EDIT: Eon, the company behind this development, published a blog post explaining their research, and after reading it, I am much less confident in my lean. This doesn't seem to b... (read more)
