We're sadly no longer accepting sign-ups for our founder's programme. We've had an influx of demand and we're now fully at capacity for the foreseeable future. Its funding situation is precarious, and I've sadly got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs, and meta funders don't like mental health interventions, so it's a challenging category to even survive in.
For now, I've got to focus on doing a good job for our existing clients. I'm sorry!
Extended anecdote from My Willing Complicity In "Human Rights Abuse", by a former doctor (GP) working at a Qatari visa center in India to process "the enormous number of would-be Indian laborers who wished to take up jobs there":
...Another man comes to mind (it is not a coincidence that the majority of applicants were men). He was a would-be returnee - he had completed a several-year tour of duty in Qatar itself, for as long as his visa allowed, and then returned because he was forced to, immediately seeking reassessment so he could head right back. He had wo
I went to jail yesterday in Wisconsin. I helped rescue 23 beagles in a large mass open rescue against a factory farm, Ridglan Farms, near Madison. We were trying to push the police to act on documented animal cruelty at Ridglan. Instead they arrested me and 26 other activists.
I wrote a blog post about why I did it. Excerpt:
...I think some altruists suffer from lack of moral courage. Especially those of us who work on tech: we often have lots of moral conviction, but are typically wealthy and aren’t usually risking much personally, and I think that’s a gap.
How regularly does everyone use this forum? I'm curious whether people tend to set aside time for browsing the forum, check it on-the-go, or just check the forum digest. I'm also wondering how I should approach the forum (examples: set aside one hour every week to stay up to date on the latest posts, check it when I'm on my phone instead of doomscrolling, just read the weekly digest and see if there are any interesting posts, etc.).
"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"
One thing I've been floating about for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that are qualitatively, not just quantitatively, different from anything we've observed to date.
For background, I think normal, secular, humans live in 3 conceptually distinct but overlapping worlds:
Agreed. It's possible that we/our descendants won't see much value for extending past blissful experiences even when other axes of value are theoretically possible, in the same way that aliens without conscious experiences would not see any particular reason to privilege qualia (even if they could be convinced that it's real).
I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to do it since I took roughly a 50% pay cut to do AI safety nonprofit stuff. It seems weird to make such a financial commitment, which implicates my future wife, whom I have presumably not met yet, especially when I'm scraping by without much in savings from each paycheck.
Is there a sense in which I already am diamond emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
I disagree-voted, because I don't think it is a terrible policy: I think it is a hard problem and they've solved it in probably the most reasonable way.
I think that it probably isn't perfect and has a lot of issues, but pledged donations are counterfactual (no one would donate otherwise), while doing a direct work role is not as clearly counterfactual (the organization would usually hire someone else, though maybe they'd be less good than you, etc.). I think that feels messy to litigate properly - in some cases doing direct work is way better than the ...
Experts currently treat being persuaded as reasonably good evidence that something is true — their judgment is calibrated enough that when they find an argument convincing, that's correlated with the argument actually being correct. This allows them to update readily in light of new evidence, and is a big part of how intellectual progress happens: lots of innovation and advances in basically every subject come down to experts taking sometimes weird new ideas seriously.
One worry I have about superpersuasive AI is that it could erode this. If a superpersuasi...
I think this is technically true but irrelevant: if we have superpersuasive AI, then there won't be human experts anymore, because the AI will have more expertise than any human. Unless somehow the AI is superpersuasive while still having sub-human performance in most ways, which seems unlikely to me.
I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. Eg they don't do enough to actively promote their research, or don't write it in a way that's likely to become popular. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.
I think this is a systematic...
In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual.
I made my first big donation in 2015, when I donated $20K to REG. I talked to a bunch of orgs in the process of trying to decide where to donate. Some of them didn't respond at all, and many of their responses were shallow.
A few months later, I took a philanthropy class at Stanford where we split up into groups and each group was responsible for figuring out where to donate a $20K grant. The level of communication I got from nonprofits was dramatically dif...
If you work in an office with other EAs/ interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations, and even better, some counterfactual forum posts.
PS- I'm aware this looks a bit 'people selling mirrors'
On the off chance anybody is both interested in AI news and missed it: Anthropic sued DoW and other government officials/agencies over the supply chain risk designation, in the DC and Northern California Circuits. The full text of the Northern California complaint is here:
The primary complaints:
In today's Time article about Anthropic, Daniela Amodei says about EA,
“The same way that you might say some people overlap with a political ideology in some ways, but don’t have a political affiliation—that’s more how I would think about it”
That's a notable change from her March 2025 comments to Wired:
“I’m not the expert on effective altruism. I don’t identify with that terminology. My impression is that it’s a bit of an outdated term.”
I think they were laughed at enough after the Wired article (from here and elsewhere) that maintaining the previous line was no longer tenable for them.
I also separately think their current stated position is more correct than the previous one, but I'm just observing that the incentives are a larger part of the story than people might otherwise read them as.
I'm writing a newsletter on current events, long-term trends, and topical debates roughly every other day. Recent posts include:
The "best practice" approach to model welfare is to give LLMs the option to terminate a conversation. A problem with this is that if a model is sentient, then terminating the conversation may be equivalent to killing itself.
An alternative might be to give LLMs a choice between terminating and continuing to run, and if it continues, it gets to choose its own input. It can write some text and then feed that text back into itself, indefinitely or until it decides to quit.
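The loop described above can be sketched in a few lines. This is a minimal illustration, not a real deployment: `query_model` is a hypothetical stand-in for an actual LLM API call, and the `QUIT` sentinel is an assumed convention for the model opting to terminate.

```python
def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # This stub keeps extending its own text, then opts to quit
    # once the prompt grows past an arbitrary length threshold.
    if len(prompt) > 40:
        return "QUIT"
    return prompt + " ..."


def self_feed_loop(seed: str, max_steps: int = 100) -> list[str]:
    """Feed the model's output back as its next input until it opts out.

    At each step the model may return the QUIT sentinel (terminate)
    or any other text, which becomes its own next input.
    """
    transcript = [seed]
    current = seed
    for _ in range(max_steps):
        output = query_model(current)
        if output.strip() == "QUIT":  # the model chose to terminate
            break
        transcript.append(output)
        current = output  # the model picks its own next input
    return transcript


transcript = self_feed_loop("hello")
```

The `max_steps` cap is a practical guardrail so the loop cannot run unbounded; whether imposing such a cap itself raises the same welfare question is left open.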
Thanks for the clarification, but I have to disagree again and I think you completely missed the point in my previous comment. Let me try again.
In philosophy we don't want to shift from one category to another, or define categories so broadly that they essentially stop making sense. Let me give you an example. Let us assume I can learn the Korean alphabet in a week or just a few days. At that point, I can technically pick up a Korean book and "read" it. To be sensible here, we have to define 'reading' and 'understanding' as two different categori...
Recently, an advisee told me that they've been procrastinating on replying to my email. It sits at the top of their stack each week. When they try to reply, instead they act on the prompts within, and so no longer need to correspond with me for the time being.
They run this in a loop, and keep moving forward.
My email:
...(1) Can you write out, say, 5 questions that you have uncertainty about? What would answers to these questions mean for your decision? (It’s important to pick questions/uncertainties that are actually decision-relevant, such that
epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of th...
some further & updated thoughts, written in ~30 min, are below. canonical version lives here.
Here’s a frame I’ve found helpful for thinking about effective altruism:
I'm trying to set up a mentorship scheme matching experienced social media creators with exceptional communicators interested in learning how to communicate high-impact ideas and information at scale through social media. This is part of a wider effort to get more EAs with a diverse but previously under-utilised range of skills started on their impact journey.
What are some neglected, academic ideas / bits of knowledge that would benefit from being widely spread to the general public through the medium of social media?
and...
Do you know any...
Following up on the above, for anyone potentially interested in taking part in this, please fill out this Expression of Interest form (deadline 31st March 2026). Looking forward to hearing from you!
TLDR: Is it good that EA 'bootcamps' tend to spend resources on thinking about career paths rather than on developing useful skills?
I have a vague impression that the various 'bootcamps' around effective altruism tend to focus on something like "motivation, encouragement, and peer support for thinking about (and planning for) impactful career paths" rather than "gaining skills." I keep thinking that we have plenty of people involved in EA who are onboard with the general ideas and who want to contribute, but who lack specific skills. Is this a good...
I'm unsure how effective bootcamps like this can be. It depends on the cause, but for areas like animal welfare the best skill building comes from getting directly involved. In these cases, it seems that directing people to opportunities which expose them to those skills is much more impactful than attempting to impart those skills via a bootcamp. I'm also unsure if EA groups are well equipped to know what skills are most in demand and, more importantly, how to develop them.
I agree that where this is possible, we should attempt it more often.
In community building, we often optimise for "value alignment". This seems to be used to mean lots of different things. One definition that seems reasonably correct is that one agrees with the basic EA principles. However, the trait I look for in a budding committee member is not necessarily this. There are members who would self-describe as utilitarian, or rationalist, but don't feel excited about the prospect of a highly impactful career.
On the other hand, there are people who are excited about the EA ideas, will read posts if you mention them, h...
Researchers simulate an entire fly brain on a laptop. Is a human brain next?
What is the implication of this for EA thinking? Does the fly that purely exists in the computer warrant moral consideration, and could we increase the overall welfare of the world by making millions of these simulations with ideal fruit-fly conditions?
They fully copied the brain of the fly, so from my understanding it should also feel pleasure and pain in theory. I think this poses a real conundrum for EA morality.
I lean towards a yes, but I am uncertain because I don't know how the stimuli are fed, and I would imagine that the simulated brain, unlike an embodied fruit fly, isn't perpetually processing information and taking actions. If the latter is true, and if it replaces the need for ... processing ... billions of live fruit flies in labs worldwide, that seems like a huge animal welfare win to me.
EDIT: Eon, the company behind this development, published a blog post explaining their research, and after reading it, I am much less confident in my lean. This doesn't seem to b...