I lead Effective Altruism Lund in southern Sweden, while wrapping up my M.Sc. in Engineering Physics specializing in machine learning. I'm a social team player who likes high ceilings and big picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare.
My interests outside of EA, in hieroglyphs: 🎸🧙🏼♂️🌐💪🏼🎉📚👾🎮✍🏼🛹
To the extent that harshness is an EA norm, I think it's inherited from rationalist culture. In my experience with spaces like LessWrong, quite jarring critiques are fairly normal even for trivial things (e.g. “that argument is stupid”). There, bluntness is viewed as efficiency, getting bad ideas off the table faster.
EA spaces are optimized for a different goal, and tone matters for that goal. We need people to feel welcomed, encouraged, and inspired to contribute; not like they're auditioning for a spot on a debate team. A good measure of how well we're doing on this is how afraid people are of posting on the forum.
I haven’t read titotal’s post, so I won’t comment on that case, but I’ve definitely noticed the broader pattern Alfredo is pointing out. And I think we should be intentional about whether it serves the kind of community we want to build.
I'm interested in hearing more about the cases you found for and against EA ideas/arguments applying without utilitarianism. I'm personally very much a consequentialist but not necessarily fully utilitarian, so I'm curious both for myself and as a community builder. I'm not a philosopher, so my footing is probably much less certain than yours.
The first quote you mention sounds more like a dog whistle to me. I actually think it's great if we can "weaponize capitalist engines" against the world's most pressing problems. But if you hate capitalism, it sounds insidious.
The rest I agree is uncharitable. Like, surely you wouldn't come out of the shallow pond feeling moral guilt; you'd be ecstatic that you just saved a child! To me, Singer's thought experiment always implied I should feel the same way about donations.
EA’s goal is impact, not growth for its own sake. Because cost-effectiveness can vary by 100x or more, shifting one person’s career from a typical path to a highly impactful one is equivalent to adding a hundred contributors. I agree with the EA stance that the former is often more feasible.
This doesn’t fully address why we maintain a soft tone outwardly, but it does imply we could afford to be a bit less soft inwardly. I predict that SMA will surpass EA in numbers, while EA will be ahead of SMA in terms of impact.
As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same.
Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.
Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet, don't underestimate the power and efficiency of utilizing your intimate knowledge of your group's immediate requirements.
Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.
I'm sad to hear that you'd feel manipulated by my reply to the QALY-doubting response, but I'm very happy and thankful to get the feedback! We do want to show that EA has some useful tools and conclusions, while also being honest and open about what's still being worked on. I'll take this to heart.
I feel the need to clarify that none of these responses are meant to be "sales-y" or to trick people into joining a movement that doesn't align with their values. My reply was based more on the idea that we need more skeptics. If they have epistemic (as opposed to ethical) objections, I think it's particularly important to signal that they're invited. My condolences for having gotten such awful advice from whatever organization it was, but that's not how we do things at EA Lund.
For a more realistic example, I talked to one person who said they'd split their focus between homelessness in their own city and homelessness in Rwanda, because they felt it would be unfair not to divide the resources. They're not doing the most good; they simply find it more ethical to divide their resources.
So I think your professor's description is good, but I'm not sure it helps discuss egalitarianism/prioritarianism with laymen in their terms. When I say I'd give everything to Rwanda, I'm answering "what does the most good?" and not "what's the most fair/just?" Nonetheless I'll consider raising this response next time the objection comes up.
Epistemic status: I'm a community builder with a technical background and a surface-level understanding of alignment techniques from BlueDot.
This post is well-written and the core takeaway is important. I'd add one caveat: starting from weak priors should increase our urgency to seek out evidence, not delay action. Once there's a reasonable chance that something morally salient is there, I worry we'll collectively shrug, defaulting to "just a tool" or retreating behind epistemic modesty. We can't let epistemic caution turn into neglect.
One concrete intervention is Forethought’s proposal that future LLMs be able to end conversations they're uncomfortable with. I find this a plausible and robust way to fulfill potential preferences. We need more proposals like that.
On another note, please consider your use of adjectives.