Effective Altruists should consider religions as possible conduits for effective altruism, and religious faith as a means of increasing their own rationality.[1]

To this end, I propose an experiment in which non-religious effective altruists are randomly selected to adopt the tenets of different faiths for a significant period (90 days to a year). The analytically ideal experiment would randomly assign effective altruists to different faiths, leaving some non-religious effective altruists as a control group. Indeed, in the analytically ideal experiment, even religious effective altruists could be randomly selected to change their faith or to adopt no faith. Clearly, the ethical implications of compelled assignment almost certainly eliminate this as an option.

At the risk of introducing selection bias, an ethical version of the experiment could instead ask non-religious effective altruists to volunteer to be randomly assigned to different religious faiths, or to no religious faith at all. Volunteers would have to be willing to be assigned to beliefs they would initially find incorrect or even abhorrent. Ideally, an effective altruist randomly assigned to a faith would do their best to sincerely practice that faith and adopt its tenets for 90 days or more.

Different religions would then be measured on their responsiveness to effective altruism when their newfound believers raise it. To what extent does the religion find the idea compatible with its beliefs? How does this vary with the gender, race, or other characteristics of the newfound believer? To what extent do religious faiths try to redirect altruistic efforts into social insurance for their own believers? To what extent does the religion try to redirect EA toward expanding the religion itself or toward pet political causes? To what extent do religious faiths then incorporate EA beliefs into their own charitable works?

Potential religions include: Catholicism, Sunni Islam, Progressive Christianity (and/or Quakerism), Shi'a Islam, Eastern Orthodoxy, Orthodox Judaism, the Latter-day Saints, Conservative Protestantism, Buddhism, Reform Judaism, Hinduism, Liberal Protestantism, and others, with no religion serving as the control. The number of effective altruists willing to volunteer clearly determines how many religions can be included as treatment arms.
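As a concrete illustration of the design, here is a minimal Python sketch of how volunteers might be randomly dealt across treatment arms, with no religion as the control. Everything in it (the arm list, the `assign_volunteers` helper, the fixed seed) is a hypothetical illustration of simple balanced randomization, not a prescribed protocol.

```python
import random

# Hypothetical treatment arms drawn from the religions listed above;
# the final list would depend on how many volunteers come forward.
ARMS = [
    "Catholicism", "Sunni Islam", "Progressive Christianity", "Shi'a Islam",
    "Eastern Orthodoxy", "Orthodox Judaism", "Latter-day Saints",
    "Conservative Protestantism", "Buddhism", "Reform Judaism",
    "Hinduism", "Liberal Protestantism",
    "Control (no religion)",
]

def assign_volunteers(volunteers, arms=ARMS, seed=2024):
    """Randomly assign each volunteer to one arm, keeping arm sizes
    as balanced as the number of volunteers allows."""
    rng = random.Random(seed)   # fixed seed so the assignment is auditable/reproducible
    shuffled = list(volunteers)
    rng.shuffle(shuffled)       # random order breaks any link to sign-up order
    # Dealing volunteers round-robin across the arms yields near-equal group sizes.
    return {person: arms[i % len(arms)] for i, person in enumerate(shuffled)}

if __name__ == "__main__":
    volunteers = [f"volunteer_{n:02d}" for n in range(1, 27)]  # hypothetical sample
    for person, arm in sorted(assign_volunteers(volunteers).items()):
        print(f"{person} -> {arm}")
```

A real study would likely stratify assignment on volunteer characteristics (e.g. gender, prior exposure to religion) rather than use simple randomization, but the sketch conveys the basic design: every volunteer has an equal chance of landing in any arm, including the control.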

There are reasons to believe this experiment could prove revolutionary, and it would certainly prove highly informative.

First, I introduce my main assumption, which also serves as my idealized, stylized understanding of effective altruism: Effective Altruism means that one does the best one can to make as much income in an ethically responsible way, one lives far below one's means, and one donates to causes that offer the greatest clear (likely measurable) benefit to humanity.

I am not part of effective altruism, but I find this idea very useful as a standard of very good behavior. It is useful partly because I fall short of it and can see how I fall short. I admire this exacting standard, and I admire it precisely for its disregard of my desire to feel good about myself. It doesn't coddle me or let me off the hook. In short, insensitivity is a virtue in this context. Effective Altruism is of course not intended to cause discomfort; it is in fact very moral in my view. But honest statements of the highest moral behavior will cause discomfort for those who wish to feel morally excellent without actually being morally excellent, and in my view that covers a lot of people. In short, the very high quality of EA's core idea may limit its appeal for many people.

For this reason, attempts to broaden the appeal of effective altruism carry the downside risk of attracting people who want to dilute EA's challenging core idea in order to feel more comfortable, or to make EA comport with a more social orientation. This may express itself as a belief in giving money to the right people, rather than a focus on helping humanity in clear, often measurable ways.

Consequently, I reframe the issue: rather than modifying EA to broaden its appeal, instead find the people and cultures who can appreciate EA as-is.

There are reasons to believe that some religious cultures may be better able to appreciate the virtues of moral truths that express standards in ways not designed to assuage people's feelings. Indeed, expressing truths in ways that do not assuage people's feelings is likely the only way to express many important truths.

First, and I believe a literature review is not required here, there is a large body of research in cognitive psychology showing that much of human irrationality can be summed up as an instinct toward irrational collectivism.[2] In addition, recent research suggests that so-called autism spectrum disorder is also an expression of enhanced rationality. This indicates that most people, in their natural state, are simply unable to absorb certain truths expressed by those with enhanced rationality.

Second, religions are myths with multiple complementary parts that form cultures which have proven adaptive over time. We must consider, then, that religions may have proven adaptive in part because they found ways for neurotypical people, and perhaps even psychotics, to better appreciate socially insensitive people with enhanced rationality; helped those socially insensitive but highly rational people get along with everyone else; and helped them assert themselves when needed.

Of course, religions also exist to create group cohesion, enable conquest, and provide social insurance, so religions may vary in how they handle cognitive diversity. Some religions may enforce irrational collectivism to a greater degree, while others may correct its excesses. That is one question this experiment can help answer.

For this reason, randomly assigning effective altruists to different religions and observing how those religions grapple with the ideas of effective altruism would certainly prove deeply informative, and may reveal a new, productive social vector for the spread of effective altruism.

Thank you all very much for your time.
[1] I confess to assuming here that effective altruists are largely not religious.

[2] One might even call this original sin, metaphorically speaking.

Comments


1. I think a randomised trial of the effects of religion on 'EA virtue-ness' is a pretty cool and fun idea, so thanks for that. Overall, though, I mostly disagree about how good an idea it would be to actually implement the study.
2. I don't think I value the co-existence of EA and formalised religions the same way you do, and I think the link between religion and rationality in this post is an odd one.
3. Although I don't personally think that your admittedly stylised version of EA is the same thing that I think of as 'EA', the virtuousness aspect of 'living as an EA' still resonates with me, and ties into my intrigue and belief in the usefulness of the EA for Christians, EA for Jews, and EA for Muslims projects that are going on within the community.

In fact, I very much think that this:

"Rather than modifying EA to broaden appeal, instead find the people and cultures who can appreciate EA as-is."

is how I think about the path to impact for community building in the EA for Muslims project. But again, I see these more as projects to expand the reach and adoption of EA ideas, and not so much as a way of enhancing the existing EA community by 'altering' the members who are already part of it.

First, thank you; I'm glad you enjoyed the proposal. In addition, I simply did not know that religious communities for EA already existed, and I find that interesting.

I think the proposal still has potential in terms of identifying communities that do not self-select into EA but where EA could take hold if socialized.

Would a period of 90 days to a year be sufficient to gain enough understanding of the religion in question? Many religions hold that it takes many years to become knowledgeable.

Your point is fair. I think the time that people could commit to a controlled experiment would be a limiting factor.

I think it would be interesting if people used the post as a jumping-off point for informal experimentation, taking as much time as they thought the practice required. This would not yield the precision of a randomized trial, but, per your point, there may be a tradeoff between external validity (taking the time to really practice the faith) and internal validity (the limited time commitments feasible in a randomized trial designed for causal inference).
