[Image: two hands clasped over a globe, one glowing with warm, intuitive energy (Yin), the other structured and geometric (Yang), symbolizing the integration of rationality and mysticism.]

Some rich people have worldviews that are uncommon within Effective Altruism. These people might be on board with doing good, but less aligned with the pragmatic, calculated approach common in Effective Altruist circles.

Last year, I joined a group of “strange stakeholders” investing in a co-created tantric retreat centre. The other investors' worldviews ranged from business-minded altruism to spiritually idealistic tantra.

By thoughtfully presenting Effective Altruist ideas in ways that bridged our diverse worldviews, I gained the support of the other stakeholders to allocate part of our future surplus to effective global health charities.

The project is just getting started. Initially, most surplus revenue will cover loan repayments, but after a couple of years, more is projected to go toward effective charity. At that point, annual donations are expected to range from $225,000 to $900,000, enough to save roughly 50-200 lives per year.
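The lives-per-year range above follows from a simple cost-per-life division. A minimal sketch, assuming a cost of roughly $4,500 per life saved (a figure in line with commonly cited estimates for top global health charities, not stated in this post):

```python
# Back-of-the-envelope check of the donation-to-lives estimate.
# COST_PER_LIFE_USD is an assumed figure, not taken from this post.
COST_PER_LIFE_USD = 4_500

def lives_saved(annual_donation_usd: float) -> float:
    """Estimated lives saved per year for a given annual donation."""
    return annual_donation_usd / COST_PER_LIFE_USD

low, high = lives_saved(225_000), lives_saved(900_000)
print(f"{low:.0f}-{high:.0f} lives per year")  # 50-200 lives per year
```

Any cost-per-life assumption in this range reproduces the post's 50-200 estimate; the true figure varies by charity and year.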

In this post, I'll share how I bridged the worldview gap, starting with some theoretical background and ending with a concrete example.

 

Theory - Speak In Their Values

Bridging worldview gaps is challenging—especially if you rely on 'ingroup rhetoric'. Research from climate advocacy suggests that aligning messages with your audience's existing values is an effective way to build common ground, even across significant value differences.

Communicating across value gaps requires translation—expressing ideas in ways that resonate with your audience. Using familiar talking points from your own circles often misfires—missing the mark entirely or coming across as a challenge to your audience's values.

Questioning people’s deeply held beliefs is rarely a good foundation for collaboration and reaching common ground—yet it’s the natural thing to do for most people.

This is unfortunate.

Fortunately, it’s possible to do better, bridging value gaps to establish collaborations. Let’s look at a concrete example: how to present EA ideas in a way that resonates with people drawn to holistic tantra.
 

Example - EA And Holistic Tantra

Effective Altruism typically appeals to people who strongly value pragmatism, analytical clarity, and measurable outcomes. Although these traits exist in everyone to varying degrees, they're especially pronounced in EA circles. To connect meaningfully with my fellow stakeholders, I needed to present Effective Altruism differently, taking care to emphasise how it also aligns with emotional resonance, holistic perspectives, and intuitive or spiritual approaches.

When approaching my fellow stakeholders—individuals whose commitments ranged from moderate appreciation to deep immersion in holistic well-being, emotional openness, and spiritual values—I knew a different approach was necessary. Rather than pulling them toward EA's pragmatic core, I chose to meet them where they already were.

I grounded my explanation in values that resonated deeply with them:

  • Diversity - embracing differences
  • Symmetry - complementary forces working together
  • Holistic view - expanding the scope of care
  • Compassion - an active desire to do good

My presentation drew on two key building blocks: Yin/Yang and Maslow’s hierarchy of needs.
 

Yin/Yang and Maslow's Hierarchy

The first building block is “Yin and Yang”—Yin representing the intuitive, emotional, and nurturing aspects, and Yang embodying structure, pragmatism, direction, and action. I highlighted how our retreat centre integrates both Yin and Yang energies, creating a balanced environment where people can flourish and embrace more aspects of themselves.

Next, I integrated Maslow’s Hierarchy of Needs into my explanation. I pointed out that the retreat centre we’re investing in primarily operates at the upper end of this hierarchy—focusing on self-actualization, spiritual growth, holistic healing, and emotional balance. I praised this work for fostering mental health, community connection, and alignment with nature.

Then, I emphasized that alongside this high-level spiritual and emotional nourishment, we also had an opportunity—and perhaps even a responsibility—to support those at the lower end of Maslow’s pyramid: individuals whose fundamental needs for health, food, and shelter are unmet.
 

Introducing EA as Yang Energy

Here’s where Effective Altruism entered the picture:

I described conventional charity approaches as predominantly "Yin"—driven by compassion, emotional resonance, and spontaneous heart-centered action. These approaches are valuable and necessary, yet they often lack clear direction and measurable impact.

In contrast, Effective Altruism represents a strong infusion of "Yang" into altruistic action—structured, strategic, pragmatic, and relentlessly data-driven, with a clear emphasis on measurable outcomes.

Rather than replacing the intuitive and compassionate approaches common in most charities, EA complements them, providing essential balance.

By allocating a portion of the surplus from our intuitive, holistic operations toward global health charities aligned with EA principles, we could embody this holistic balance—not only within our spiritual practices but also in the broader impact of our philanthropy.
 

Handling Objections

Naturally, questions arose:

"Couldn't we just travel to developing countries and dig wells ourselves?"

Here, I respectfully pointed out that effective charity work, much like running a retreat centre, requires expertise. Our strengths—balanced intuition, emotional depth, and holistic well-being—make us particularly well-suited to enriching the upper levels of Maslow's hierarchy, where wholeness and self-actualization often remain unfulfilled.

However, addressing basic needs at the bottom of the pyramid frequently demands a more Yang approach: pragmatic efficiency rather than holistic intuitiveness. Effective Altruists are well suited to this role, acting as the retreat centre's complementary opposite.

“Are you sure we can trust these people?”

When scepticism emerged about how trustworthy EA organizations are, I shared openly about EA’s culture of extreme transparency, critical self-assessment, and continuous refinement. EA groups rigorously evaluate their initiatives, publicly sharing analyses, inviting constant scrutiny and critique.

I highlighted how EA's focus on basic physical needs makes it easier to apply a pragmatic, logical approach: a fair distance from the culture of some of the stakeholders, yet a good fit for the task at hand.

“Can you link to the places where these ideas are discussed?”

I happily agreed but gently noted that a cultural gap might be evident. Effective Altruists might come across as nerdy or overly calculating—this reflects their strongly Yang-oriented approach, emphasizing analytical rigour and structured reasoning. I suggested viewing Effective Altruists as well-intentioned people who happen to be nerdy, business-minded, and highly analytical.

“Aren’t there any downsides?”

I openly acknowledged two areas where Effective Altruism might diverge from conventional moral intuition. First, Effective Altruists sometimes explore unusual concerns, like whether atoms might suffer.[1]

Second, there have been isolated instances of people committing financial crimes to raise additional charity funding.

I explained these issues can be easily avoided by simply not committing financial crimes and by choosing effective charities that focus on clearly beneficial causes, such as reducing child mortality.

At this point, we paused for a group check-in. The agreement was unanimous: it was a success.
 

Bridging Worlds

By presenting Effective Altruism in a way that aligned with the stakeholders' values—holistic balance, emotional resonance, and spiritual harmony—I built a genuine bridge, getting the stakeholders to pledge a portion of the future surplus. By meeting them where they were, EA's analytical rigour felt like a complement rather than a contradiction.

Ultimately, our impact grows significantly when we, as effective altruists, learn to bridge the worldview gaps separating us from potential collaborators.

I hope this inspires you—perhaps you also straddle a worldview chasm. How would you anchor Effective Altruist action in the values of your other worldview?


 

Comments (6)



PS: If you can share, I'd be curious to hear where this new retreat center will be located and when it will open. I imagine quite a few readers here might be curious about Tantra as well. Is there a way to sign up to be notified when you guys launch? 

No worries if you cannot or don't want to share more yet. 

Sweden, it's a long-term centre called "Skeppsudden". The past owner died without a proper will; now a team (including his son) is looking to buy the property from the estate and continue the business, with a stronger focus on co-created communities (à la Burning Man) and, of course, charitable giving.

Well done, congrats to you for unlocking so much additional funding for effective charities and to the other stakeholders for being open to new ideas. 

For anyone else trying to bridge worldviews, I'd like to add that it might be easier to pitch an EA-adjacent project first. GiveWell, GWWC, or Founders Pledge (for entrepreneurs) are easier to pitch than EA, and for non-English speakers, effective charity fundraisers such as Effektiv Spenden (German) might be easier to pitch.

Tantra and yin/yang are definitely not things I expected to read about on the EA Forum today, but bravo for managing to adapt the ideas and build bridges across cultural differences. This is a lovely example of tailoring communication to the intended audience. I think that a lot of us interested and involved in effective altruism could learn from this.

Thank you for writing this! I've been trying to find a good example of "translating between philosophical traditions" for some time, one that is both epistemically correct and well executed. This one is really good!

What I keep from this is the idea of making additional distinctions: acknowledging that EA (or whichever cause area one wants to defend) really is different from the initial "style", but being able to explain this difference with a shared vocabulary.

Executive summary: By aligning Effective Altruist ideas with the values of spiritually-inclined co-investors in a tantric retreat centre, the author secured a pledge to donate future profits—potentially saving 50–200 lives annually—demonstrating the power of value-based framing to bridge worldview gaps for effective giving.

Key points:

  1. The author invested in a tantric retreat centre with stakeholders holding diverse, spiritually-oriented worldviews, initially misaligned with Effective Altruism (EA).
  2. To bridge the gap, the author framed EA as a "Yang" complement to the retreat's "Yin" values, emphasizing structured impact alongside holistic compassion.
  3. Tools like Yin/Yang and Maslow’s hierarchy were used to communicate how EA complements spiritual and emotional well-being by addressing urgent global health needs.
  4. Stakeholder concerns were addressed through respectful dialogue, highlighting EA’s transparency, expertise, and balance with intuitive charity.
  5. As a result, stakeholders unanimously agreed to allocate future surplus (estimated at $225,000–900,000/year) to effective global health charities.
  6. The post encourages EAs to build bridges by translating ideas into value systems of potential collaborators, rather than relying on EA-specific rhetoric.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
