Recently, a growing group of EAs, EA-adjacents, post-EAs and EA-curious folk have been gathering and organising around a new term - integral altruism (int/a). The central claims of int/a are that the EA toolkit is powerful but incomplete, and that EA can learn from other movements that are trying to improve the world.
The goal of int/a is to find a broader approach to altruism by integrating EA with epistemics/ontologies/world models/language/culture from outside of the EA/rationalist bubble. More specifically, we reckon EA can be most constructively complemented by learning from movements, communities and thinkers who emphasise wisdom. Accordingly, our intellectual lineage is a combination of EA/rationalism and the liminal/metamodern/metacrisis world.
We’ve run a whole bunch of events of various flavours and have plans for more. These have included residential retreats, reading/discussion groups (e.g.), a speaker series (e.g.), deliberative technology experiments (e.g.), workshops (e.g.), hackathons (e.g.), and more. In the future we’d love to try running more ambitious projects like conferences, incubators, or fellowships. We’re currently looking for funding.
What is this for?
There are many of us who want to improve the world but feel that EA in its current form is unable to sustainably support us in cultivating and practicing altruism. Some common reasons for this are
- An exclusive focus on a narrow range of cause areas (e.g. the coalescing around AI safety), in tension with radical uncertainty & cluelessness,[1]
- A lack of trust in epistemic tools outside of formal rationality (like intuition, metarationality, ecological rationality, Vervaeke’s 4Ps, or The Heart™),
- An action bias that may be leading to negative effects (like exacerbating the AI race or the whole FTX drama),
- An underemphasis on systems change as a cause area,
- A culture that can result in unsustainable personal sacrifices leading to burnout,
- A shadow side of the movement (e.g. status-seeking, power-seeking, guilt-escaping) that may be messing with collective epistemics.
A common theme linking all of these issues is a desire for more wisdom. By wisdom we mean[2] recognising the limits of our own knowledge, awareness of context, perspective-taking, and careful consideration of how to balance or integrate different viewpoints and interests.
A core motivation for more wisdom is recognising that we are radically uncertain[3] about the problems we’re trying to solve and the broader question of how to do good. Such radical uncertainty calls for a more careful, robust, and flexible portfolio of frames/tools/approaches.[4]
In the EA context, wisdom cashes out as seeing EA not as the one-and-only framework for changemaking but as one of many that can be integrated to produce something more robust. This means letting go of the search for a scientific “view from nowhere” on how to solve the problem of altruism, and being aware of the cultural conditioning that lies behind any framework for doing good. It also means making space for other values besides “maximize impact”.[5]
int/a’s goal is to create a new network, culture, and formal(ish) framework that supports those of us who take radical uncertainty seriously and desire this broader view of altruism - helping us become the best versions of ourselves and do our part in bringing about a flourishing future.
The (tentative) integral altruism principles
At the first int/a summit in summer 2025, we ran a workshop aimed at more clearly defining integral altruism. The result was a set of principles that we would like to embody, and which we hope will guide us in doing good.
Thanks to Aaron Halpern, Ben R. Smith, Brayden Beckius, Christine Tan, Elisa Paka, Finn Clancy, Gamithra Marga, Georgie Nightingall, Jon Hall, Katie Calvert, Luke Fortmann, Matilda du Rui, Patrick Gruban, Plex, Tildy Stokes and Toby Jolly for their Very Sensible And Quite Profound contributions.
We intend the presentation here to be descriptive rather than persuasive - arguing for the merits of these principles is beyond the scope of this post (we may publish arguments in a future post!).
The principles are not final; we expect our understanding of this space to evolve over time. They are also currently somewhat abstract; in the future we hope to translate them into something more concrete & action-guiding. With those caveats out of the way, here is what we came up with.
1. Full-Spectrum Knowing
We want to integrate EA’s rigorous, grounded, rational epistemics with other valuable ways of knowing like embodied intuition, ecological rationality, or Vervaeke’s 4Ps.
This comes from recognising the limits of formal rationality for taking effective action in the real world, and seeing that reason & evidence are not sufficient for attuning to what is most important. It means taking other forms of knowing seriously but also knowing when to use them.[6] It means listening to all parts of ourselves, resulting in action that is internally aligned and authentic.
In practice, this could mean
- Experimenting with including the 4Ps in discussions,
- Augmenting decision-making with meditative (e.g. mindfulness), contemplative (e.g. journaling), embodied (e.g. Focusing), relational (e.g. collective insight), or dialogical (e.g. Socratic questioning) practices,
- Applying integration practices (like IFS or core transformation) to our altruistic goals.
There’s a thread you follow. It goes among
things that change. But it doesn’t change.
People wonder about what you are pursuing.
You have to explain about the thread.
But it is hard for others to see.
While you hold it you can’t get lost.
Tragedies happen; people get hurt
or die; and you suffer and get old.
Nothing you do can stop time’s unfolding.
You don’t ever let go of the thread.
(William Stafford)
2. Moving at the Speed of Wisdom
We want to integrate EA’s action-oriented energy with discernment of when to take high-impact actions and when to wait until the next graceful move reveals itself.
In other words, this means integrating the yin and the yang: letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. It also means emphasising collective action over individual heroism.
In practice, this could mean
- Generally seeking stakeholder input before taking high-impact actions,
- Avoiding unnecessarily power-seeking moves on both a personal (e.g. climbing to the top of orgs) and a collective (e.g. founding AI labs and racing to the front) level,
- Emphasising process-orientation over goal-orientation.
You thought, as a boy, that a mage is one who can do anything.
So I thought, once. So did we all.
And the truth is that as a man’s real power grows and his
knowledge widens, ever the way he can follow grows narrower:
until at last he chooses nothing,
but does only and wholly what he must do…
(Ursula K. Le Guin)
3. Decoupling & Recoupling
We want to embrace EA’s analytical & decoupling approach of isolating the most important problems while also attending to the larger system & our place within it.
Different cause areas and x-risks are highly interconnected. While decoupling problems from their context can be useful for making progress, it can also make us blind to this entanglement. We want to adopt both decoupled frames and contextualizing frames (like the metacrisis).[7]
This also means seeing our place within the system: maintaining awareness of the assumptions underpinning the cultural paradigm we are operating in (e.g. capitalism, colonialism, techno-solutionism, or victim/oppressor narratives).
In practice, this could mean
- Using tools from systems thinking & complexity science,
- Taking systems change & cultural change seriously as cause areas,
- Creating cross-cultural fellowships in epistemically distant communities.
There is no such thing as a single-issue struggle
because we do not live single-issue lives.
(Audre Lorde)
4. Practicing Fractal Altruism
We want to balance EA’s scope-sensitive ambition to work towards the largest positive impact with intrinsic values at the local scale like friendship, love, beauty, family and the sacred.
This means being good to ourselves and the people around us as well as the rest of the world. It doesn’t mean forgetting about impact, but rather finding ways to cooperatively integrate scope-sensitive altruism with other ends in one’s life by imaginatively searching for win-wins between these ends.
In practice, this could mean
- An empathetic approach to career paths that takes into account not only effectiveness but also how the work can enhance one’s own life,
- Running events that simultaneously nourish individuals, cultivate deep connections, and lead to impact at scale,
- Explicitly & honestly examining which tradeoffs between personal, community and global goods we are willing to make.
Start close in,
don’t take the second step
or the third,
start with the first
thing
close in,
the step
you don’t want to take.
(David Whyte)
5. Inner Work, Outer Change
We want to integrate EA’s culture of supporting one’s intellectual, productivity and career growth with support for psychological growth as a foundation for impact.
Psychological, emotional and spiritual development can help us cultivate a genuine desire for the wellbeing of others, resulting in altruism grounded in truth rather than being driven by guilt or pride. Such growth can also improve our epistemics by shining light on What’s Going On For Us and inspire action by deeply connecting us to the value we’re fighting for.
In practice, this could look like
- Using practices like metta meditation to cultivate our altruistic drive,
- Collective shadow work on the topic of altruism (like this event),
- Tracking our growth using frameworks like the Inner Development Goals.
I slept and dreamt that life was joy.
I awoke and saw that life was service.
I acted and behold, service was joy.
(Rabindranath Tagore)
Putting this all together, integral altruism is a community for those who want to help improve the world in a way that is effective, wise, and sustainable - by integrating reason with embodiment, agency with patience, decoupling with contextualizing, impartial values with local values, and inner work with outer change.
What’s happening?
We’re experimenting with a number of flavours of events in order to cultivate the community and create a space for the int/a framework to develop. Our main physical hub is London, with nascent communities springing up in Berlin and Paris.
Some experiments we’ve run so far are
- Residential retreats (/“summits”) where our most engaged members can grow as altruists, deepen the community and make progress on the int/a framework,
- Reading/discussion groups (on topics like Moloch, critiques of Bayesianism, and ecological rationality),
- Deliberative technology experiments (like antidebates and collective insight),
- An online speaker series (on topics like metacrisis & AI governance, the history of post-rationality, and cultural evolution),
- Wisdom development circles & work integration circles,
- Writing & synthesis hackathons (e.g.),
- Cause X scanning,
- Funky new relational practices,
- Tabletop roleplaying exercises.
And we have a big list of other ideas (like conferences and intro courses) we’d like to put into motion.
The conceptual development of the int/a framework is slowly happening, but it’s still early days. We recently ran a frameworks hackathon; you can check out some of the ideas that came out of it here.
We have a core of engaged people running the show: six “core stewards” (currently Christine Tan, Patrick Gruban, Tildy Stokes, Toby Jolly, Finn Clancy, Euan McLean). We’ve implemented a governance structure that we’re slowly testing and evolving.
Our intended relationship with effective altruism
We’d love int/a to have a symbiotic relationship with EA. We reckon int/a’s goals are win-win with EA’s in a number of ways:
- Being a place that helps those who have drifted away from EA reconnect with their altruistic nature and put it into practice once more,[8]
- Generating constructive dialogue with the EA philosophy, and red-teaming EA as a movement,
- Creating a bridge between EA and other movements, resulting in useful knowledge exchange, especially introducing more wisdom to EA.
That being said, we’re also aware of the potential for zero-sum dynamics between int/a and EA, and would like to avoid them as much as possible. One thing we are afraid of is int/a gravitating towards the “just bitching about EA” attractor state, which is definitely not the vibe we’re going for. Another concern is “taking people away from EA”. We don’t intend to dissuade people from doing impactful work by EA lights; in fact, many of us in the movement are doing incredibly canonical EA jobs.
Wanna get involved?
If you’re intrigued or excited about this general direction, you can register your interest in getting involved here, keep an eye out for future events, and subscribe to our substack.
We’re also currently looking for funding, as we’re severely funding-constrained. If you’d like to help int/a grow, or know someone who might, you can find our funding page here.
Thanks to Chris Pang, Christine Tan, Elisa Paka, Gamithra Marga, Georgie Nightingall, Guillaume Corlouer, Hunter Muir, Jack Kock, Jonah Wilberg, Patrick Gruban, Simon Haberfellner, and Toby Jolly for feedback on early drafts.
- Which can lead to those who are drawn to other cause areas becoming alienated from the community. ↩︎
- Wisdom is a highly nebulous concept and is used in a number of different ways. To gain some precision, the definition of wisdom we use above is based on the work of Igor Grossmann, one of the leading wisdom scientists. Grossmann identified a central component of wisdom to be perspectival metacognition - which cashes out as the definition we give here. ↩︎
- In the book Radical Uncertainty, John Kay & Mervyn King define radical uncertainty as uncertainty that cannot in principle be resolved by further research, where we cannot enumerate the range of possible options or futures, and where previously inconceivable events can emerge. ↩︎
- See Jonah Wilberg’s excellent article for more on how radical uncertainty calls for wisdom. ↩︎
- According to Logan Strohl’s model of EA burnout: “EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.” By creating a space that supports the integration of different values, int/a can support people in sustainably engaging with EA work. ↩︎
- For example, we don’t want this to be an excuse to throw all science & rationality away in favour of just going with our emotions - we want to understand the strengths of both and know what kinds of questions call for one over the other. ↩︎
- While “zoomed out” frames can compromise tractability, we would like to make the tradeoff between tractability and better epistemics (via seeing more of the system) explicit, rather than just automatically adopting the decoupled frame. ↩︎
- See footnote 5. ↩︎
