Effective Altruism Paradigm vs Systems Change Paradigm (post 2/3)

by oliverbramford · 21st Oct 2017 · 4 comments

Systems change and Effective Altruism each have their own body of knowledge, tools and ways of working. Both fields are aligned in purpose, but operate in different paradigms:

Effective Altruism

“I maximize my personal impact”

“How can I do the most good?”

“The main ways I can maximize my personal impact are through my career, voluntary work and philanthropic donations.”

“Logical, rational, critical thought is a sure path to better decisions and greater impact.”

“Impact is unknown unless it’s quantified.”

“We should allocate resources to the causes that are statistically most likely to have the greatest positive impact.”

 

Systems Change

“We maximize our collective impact”

“How can we create the world we want?”

“The main ways I can maximize my impact are by thinking and acting systemically: being an example of what I want to create, and collaborating with other pioneers to carve out a new collective reality.”

“Linear thinking is often inappropriate, and regularly leads to short-sighted decisions and ineffective behavior.”

“Systems change impact is often very long term and arguably impossible to attribute to a specific individual, group or project.”

“We need to prioritise systems change causes where the system in question is unsustainable or excessively unjust.”

 

Marginal vs Total Impact
Effective Altruism has tended to focus on how to maximise marginal personal impact, whereas a systems change approach tends to focus on total collective impact.

An Effective Altruist, by focusing on impact at the margin, may ask questions such as:

  • What impact will my next $100 donation make in this charity vs that charity?

  • What impact will I have in this career vs that career?

This line of thinking, by focusing on immediate, tangible alternatives, lends itself to logical analysis and rigorous empirical evaluation; a focus on marginal impact encourages linear thinking.
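The marginal comparison above can be made concrete with a toy model. The impact curves, charity names, and numbers below are purely illustrative assumptions (not from this post); the point is only that with diminishing returns, the marginal value of the next $100 depends heavily on how well-funded a charity already is:

```python
import math

def impact(total_donated: float, scale: float) -> float:
    """Hypothetical diminishing-returns impact curve (log-shaped)."""
    return scale * math.log1p(total_donated)

def marginal_impact(already_donated: float, gift: float, scale: float) -> float:
    """Extra impact produced by the next `gift` dollars, on top of what
    has already been donated."""
    return impact(already_donated + gift, scale) - impact(already_donated, scale)

# Illustrative comparison: Charity A is already well funded (flat margin),
# Charity B is neglected (steep margin). Same assumed impact curve for both.
a = marginal_impact(already_donated=1_000_000, gift=100, scale=50.0)
b = marginal_impact(already_donated=10_000, gift=100, scale=50.0)
print(f"Charity A marginal impact: {a:.4f}")
print(f"Charity B marginal impact: {b:.4f}")
```

Under these assumptions the neglected charity wins at the margin, which is the kind of clean, quantified comparison the marginal framing supports; note that the total-impact framing asks a different question that this model does not capture.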

 

Systems change tends to focus on total collective impact. Systems change work involves:

  • Collective diagnosis of systemic failings

  • Collective exploration to identify opportunities for change

  • Collective visioning of how a new system could be

  • Collaborative creativity to bring these visions into reality, and take them to scale

This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.

 

In my experience, a systems change approach tends to sense its way to radical innovation, whereas an EA approach tends to think its way there. Both approaches can lead to great breakthroughs, and I suspect could be deeply enriched by each other.

 

This is post 2 of 3.
Post 1: "Why to Optimize Earth"
Post 3: "5 Types of Systems Change Causes with the Potential for Exceptionally High Impact"




Thanks for valuable suggestions & feedback from: Ray Taylor, Ulrik Horn, Kyle Bogosian, Samuel Hilton, Dony Christie & Alex Dickinson.

Comments

I think the marginal vs. total distinction is confused. Maximizing personal impact, while taking into account externalities (as EAs do), will be equivalent to maximizing collective impact.

An Effective Altruist, by focusing on impact at the margin, may ask questions such as: What impact will my next $100 donation make in this charity vs that charity?

It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues. But this is a strawman. Even if an individual makes a $100 donation, the cause they're donating to can still target a systemic issue. In any case, there are now EAs making enormous donations: "What if you were in a position to give away billions of dollars to improve the world? What would you do with it?"

This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.

This is pretty mystical.

while taking into account externalities (as EAs do)

I think that the current EA methodology to take into account impact externalities is incomplete. I am not aware of any way to reliably quantify flow-through effects, or to quantify how a particular cause area indirectly affects the impact of other cause areas.

The concept of total impact, if somehow integrated into our cause prioritisation methodology, may help us to account for impact externalities more accurately. I concede that total impact may be too simplistic a concept...

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability-weighted theory of change for the system as a whole.


It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues.

I do not mean to say that EA focuses on small issues and systems change focuses on big issues. Rather, I see EA as having a robust (but incomplete) cause prioritisation methodology, and systems change having a methodology that accounts well for complexity (but neglects cause prioritisation in the context of the system of Earth as a whole).


This is pretty mystical.

On reflection, I think that conducting systems change projects in appropriate phases, with clear expectations for each phase, is a viable way to synthesise EA and systems change approaches and culture. Specifically, a substantial research phase would typically be required to understand the system before one can know what interventions to prioritise.

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability-weighted theory of change for the system as a whole.

I'd be interested in seeing this. Do you have anything written up?

Parts 3 and 5 of the article linked below explain this approach in more detail, although my thinking has moved on a bit since writing this.

There's a good chance that these ideas will be refined and written up collaboratively in an applied context as part of GeM Labs' Understanding and Optimising Policy project over the next year. If they fall outside the scope of this project, I intend to develop them independently and share my progress.

https://docs.google.com/document/d/1DFZ9OAb0g5dtQuZHbAfngwACQkgSpjqrpWWOeMrsq7o/edit?usp=sharing