For me, two of the core elements of effective altruism are:

  1. Do the most good
  2. Have good reasons for believing it is the most good

This was fairly straightforward to strive for when I was comparing the cost-effectiveness of charities.  It got harder when I decided to found a new not-for-profit to work on challenges that hadn't yet been fully explored. As I see more and more of the interconnected, complex world, these two elements feel increasingly at odds with each other.

Building a spreadsheet works for comparing charities, but it breaks down with circular-reference errors if you try to model a complex system.

~~~~~~~~~~~~~~~~~~~~~

Dave Snowden has developed the Cynefin (ke-nev-in) framework (article explainer, 5-min video), which offers a lens on how we might make sense of different systems by distinguishing between obvious, complicated, complex, and chaotic domains:

Creating a cost-benefit breakdown of various interventions might be a complicated task - we can see the inputs and outputs of a given charity, make some expected-value assumptions, and figure out what to do.  Designing a new intervention to transform a country's education system is more complex - there are so many variables that can greatly affect the impact of any individual project that it quickly becomes combinatorially explosive to assess how each variable interacts with the others, and how each relationship is affected by other contextual factors over time.
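
To make the contrast concrete, here is a minimal sketch of the kind of expected-value comparison the 'complicated' lens supports - the charities and all numbers below are entirely made up. Notice that nothing in this calculation captures how interventions interact with each other or with their context over time, which is exactly where the complex case breaks the model.

```python
# A minimal sketch of the 'complicated' comparison described above.
# Both charities and all numbers are hypothetical.
charities = {
    "Charity A": {"cost": 100_000, "p_success": 0.9, "outcomes_if_success": 2_000},
    "Charity B": {"cost": 100_000, "p_success": 0.4, "outcomes_if_success": 6_000},
}

for name, c in charities.items():
    expected_outcomes = c["p_success"] * c["outcomes_if_success"]   # expected-value assumption
    cost_per_outcome = c["cost"] / expected_outcomes
    print(f"{name}: ${cost_per_outcome:,.2f} per expected outcome")

# Charity A: $55.56 per expected outcome
# Charity B: $41.67 per expected outcome
```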

In my time in the rationality and EA communities, I learned a lot about working on problems through a complicated lens, and built an effective analytical toolkit.  But in my pursuit to do the most good, I found this toolkit was not always up to the task.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In which quadrant might we expect to be able to ‘do the most good’?  I don’t believe the Cynefin framework is a good tool to answer this question.  How we experience the qualities of ‘complex’ vs ‘complicated’ depends on our own ability to understand systems and processes. A young child may [subjectively] experience much of the world as largely chaotic, while an engineer may experience the same world as merely complicated.

Let’s instead explore Michael Commons’ Model of Hierarchical Complexity.  Paraphrased from Wikipedia:

The model of hierarchical complexity (MHC) is a formal theory and a mathematical psychology framework for scoring how complex a behavior is.  It quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. Its forerunner was the general stage model.

Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the hierarchical complexity of task accomplishment in any domain. It is based on the very simple notions that higher-order task actions:

  1. are defined in terms of the next lower ones (creating hierarchy);
  2. organize the next lower actions;
  3. organize lower actions in a non-arbitrary way (differentiating them from simple chains of behavior).

Without getting into the mathematical specifics, I’ll illustrate with a simple example of someone learning language, from less complex to more complex:

  • A child learns words - she ties each word to an object and can communicate simply
  • The child starts to combine words into sentences - the order of the words is important to the meaning being communicated
  • Sentences are strung into paragraphs - multiple sentences communicating something richer than can be shared in just one sentence
  • In school, she learns to put paragraphs into stories or essays, and again - the order of the paragraphs matters.  Each paragraph is organized by (and contributes to) the story or essay
  • In university, she starts to conduct literature reviews, identifying themes across multiple essays and journals - seeing how each contributes to a greater paradigm of thought
  • Early in her career, she encounters paradigms that seem to contradict each other, creating a tension she feels until she starts to see a broader system that includes both paradigms
  • Later, she starts to integrate multiple paradigms into a new field.  This field is able to make sense of multiple ways of seeing, and gives new purpose to the work being done across several disciplines, universities, and thousands of people

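To make those three conditions a bit more tangible, here is a purely illustrative sketch in Python - my own toy encoding, not Commons' formal scoring procedure - that walks the first few steps of the language example: each higher-order action is built from, organizes, and non-arbitrarily orders the actions one level below it.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    order: int                                          # order of hierarchical complexity
    parts: list[Action] = field(default_factory=list)   # the next-lower actions it organizes

def compose(name: str, parts: list[Action]) -> Action:
    """A higher-order action (1) is defined in terms of the next-lower actions,
    (2) organizes them, and (3) does so non-arbitrarily: `parts` is an ordered
    sequence, so shuffling it would change or destroy the meaning."""
    orders = {p.order for p in parts}
    assert len(orders) == 1, "parts must all be actions of the same, next-lower order"
    return Action(name, order=orders.pop() + 1, parts=parts)

# Words (order 1) -> a sentence (order 2) -> a paragraph (order 3), as in the example above.
words = [Action(w, order=1) for w in ["the", "dog", "ran", "home"]]
sentence = compose("sentence: 'the dog ran home'", words)
paragraph = compose("paragraph", [sentence])   # a real paragraph would organize several sentences
print(paragraph.order)                         # -> 3
```
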
We could use this model to assess the [more objective] complexity of various goals/tasks involved in ‘doing the most good’.  We could also use this model to assess our own ability to understand and accomplish these goals.

For example, we might categorize a hierarchy of goals in the education field [from more complex to less complex]:

  1. Shift humanity from a competitive/rivalrous platform to one based on cooperation (a very complex goal, requiring coordinated shifts in both economic systems and cultural narratives)
  2. In service of that more complex goal, we may work on a sub-goal of shifting the cultural narrative by re-imagining the role of education in raising our next generation. This may involve coordinating change across parenting, media, and formal education
  3. Looking at just the formal education space, how might we coordinate a movement of change throughout various areas of the educational system?  A network may be appropriate (whom do we invite, and how do they work together?)
  4. Spotting a gap in our network, we might fund a new not-for-profit company to tackle an important challenge.  The creation of this NFP is given purpose because it is organized from a higher level of systemic understanding
  5. What goals do we assign the individuals working together in this NFP?  Each must be coordinated to achieve the goals of the NFP
  6. How do I personally schedule my week?  Each of these components is in service to my support of this NFP, and in turn, the network, the movement, and even the broader shift in humanity

Which of these levels is ‘doing the most good’?  They are all necessary, though not everyone is suited to managing each level.  I believe it is in our collective ability to navigate these nested systems that much of our opportunity lies.

  • There is lost value in the person who feels their work is meaningless and boring - because they don’t see the bigger picture of how their work contributes
  • There is missed opportunity in the local charity that did groundbreaking work in one community, but never shared what they learned with the hundreds of other charities attempting similar things
  • It would be a shame if huge swaths of humanity were spending their hours on things that contributed to our own extinction, unable to connect the dots between their individual actions and the larger systemic effects

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I have a sense that EA as a movement often tends towards the lower levels of complexity - finding great opportunities in complicated spaces, where our ability to predict outcomes gives us many good reasons to follow the paths we choose.  At higher levels of complexity, we have less ‘evidence’ to believe that any of our work will directly contribute to a better world, yet these spaces are necessary for effective coordination in service of higher-leverage goals.

There is currently an EA Systems Change group on Facebook, with ~900 members.  The activity there is sporadic - far from anything that could be called coordinated.  What might be possible if this group started to coordinate? The groups that offer grant money to EA orgs - what level of system do they hold in mind as they distribute their funding?  In which ways might we improve our collective ability to create a whole that is greater than the sum of its parts?

I suspect this is an important growth edge for our community: to enter into dialogue with the network-thinking groups, the complexity wizards, and the systems-change communities.  I’ve encountered many of them in my own quest to do the most good: the person mapping networks to create collective awareness, the person who connects two community leaders for a dialogue over brunch, the person who asks a key question that shifts the mission of a charity…  These people exist and operate from a different playbook, doing good in ways that are largely invisible, yet very high-leverage.

One of the most beautiful things I see in the EA community is its curiosity and willingness to learn and grow.  I'm hoping this piece may act as a nudge towards vertical learning, building off the rich horizontal learning that the community does so well.  Would love to hear what you make of this, and I’m happy to connect with anyone interested in further conversations around complexity, leverage, and systems change!

Comments

+1 - I also see this as an area deserving of investigation.

Thanks for the article!

It seems right to me that strategic coordination and better communication among various EA nodes is essential. I'd love to see more thinking and action on "improving movement coordination" like you are demonstrating, so thank you. We could host some collaborative events with complexity folks at EA Toronto, maybe? Though I imagine knowing something about what higher-complexity strategic thinking is currently being done is a wiser first step. What does Systems Change in EA (on FB) think? The CEA definitely has some thoughts: https://www.centreforeffectivealtruism.org/strategy/ - or is your sense that those thoughts are still lower on the abstraction ladder than they could be?

I also think it is appropriate that many bits of the movement are only vaguely connected through the highest order question of EA - "how do we do the most good, act on that, and learn from our actions" - yet free to propose their own subquestions and seek answers. Since much of the meta framing itself is an open question, there feels to me to be leverage in more bottom-up processes too - a sort of "everyone get some basic skills, now go do whatever you think is best, then we'll regroup and reflect, then go out and try again" kind of approach. And of course, not everyone can or wants to be part of more involved/meta reflections. Yet overall I feel you raise a dang good question to bring up regularly.

I'd speculate that CEA's focus on reducing risky actions in its interactions with individual EA communities is in part a reflection of a more bottom-up orientation. If you want a group to explore experientially without anchoring them too much with your framework, you can give them tools and then step back until/unless they come to an answer you have good reason to believe is not the right one.

I particularly appreciate your thought that people may not share information, or may not feel their work is meaningful, if they aren't aware of a broader community strategy. I wonder how often this happens and how... Maybe a future EA Survey could ask "have you worked on a project which you believe is helpful but would not classify as EA-adjacent?"

Awesome article Naryan!

Some additional Complexity / Systems Thinking Resources:

Thanks for this wonderful article! I absolutely agree that it would be highly beneficial to have a community at the intersection of EA and Complexity. I recently participated in an event where I actually found several other EAs interested in Complexity, but unfortunately I couldn't spend enough time networking with them further (I got involved in another project there).

I have also been thinking about how we might use the tools of Complexity to make EA better, although I haven't been able to land on anything concrete. Here are some vague thoughts. I am not entirely sure if any of these is worth pursuing, so tug at these threads at your own peril:

  1. I wonder if there is a possibility of creating an Agent-Based Model to understand Global Catastrophic Risks, although I am not entirely sure how to go about doing this. This talk by Luisa Rodriguez here might be a good place to start. She is not building an ABM (at least going by what she said in that talk), but the way she talks about it made me feel like an ABM could help.
  2. Complexity has some roots in Philosophy (A quick Google search took me here). I wonder how the philosophy of EA and that of Complexity would work together.
  3. I wonder if we can deal with flow-through effects better if we had a Complex Systems view. Is this a network shaped problem?

But these are all mostly at a 'wondering-if' stage and one would definitely need help from cleverer people to actually start some concrete work. So having a community around EA & Complexity would be highly beneficial.

Doesn't complexity have its "roots" in reality? as one aspect of phenomenal world? of actuality and factual experience? rather than growing up out of a set of conceptualized abstractions?

I refer, of course, to Varela, Maturana et al. ... "self-organization" and such. Autopoiesis, nae?
And, of course, Mandelbrot ... the fractal nature of reality ...

#Lateral - Came across this in my bookmarks: https://ccc.ciencias.uchile.cl/ccc/index.php

/bdt


 

Thank you for sharing your thoughts Naryan. I agree with your main point that EA approaches are mostly limited to the complicated domain of the Cynefin framework. I have felt frustration with EA often focusing on complicated solutions that are easier to quantify and implement, rather than considering the complexity of the issue, taking a more preventative approach, and intervening at a system level. And I think you’re right that complexity thinking is needed for “effective coordination in service of higher-leverage goals.”

However, I think both the Cynefin framework and the MHC will be useful in achieving this aim. 

The difference between these two approaches is that the Cynefin framework is a model that helps to classify and respond to systems, while the MHC focuses on the cognitive ability to understand complexity. Obviously, people’s level on the MHC will impact their ability to accurately determine what kind of system they’re dealing with.

Hence, the MHC is a theory to describe and classify people’s differing ability to understand and conceptualise complexity. It is not an action-orientated framework: it doesn’t detail what to do about complexity, the way the Cynefin model does. I think this is the issue you run into in your post with the education example, when you ask “Which of these levels is ‘doing the most good’?”

Cynefin can help you to understand more clearly the area you’re trying to operate within.  To see how the Cynefin framework can be applied, it might be useful to take a look at the EU Fieldbook, Managing Complexity (and Chaos) in Times of Crisis.

Regarding your point that “not everyone is suited to managing each level”: within the Cynefin approach, people are asked ‘what can you change at your level?’ so that they take action at the level at which they operate - for example, an employee taking action within their own team, or someone taking action within their local community.

Sounds like what you’re suggesting is that we can do the most good by helping people to see things at a higher level of complexity than they might be currently disposed to. Is that correct? This is a key aim outlined in Hanzi Freinacht’s The Listening Society.

Also, I believe Dave Snowden has critiqued the MHC for being information/algorithmic-centric (I guess that’s why it’s attractive to the EA scene, with its strong emphasis on measurability at the expense of other approaches) and for using “complexity” in its dictionary sense, which does not take complex adaptive systems (CAS) into account. But I can’t find much other info on this - can anyone help?

Great post, and you have arrived at the beginning. Yes, in most cases, or perhaps all cases, in the developing world, the most effective thing one might do is to effect system change. Even in the First World, one might see a far greater return on efforts and investment if one could find and understand basic principles of human group psychology and how complex systems like those we live in function.

That said, the danger of causing unintended harm is obvious. The last 114 years have seen a number of well-meaning efforts to change the system in various nations cause vast amounts of harm and the deaths of perhaps a hundred million people all told, combining the fascist and communist utopian efforts.

Additionally, for all the talk and intention to be open minded and willing to change our views in EA, it is far less psychologically costly to change our minds on peripheral issues like mosquito nets that we likely had no opinion on before anyway than it is to change our core political beliefs which define who we are to ourselves and our friends. There will be a lot more resistance to system change in EA than there is to anything else, and if system change gains any traction it is basically a given that the movement will fracture because of it. Especially because a real grasp of system change is an entirely new political paradigm and neither left nor right nor centrist.  

Yes, we need to understand complex systems, but we need to be specific here; there are many kinds of complex systems. What we need to understand are complex systems with multiple poorly related selfish agents and nested overlapping subgroups, and to do that we need to understand how selfish agents compete with each other in complex systems, and how and why the system benefits from that competition. 

Saying that selfish actors are selfish seems like no insight, but for decades evolutionary theorists wondered why the selfish failed to overrun and replace all the altruists in a group of altruists, because they forgot this. Groups of altruists are host bodies for the selfish; they are fitness-limiting resources for those pursuing the strategy of selfish defection, and the selfish, being selfish, do not want to share the group of altruists with other selfish actors. In fact, too many selfish actors in a group cause group collapse, as seen in many hundreds of communes, and there is actually no such thing as a group of selfish humans (though there can be selfish sub-groups): that is a herd or a flock.

A selfish actor who finds themselves alone in a group of altruists will maximally exploit them. However, when multiple selfish actors are in a group, they compete with each other, attempt to remove each other from the group, attempt to win the group over to their side against the others (one such process is what we refer to as "politics"), and attempt to thwart each other's ability to selfishly exploit the group, in a process that has been labelled "selfish punishment." The result is that the selfish strategy is self-limiting; selfish actors do not overrun groups of altruists because, being selfish, they compete with each other.
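
To see how that self-limiting dynamic can fall out of very simple assumptions, here is a toy numerical sketch (my own construction, not taken from the paper linked below): a lone exploiter does extremely well, but each additional exploiter pays a cost for competing with the others, until defection stops paying.

```python
def payoffs(n_altruists: int, n_selfish: int,
            produce: float = 10.0, fight_cost: float = 6.0):
    """Toy model: altruists each add `produce` units to a common pool; selfish
    agents produce nothing, split whatever they can extract from the pool, and
    pay `fight_cost` per rival selfish agent ('selfish punishment')."""
    pool = n_altruists * produce
    extracted = 0.5 * pool                        # assume half the pool can be captured
    altruist_payoff = (pool - extracted) / max(n_altruists, 1)
    selfish_payoff = (extracted / n_selfish - fight_cost * (n_selfish - 1)) if n_selfish else 0.0
    return altruist_payoff, selfish_payoff

for s in range(5):
    a, sf = payoffs(n_altruists=20 - s, n_selfish=s)
    print(f"{s} selfish: altruist payoff {a:5.1f}, selfish payoff {sf:6.1f}")

# One lone selfish agent earns 95.0 vs 5.0 for each altruist; by four selfish
# agents, mutual competition drops their payoff to 2.0, below an altruist's.
```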

Understanding selfish competition, we should expect that more competition between selfish actors will be beneficial for the group, and less competition between selfish actors will be harmful for the group. When we look at other complex systems with multiple poorly related selfish agents, this is what we see over and over. Gut microbiota with more diversity and more competition result in healthier humans who live longer on average.  Ecosystems with competition within niches are more stable and fertile and robust. Economies with more diversity and more competition within industries are more stable and grow faster. Political systems with more competition for power are more stable and provide more benefits for the society. This is why Democracy is beneficial, this is why the division of powers is beneficial, this is why anti-trust laws are beneficial. On the other side, a lack of sufficient competition is dependably harmful. Monopolies and cartels are harmful, dictatorships are harmful, systems that combine political and economic power are harmful, monocultures are harmful, invasive species are harmful, economies dominated by one industry or by one resource such as oil are harmful (and are usually not Democracies; competition between economic and political forces is extremely important), and this is why.

A property of complex systems with multiple poorly related selfish agents then is that more competition is beneficial to the system or group, and too little competition is reliably harmful. Based on this knowledge we can see why Communism and Fascism and Monarchies and dictatorships and Anarchist and Libertarian ideas should be dismissed from the political debate; they will always result in harm because they reduce or eliminate competition in either political or economic spheres, or both. We can also see that efforts to increase economic diversity and competition in developing nations, for example, could result in long term system change in the most positive way, as diverse economic interests seeking to protect themselves from political power is how Democracy arises.

This then could be the strongest and most beneficial lever to grasp for: it is hard to envision the developing nation that would object to efforts aimed at helping to create a more thriving economy, and long term that thriving economy will tend strongly to create the fertile ground for Democracy. One way we can do this is to help protect developing nations from the exploitation and dominance of large multi-national corporations, and this would be a highly effective project, in my opinion, for EA. Yes, this is the opposite of "free trade," but this is where the facts and theory lead us, in my opinion. 

For much more detail, proper citations, and support for my ideas, read my paper on competition here: theroadtopeace.blogspot.com

In contrast to "Do the most good", which I see alongside "Compassion" and "Loving Kindness", I see clearly the two wisdom traditions I have leaned into: Zen, the Soto School of Dogen Kigen (aka Dogen Zenji) and the Vajrayana of Karma Kagyu. (More precisely, the Mahayana on which those practices rest.)
What I see, rather than the high virtues of heroism, is a simpler set:
First, "Cease from evil" ... only then "Do good" and only ultimately "Do good for others".
And then, in the tradition of prajna/upaya  and bodhisattva aspiration: solidarity/empathy.

So much easier to market glory and fame! So much better to adopt the practicality of reasonable humility.

--Karma Chöpal (Ben Tremblay, from FB)

Addendum: "Don't be lucid and ironic. People will turn that against you, saying 'You see? I told you he wasn't a nice person!'"

p.s. "Aim to explain"? You mean rather than explore and inquire? Lecture, rather than discourse? How tragically revealing ...

Too short / too abrupt / cryptic ... or, too long winded / TL;DR / pompous and arrogant.
Something I brought to the table in context of "cognitive interview" with Law and Psychology ... criminology: 5 people responding to an event, at least 8 different stories.

Perhaps this sets the tone:
In the wisdom tradition I try to follow, "bliss itself can become an obstacle".
We each and every one of us strive to be happy. Isn't it sensible to aim for that by trying to be free from the distortions of fear and pain?

I'm sure the view from Mt Everest is quite wonderful. But I make no plans of taking myself to Base Camp!

/me recalls Marpa's advice to Milarepa: "Wonderful vision. Now, go back and just sit."
/me also recalls Chuang Tzu's answer to the Emperor's messenger. "Here, have some tea!"
