
lsdoyle


Thank you for sharing your thoughts, Naryan. I agree with your main point that EA approaches are mostly limited to the complicated domain of the Cynefin framework. I have often felt frustrated with EA's focus on complicated solutions that are easier to quantify and implement, rather than engaging with the complexity of an issue, taking a more preventative approach, and intervening at the system level. And I think you're right that complexity thinking is needed for "effective coordination in service of higher-leverage goals."

However, I think both the Cynefin framework and the Model of Hierarchical Complexity (MHC) will be useful in achieving this aim.

The difference between the two is that the Cynefin framework is a model that helps to classify systems and respond to them appropriately, while the MHC focuses on the cognitive ability to understand complexity. Obviously, people's level on the MHC will affect their ability to accurately determine what kind of system they're dealing with.

Hence, the MHC is a theory that describes and classifies people's differing abilities to understand and conceptualise complexity. It is not an action-orientated framework: it doesn't detail what to do about complexity, as the Cynefin model does. I think this is the issue you run into in your post with the education example, when you ask "Which of these levels is 'doing the most good'?"

Cynefin, on the other hand, can help you understand more clearly the domain you're trying to operate within. To see how the Cynefin framework can be applied, it might be useful to take a look at the EU Fieldbook Managing Complexity (and Chaos) in Times of Crisis.

Regarding your point that "not everyone is suited to managing each level": within the Cynefin approach, people are asked "what can you change at your level?" so that they take action at the level at which they operate, for example, an employee acting within their own team, or a resident acting within their local community.

It sounds like what you're suggesting is that we can do the most good by helping people to see things at a higher level of complexity than they might currently be disposed to. Is that correct? This is a key aim outlined in Hanzi Freinacht's The Listening Society.

Also, I believe Dave Snowden has critiqued the MHC for being information/algorithmic-centric (I suspect that's why it's attractive to the EA scene, with its strong emphasis on measurability at the expense of other approaches) and for using the dictionary definition of complexity, which does not take complex adaptive systems (CAS) into account. But I can't find much other information on this; can anyone help?