I have a potential opportunity to test how to improve institutional decision-making in the real world. I thought this might be the right place to find resources and advice on how to run the most robust test possible.
The background
For the last 10 years I have run my own business. I set it up as a for-profit to “help the world make better decisions”, but I have since found that few organisations would buy this, so I have ended up positioned as a coach for senior leadership teams. However, I still sometimes work directly on decision-making with for-profit or non-profit organisations.
The opportunity
Last year a SaaS (software-as-a-service) company in Australia (where I live) approached me. They had noted in their internal staff engagement survey that people complained about their decision-making processes.
They committed some funds to this, but they ended up just asking me to do a series of 1-hour keynotes. These may have had limited impact, and we didn’t measure any changes.
But recently I worked with their product and tech team and suggested some changes to their product design process to improve the decision-making.
The head of department loved it, and we have a meeting in around a week to discuss how we can actually implement the suggested changes.
Why am I posting?
I’ve become very interested in focusing my EA journey on improving institutional decision-making (IIDM). Although this is clearly not a major institution like Meta or Amazon, it is at least a tech company of sorts with several thousand employees, so it might be a possible proof of concept.
Now, although I have a background in psychology, which exposed me to RCTs, and I understand them in principle, I do not have the technical expertise to run a high-quality test: for example, how best to set it up, or how to statistically analyse the results.
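To make the analysis question concrete, here is a minimal sketch of one simple, robust way to analyse a small RCT-style comparison: a permutation test on the difference in mean "decision-quality" scores between teams using the new process and teams using the old one. All scores, team counts, and the outcome measure itself are made-up placeholders, purely for illustration.

```python
import random
import statistics

# Hypothetical decision-quality scores (made up for illustration);
# in practice these would come from whatever outcome measure the study defines.
treatment = [7.1, 6.8, 7.5, 8.0, 6.9, 7.7, 7.2, 7.9]  # teams using the new process
control = [6.5, 6.9, 6.2, 7.0, 6.4, 6.8, 6.6, 7.1]    # teams using the old process

observed = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: under the null hypothesis that the process change has no
# effect, the treatment/control labels are exchangeable, so we shuffle labels
# many times and see how often a difference this large arises by chance.
random.seed(0)  # fixed seed for reproducibility
pooled = treatment + control
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = (statistics.mean(pooled[:len(treatment)])
            - statistics.mean(pooled[len(treatment):]))
    if diff >= observed:
        extreme += 1
p_value = extreme / n_perm

print(f"observed difference: {observed:.2f}, one-sided p = {p_value:.4f}")
```

A permutation test makes very few distributional assumptions, which is one reason it suits small samples like a single department's teams; an academic partner would still need to advise on randomisation units, clustering, and the outcome measure itself.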
So, I’m posting to see if there is accessible expertise here that might advise me, or point me in the right direction, on how to create a robust test of IIDM that is a win-win-win for the organisation, myself, and EA.
The risks
I estimate a 10–30% chance that they will buy into some kind of proper test. However, as you can see from the above, this is much more likely than with a cold approach to any given organisation: they have already identified the problem, sourced an expert (me!), and identified a specific process to work on.
In addition, I am not currently proposing to build values alignment into this test, just improving their decision-making in one (large) team.
Interesting! Do you know the content of the complaints that people were making about the existing decision-making practices?
In general, some IIDM-related resources I've grown from reading:
This is just a Nash equilibrium where everyone has an incentive to perpetuate the status quo. Specifically, it's an equilibrium the system is stuck with even if everyone learned that there existed a highly-certain-but-hypothetical Nash equilibrium that's better for everyone; they just can't easily switch to it because doing so requires simultaneous coordination.[2]
Thus we have the concept of "coordination activation energy/thresholds": a one-time upfront cost you have to pay to reach a higher Nash equilibrium. Perhaps the best, and severely under-utilised, tool I know of for overcoming activation thresholds for coordination is the idea of assurance contracts (wikipedia).
If any of the institutional problems the company faces are the result of an inadequate equilibrium that can plausibly be addressed with an assurance contract, I'd be happy to put you in touch with people who know more about it and (optionally) share my ideas for how to practically go about it.
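The coordination problem and the assurance-contract fix described above can be sketched as a toy stag-hunt-style model. All payoffs, team sizes, and thresholds here are invented for illustration; the point is only the structure: switching alone is dominated, but a conditional pledge that activates only at a threshold removes the risk of being stranded.

```python
# Toy payoffs for one employee (all numbers made up):
# sticking with the status quo always pays 1; switching to the new process
# pays 2 if enough colleagues also switch, but 0 if too few do.
THRESHOLD = 30  # the coordination "activation energy" (hypothetical)

def payoff(switches: bool, n_switchers: int) -> int:
    if not switches:
        return 1
    return 2 if n_switchers >= THRESHOLD else 0

# Without coordination, switching unilaterally leaves you worse off,
# so the bad status quo is a stable Nash equilibrium:
assert payoff(True, n_switchers=1) < payoff(False, n_switchers=1)

# An assurance contract makes each pledge conditional: nobody is bound
# unless the threshold is reached, so pledging is risk-free.
def pledge_outcome(n_pledges: int) -> int:
    n_switchers = n_pledges if n_pledges >= THRESHOLD else 0  # contract activates only at threshold
    return payoff(n_switchers > 0, n_switchers)

print(pledge_outcome(10))  # below threshold: contract never activates, status quo payoff
print(pledge_outcome(35))  # threshold met: everyone switches together, better equilibrium
```

The design point is that the contract converts "switch and hope others do too" into "switch only if enough others have committed", which is what lets the system jump equilibria in one step.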
Thanks Emrik, I'll check some of these out.
I'm conscious of it being a mid-sized corporate, so I need to keep it pretty simple. I'm focusing on helping them improve their expected value calculations as they pursue new products and features. They call EV 'predicted ROI', which reminds me of the importance of using their language and avoiding EA/philosophy language.
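For readers unfamiliar with the framing, the "predicted ROI" calculation amounts to a probability-weighted sum over scenarios. A minimal sketch, with entirely made-up scenario probabilities and payoffs for a hypothetical feature:

```python
# Hypothetical "predicted ROI" (expected value) for a candidate feature.
# All probabilities and dollar figures are invented for illustration.
def expected_value(scenarios):
    """scenarios: list of (probability, payoff) pairs; probabilities must sum to 1."""
    total_p = sum(p for p, _ in scenarios)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payoff for p, payoff in scenarios)

feature_a = [
    (0.2, 500_000),   # big success
    (0.5, 100_000),   # moderate uptake
    (0.3, -150_000),  # flop (net of build cost)
]

ev = expected_value(feature_a)
print(f"Predicted ROI for feature A: ${ev:,.0f}")  # prints $105,000
```

Even this simple structure is useful in practice because it forces the team to state probabilities and downside scenarios explicitly rather than quoting a single optimistic number.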
Next steps: I'll look for an academic partner to help create a robust study.