I'd welcome help recoining the term 'Contextualized Worthiness'. Curious for thoughts, feelings, criticisms!
More on my coaching trials with a dozen EA leaders here
My guess is that 80K is likely unaware of this, but the concept of 'replaceability', at least as my clients almost exclusively seem to interpret it, seems to wreak havoc as a mental model on people's self-assessments of whether they should take on or stay in a given role. I see lots of evidence that it can remain corrosive even for those who have held a role over a long period of time.
This feels like a big problem. In fact, I’d go as far as to say that I believe it’s a primary culprit for imposter syndrome and decision paralysis in EA.
Anxieties around replaceability are often delivered to me as a completely decontextualized hypothetical exercise, along the lines of: "Is it possible that there's someone in the world who would be better at this role than me? If so, my taking this role could be critically bad for the world." The weight of this is likely exacerbated in leadership positions.
Putting high credence in decontextualized replaceability arguments seems obviously flawed to me, but more importantly, it seems to have the psychological effect of egregiously warping risk calculations around career exploration, the patient accumulation and consolidation of career capital, and particularly the willingness to assume responsibility and take action.
You can basically condemn yourself (internally & socially) as a bad person for taking a role.
A set of considerations that I believe calibrates people better would be called something like "contextualized worthiness" considerations.
Here’s a handful:
- Distinguishing between ‘being quite well-placed contextually’ and ‘being best-placed decontextually’
- Comparable talent landscape – where do comparable people seem to concentrate? Does it seem like none/some/most/all would be interested in taking on this role?
- Whether you were selected out of a search process
- Whether the costs of running an expansive (or infinite) search process are prohibitive enough to rule it out
- The importance of in-network trust in having you assume the role (versus a hypothetical ideal stranger), especially if respected individuals in a domain are actively encouraging you
- Nearly everyone is pretty poor at approximating skill at a distance, or even at knowing what it takes to be great at something, unless it can be identified by highly specialized credentials (and _even then_…). This seems even more true of complex leadership & generalist positions.
- Whether the position/org would have existed counterfactually
- Institutional knowledge – being in the role/org for a reasonable amount of time, plus rapport and sync with surrounding individuals, is not trivial for the ideal stranger to attain
- Reversibility (sort of) – can you run a set of experiments in a fixed amount of time that will allow you to relinquish the position/fold the project if it turns out you weren’t a good fit?
- Grace – having grace and forgiveness for yourself if you truly thought something through, took action, and happened to be wrong
- [the list can continue]
The above ‘contextualized worthiness’ considerations often have the effect of getting people to track more inputs from reality, rather than relying too heavily upon an abstract thought exercise that yields an absurdly high bar for action and often bottoms out in a nasty set of implications for any misstep.
If 80K doesn't already plan to do this, a suggestion for remedial action would be an additional series of posts adding nuance to this concept. Many people I speak to could use it.
To their credit, they do seem to have addressed some misconceptions and attempted to nuance it a long time ago: https://80000hours.org/.../replaceability-isnt-as.../....
I'd wager that corrections would stick better in the community if misconception examples, and even anonymized case studies, were included.
Nonetheless, I still think the point about correcting how things spread memetically holds. If we keep seeing evidence that very unhelpful versions of this concept are still floating around, and that they severely affect (potentially) important people, more should be done to address it.
It could be claimed that this bastardizes the concept, but how concepts are originally designed and how they spread memetically are very different things.
Worse still, interacting mental models often reinforced in EA can make people feel morally very bad about inaction.
Someone made the point that knowing how messages will spread is quite hard. Were I speaking to someone from 80K, I'd hope the tenor of my message would be: "Hey, I get that mass media is hard, and you've largely been doing great, but we've potentially (re)discovered something pretty big as a result of your messaging. Would you seriously consider following up here?"