Aside: This is one in a series of posts I'm considering writing. I'm posting these publicly to get feedback and links to other posts/essays that may be relevant or may precede these. Another open question is whether these belong on the EA Forum, on LessWrong, or neither.
Do we plan and document too much or too little?
One of my frequent takes is that we (EAs? Researchers? People in my circle?) tend to charge forward too quickly without 1. defining our plan or 2. keeping track of, and documenting, what we are doing.
A common response: these processes are far too slow, and the resulting plans and documentation usually end up never being used. Learning how to plan and document well takes a long time in itself.
Biases towards under-planning and under-documentation
The time costs of planning and documentation are highly visible and felt right away. Furthermore, the thought that this work 'may never be used or recognized' may be particularly painful.
*However, the benefits (and avoided costs) may be underestimated because...*
- They are only seen much later, e.g.,
    - when we come back to re-plan the second version of a campaign in another context, or reanalyze the data to check an anomaly someone picked up
    - when we realize that the extensive work we have done was also done by a siloed group on the other side of the world
    - when we face a large disaster or a major opportunity to expand
- They are 'externalized'
- by other people or organizations we care about using our work or forking it
- by other divisions of our organization that don't fully recognize the benefit
- by ourselves or a successor group long in the future
- They are seen only through the 'avoidance of disaster'.
If an avoidable disaster occurs and causes the group to break up, the people who advocated for the worse (or better) choices cannot be punished (or rewarded). There may also be information loss here, so the 'lessons' are not spread to others in the space.
- These benefits are low-probability but high-impact (AKA in 'hit space')
Perhaps 90% of the time the documentation is not useful, but for the remaining 10% of the time it has a huge benefit. Perhaps a large success or failure makes replication particularly important. Perhaps a small potential error found in the program makes it extremely unclear whether it has any value. Perhaps a larger organization was working on a very similar project. Perhaps the program seems vastly successful and has only a small time window in which it can be put into policy, but this can only be done if you can clearly demonstrate all the steps you took.
- Poor attribution and overemphasis on the role of chance
There's an admirable push in EA to recognize the role of uncertainty in successes and failures, and that the optimal approach often involves funding many projects that will fail. However, some types of failure might be falsely attributed to randomness. Sometimes greater planning could have turned the project around; sometimes it could have revealed a low chance of success and a negative expected value before you started. If we overemphasize chance, we disincentivize this sort of planning.
The lazy attractor
This may also fall under the category of a bias I'd like to discuss further, perhaps in a different post: we are especially attracted to arguments that 'the easier/lazier thing to do (here, not documenting) is probably better anyway'. I think this is enhanced by groups and norms; if others do not uphold standards, then the standards have less value. We may even be keen to punish the overachieving strivers here, or the ones who seem to be scolding us for being lazy.
Biases towards over-planning and over-documentation / inaction bias
[to be continued]
This comes somewhat out of my experience with 'replicable' coding and processes for data science and social science. ↩︎
I'm lumping these together here, but I should define and separate these later. ↩︎
Sorry, I sound like your Dad now when he says 'good luck happens to people who worked for it', or whatever that expression is. ↩︎