Fergus McCormack


Fergus McCormack's Shortform

I wrote a very rough draft of an idea I had. It was just a stream of consciousness and I didn't really edit it. I'm not sure what the standards are like on the EA Forum: I would like to invest time in developing this further, as well as other posts I could possibly write, but as I mentioned at the bottom of this post, I'm at a critical juncture in my career and need to invest my time and energy elsewhere.

If I can produce forum posts with potentially interesting ideas, but that are of a relatively low standard, is it better for me to post them without worrying about polish, or to wait until I have more time to invest — writing less and editing more?

 

Epistemic status: medium-low (I have some conviction in this idea, but I’m posting this without much rework in an attempt to overcome perfectionism, partially inspired by Neel Nanda. This likely means that some of the things in this post will be wrong, but I hope it could be useful nonetheless). 

Summary: we should proceduralise activities more systematically. Learning how to proceduralise tasks requires a lot of up-front investment, which could be reduced by centralised, shared ‘checklists’ designed to increase effectiveness at particular tasks.

 

I’ve always had a preference for proceduralising (approaching tasks in a systematised manner), but my exposure to EA and the rationalist community has inundated me with high-quality information about how to do effective research, learn effectively, prioritise effectively, and make better decisions.

The bottleneck holding back my progress is now the implementation of ideas that I have high conviction in. To implement these ideas effectively and internalise them into my System 1 would require a lot of deliberate practice and highly engaged, focussed effort.

I want to somehow outsource or expedite this process, and this seems like an important issue that is likely to be affecting other EAs as well. One potential solution is the more deliberate development of effective checklists, ideally collected in some kind of centralised database.

Checklists have been advocated by Charlie Munger (Warren Buffett’s business partner) as an effective way of making better decisions and overcoming biases. They help ensure that we prime our brains to consider important aspects of a problem that we might not naturally consider (discussed more thoroughly in https://www.amazon.de/-/en/Atul-Gawande/dp/0312430000).

My mental model of highly skilled workers in a complex field is that they have internalised complex procedures, and that with some effort these mental processes could be made explicit to a greater degree. Checklists do not have to mean mindlessly following a recipe book: they can include open-ended questions designed to elicit deeper thinking.

 

Implementation

I’m still in the early phases of developing this idea, but I’ve started to implement it as part of my workflow. I use OneNote and have sections entitled ‘Working Best Practices’, ‘Research Best Practices’, etc. As an example, here’s something taken from the page ‘when reading a research paper’ (I stole the questions from EA-related sources, mostly from https://effectivethesis.org/resources/):

Short:

  1. Main question
  2. Methods used
  3. Data required
  4. Results

Long:

  1. Can the approach used in a study be applied to other contexts, countries or time periods?
  2. What assumptions are implicit?
  3. Are all the assumptions sensible? 
  4. To what extent might the results be sensitive to the assumptions? 
  5. How can they be relaxed? 
  6. Are there any unnecessary assumptions? 
  7. Is the approach used the most appropriate one? 
  8. Have new techniques been developed since the paper was first written? 
  9. Have all relevant statistical tests been carried out?
  10. Are the results consistent with expectations, or earlier work? 
  11. Are the surrogate or constructed variables the most appropriate for the task and can anything be said about likely biases? 
  12. Are there any implications of the study which have not been fully drawn out by the author? 
  13. Can these be exploited in your work?

 

With a checklist, rather than having to learn all of the questions to ask yourself when reading a paper, you just have to remember where to find these questions, which hopefully frees up time to focus on other things.

I imagine a centralised EA checklist, with different sections differentiated by clear cues. Each section would have an inbox where people can add ideas without meeting a high epistemic bar, to encourage creativity. Each section would then have two additional views: a short version highlighting the best four items of a checklist, and a long version highlighting the best twenty, with voting to help ensure that the best items rise to the top. Comments and feedback would also help develop our understanding of the pitfalls of certain procedures.

This approach might work well for activities where the order of actions is not important; however, in many cases it is. Perhaps a better solution would be for users to submit either individual items or whole procedures. Users would vote on individual procedures, which would be collated into a second inbox. From there, users could bundle procedures together into ‘packages’, which could then be voted on.
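To make the proposed data model concrete, here is a minimal sketch in Python. All names and structures here are hypothetical illustrations of the idea above (sections with a low-bar inbox, voting, and short/long views), not a real implementation — an actual version would live on a web platform with user accounts.

```python
from dataclasses import dataclass, field


@dataclass
class Item:
    """A single checklist question or step, submitted with a low epistemic bar."""
    text: str
    votes: int = 0


@dataclass
class Section:
    """One activity (e.g. 'reading a research paper') in the database."""
    name: str
    inbox: list = field(default_factory=list)  # unvetted submissions

    def top_items(self, n):
        """Top-voted items: n=4 gives the 'short version', n=20 the 'long version'."""
        return sorted(self.inbox, key=lambda item: item.votes, reverse=True)[:n]


# Usage sketch: votes determine which items surface in the short view.
section = Section("Reading a research paper")
section.inbox.append(Item("Main question", votes=5))
section.inbox.append(Item("Methods used", votes=3))
section.inbox.append(Item("What assumptions are implicit?", votes=4))

print([item.text for item in section.top_items(2)])
# → ['Main question', 'What assumptions are implicit?']
```

Ordered ‘packages’ for sequence-sensitive tasks could be modelled the same way: a bundle holding an ordered list of items, with its own vote count.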

These are early-stage ideas and could almost certainly be improved upon. I would be happy to help develop this, but I have limited programming experience, so I would struggle to implement ad-hoc features. If anyone is interested in helping me implement this, that would be great! I do think this has the potential to multiply EAs’ impact.

Caveats and downside risk: it’s possible that these checklists would act as a mental crutch and undermine the long-term development of the people using them: learning and internalising the procedures of a task may facilitate greater creativity and better work in the long run, and developing these checklists could end up being a distracting diversion for EAs rather than a good use of time.

I think it’s worthwhile proceeding despite these risks: if they materialise, I hope EAs would notice, and users could be encouraged to learn the checklists rather than apply them blindly.

Another risk is that this could result in EAs approaching problems in a similar way, which could stymie progress in the long run. This could be mitigated by having another section with more creative ways of approaching problems. But I think the gains from optimisation are so large that they outweigh the trade-off between creativity and proceduralisation. The ‘efficient frontier’ comes to mind. If we reach a point where EAs are consistently optimising to a high degree, then we can start having more conversations about potential trade-offs.

 

Next steps

This is something I would be interested in taking on at some point, but I'm at a critical juncture in my career right now and need to devote my time and energy to figuring out my next steps (likely applying for Economics PhDs). 

I haven't yet looked into how this could be effectively implemented: it seems likely that there are online platforms that could incorporate this, but I don't have much expertise in the field.

If anyone is interested in collaborating on making this happen, please comment 'interested'.