Introduction
My estimated working life is long: I likely have around 40 productive years left. Given how valuable it is to achieve peak sustainable productivity, Joey and I have decided to run an experiment. Theoretically, if you spent an entire year on productivity experiments and thereby increased your output by 3% for the rest of your career, that would be worth doing, all else being equal: 3% of 40 years is more than a full year of extra output. This is especially true because the experiment does not take away from high-impact projects, but runs alongside them.
To do a strong trial you need three things: 1) an idea worth testing, 2) a method for measuring productivity, and 3) a methodology for testing.
An idea worth testing
There are millions of possible things that could affect your performance, and you cannot test them all at the same time. Sadly, there is little strong evidence on the specifics of how much a given technique increases output. Most of the research is very soft and focused on whether something helps at all without giving the size of the effect. Even if you had this data, due to huge interpersonal variability, it would be very hard to know how much things that work on others will work on you specifically.
Thankfully, this data is enough to generate a list of techniques that might help in significant ways. With a bit of research and a brainstorming session you should be able to list several dozen. Try to think divergently about your options. One angle might be to consider ways to save time, but you might also consider ways to improve focus or increase your energy levels.
A large list is great, but you will never have time to test everything that might work, so the next step is grouping and ranking your options.
Grouping and ranking
If you knew what techniques were best, you would not need to run the experiments in the first place, so really your rankings will be tentative guesses. Try not to get too attached to them.
You will be able to get a basic estimate of which ones to start with if you rank them by:
- How effective others seem to find them
- How effective you think it will be on you relative to the general population
- How long/hard it would be to test
A good one to start with might be something particularly easy because this will help you get into the habit of testing and give you an easy win to motivate yourself.
A method for measuring productivity
Measuring productivity is a challenge, particularly for more diverse jobs. For jobs with a clear metric of success (e.g. factory work), some metrics are fairly unambiguous, easy to measure, and correlate strongly with your endline goal, and thus with “true productivity”. For jobs with a variety of tasks, or where the tasks are less measurable (e.g. being a good manager), such evaluations are harder to make. Unfortunately, our job and most EA jobs fall much more into the latter category. Thankfully, we do not need a perfect metric of productivity, just some measurable proxies we would expect to correlate with it. When these are combined into an index, you can get a decent sense of whether an experiment is working and how large its effect on you is relative to the other experiments you are running.
If the index is weighted before you start gathering data, it is harder to game the results. However, you might find that as the process goes on you have to change the weightings, or the metrics themselves, to better reflect your intuitive sense of productivity. For example, one score might correlate very strongly with another and thus not be worth tracking separately. I also recommend a sanity test: run an absurd experiment (e.g. working 1 hour a week) through your system and see how it scores. If it scores well, you need to make some adjustments.
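As an illustration, here is a minimal sketch in Python of what a pre-weighted index and that sanity test could look like (all metric names, scales, and weights are hypothetical placeholders, not the ones we actually used):

```python
# Minimal sketch of a pre-registered weighted productivity index.
# All metric names, scales, and weights below are hypothetical placeholders.

WEIGHTS = {
    "subjective_productivity": 0.4,   # 1-10 daily self-rating
    "hours_worked": 0.3,              # from time-tracking software
    "colleague_rating": 0.2,          # 1-10 daily rating by a colleague
    "missed_deadlines": -0.5,         # counts against the index
}

def index_score(day: dict) -> float:
    """Combine one day's raw metrics into a single weighted score."""
    return sum(weight * day.get(metric, 0.0) for metric, weight in WEIGHTS.items())

# Sanity test: an absurd "work 1 hour a week" scenario should score clearly
# worse than a typical day; if it doesn't, the weights or metrics need adjusting.
absurd_day  = {"subjective_productivity": 2, "hours_worked": 0.2,
               "colleague_rating": 2, "missed_deadlines": 3}
typical_day = {"subjective_productivity": 7, "hours_worked": 8,
               "colleague_rating": 7, "missed_deadlines": 0}
assert index_score(absurd_day) < index_score(typical_day)
```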
The index of variables I used was a combination of:
- My daily subjective productivity score
- Number of hours worked (using time tracking software)
- Daily independent assessment of my productivity by colleagues
- The number of unintended breaks/distractions
- Number of creative insights generated
- Number of missed deadlines
- Attentiveness during management Skype calls
- The number of changes suggested by the reviewers of my research reports
- My personal happiness and subjective enthusiasm for work.
We also flag, in a notes section, confounding variables that might introduce noise into the data, such as sickness. To analyze the results, we take the value or rating on each criterion and standardize it, so that the baseline average is zero and each value is expressed as a number of standard deviations above or below the mean. I expect each of these numbers to give an incomplete picture on its own, but together I am fairly confident that a week where we score well on all these metrics relative to baseline is a more productive week. The numbers, along with a notes section of relevant events, are recorded daily.
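For the curious, that normalization step looks roughly like the sketch below (using pandas, with placeholder column names; the same arithmetic can be done in a spreadsheet):

```python
import pandas as pd

# Daily raw metrics; column names are illustrative placeholders.
df = pd.read_csv("daily_metrics.csv", parse_dates=["date"]).sort_values("date")
metric_cols = ["subjective_productivity", "hours_worked", "colleague_rating"]

# Standardize each metric against a baseline period (here, the first 90 days):
# the baseline mean becomes zero, and each value is expressed as a number of
# standard deviations above or below that mean.
baseline = df.head(90)
z = (df[metric_cols] - baseline[metric_cols].mean()) / baseline[metric_cols].std()

# Combine into a simple index (equal weights here) and average it per week,
# so each experiment week can be compared against the baseline of zero.
df["index"] = z.mean(axis=1)
print(df.groupby(df["date"].dt.isocalendar().week)["index"].mean())
```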
A methodology for testing
The final thing you need is a consistent methodology for testing. This makes it clearer when something works and allows you to build in more robustness by reducing the chance of noise influencing your results.
We plan on testing each method for one week. We will then compare the results to the previous week and to the average of the last three months. Doing one experiment each week over three months will result in around twelve experiments. However, one week is not long enough on its own: novelty could easily account for most or all of the effect, and unrelated life events can skew the results. A week where your productivity increases relative to the average is suggestive, but not conclusive. Re-testing the three most promising experiments will allow the effect to be more easily separated from the noise. This second test could be carried out over a longer period, say one month, with fewer experiments. During this time minor variations could be tried. For example, if working at a coworking space yielded big improvements in the first week, during the month-long test you might try out different desks within the space. After six months, three of the experiments will have been double-tested, and nine more will have been tested once.
Afterwards, we will insert the data into a spreadsheet along with other factors, like cost, difficulty of implementation, and the value of an extra productive hour. This way, one can calculate which of the productivity experiments are worth integrating in the long run. If three or more members of the Charity Entrepreneurship team conduct experiments, that could start to inform others about what would be best to test or try first, whether personally or at an organizational level.
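One simple way to do that calculation, sketched below with made-up numbers and a hypothetical value per productive hour, is to convert an experiment's measured effect into extra productive hours per year, price those at the value of an hour, and subtract the running and implementation costs:

```python
# Sketch of the "worth integrating long-term?" calculation; all numbers are made up.
VALUE_OF_PRODUCTIVE_HOUR = 50       # hypothetical value, in USD
WORKING_WEEKS_PER_YEAR = 48

def net_yearly_value(extra_hours_per_week: float,
                     yearly_cost: float,
                     implementation_hours: float) -> float:
    """Yearly benefit of an intervention minus its money and setup-time costs."""
    benefit = extra_hours_per_week * WORKING_WEEKS_PER_YEAR * VALUE_OF_PRODUCTIVE_HOUR
    costs = yearly_cost + implementation_hours * VALUE_OF_PRODUCTIVE_HOUR
    return benefit - costs

# e.g. a coworking space: +2 productive hours/week, $2,400/year, 10 hours to set up
print(net_yearly_value(extra_hours_per_week=2, yearly_cost=2400, implementation_hours=10))
```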
Conclusion
We expect that the results of systematic testing like this could lead to large gains in productivity. It will also create generalizable data for others to learn from. Others using the same methodology could also contribute and collectively build a knowledge base about how to best improve our output as a community.
If you're taking suggestions for things to test, personally my (unquantified) single most successful productivity intervention yet has been putting a treadmill under my desk, and then stacking a box on the table to raise my laptop to elbow height.
My productivity per hour and general willpower to work is unchanged, but I'm now able to be on the computer for much longer hours at a stretch because I don't have to deal with the postural pain of sitting too long, and I no longer have to fight off my natural tendency to fidget and avoid being still. (I just switch from walking or standing to sitting or lying down as I get tired, with no interruption in work flow).
Perhaps more notable is the, I think, not unreasonable expectation that the additional 1.5-3.5 hours of walking per day will ultimately increase my total number of productive years and decrease my sick days. That's not so easy to test on myself, but the benefits of walking are pretty well established. (Though of course the primary motivator there is not productivity, really.)
I've also noticed improvements in baseline mood immediately after a long "walk", better general physical stamina over time (e.g. I can walk farther without discomfort, and I don't get as easily tired if I take on a task which requires being on my feet all day), and better lower body mobility and flexibility at the gym (e.g. deeper squats with better form).
Conflict of interest: it may end up being convenient for me if the CE office ends up getting a treadmill desk :P
Thank you for the suggestion. I'm always open for ideas on productivity improvements, especially if they directly affect charity entrepreneurs ;)
We generated a list of 100 ideas and prioritized them based on things like expected effect on the general population, expected effect on me and Joey, ease of testing, etc. As far as I remember, rotating positions (from sitting on an office chair, to standing, to sitting on a ball, to lying on a couch) is more strongly recommended than any single one of those on its own. I think testing all of the tools you can use to be physically active would be an interesting separate experiment in itself. Have you ever tried a mini-stepper? How did you find the effects of a treadmill compared to a mini-stepper?
I haven't tried a mini-stepper! Next time I'm at the gym I'll check if they have one I can try. Even if it does not work as well, it would certainly be a lot cheaper and more portable.
Untested Speculation: People using steppers/bikes etc. might stop exerting conscious attention to move once they get sufficiently absorbed in their work. A special property of treadmills is that if you stop, you'll be carried backwards and away from your keyboard - this trains you out of stopping pretty instantly. Steppers/bikes/etc wouldn't automatically have this property - though perhaps one could mimic the training by adding a "don't stop!" signalling noise or something. Ultimately I think it's probably important that the movement not require much conscious attention.
Object level: I assume you are already familiar with The Productivity Project?
https://www.amazon.com/Productivity-Project-Accomplishing-Managing-Attention/dp/1101904038
The author attempts a series of productivity experiments, similar to what you are planning (if not quite as pre-registered and systematic).
Thank you for the recommendation. Yes, we took some ideas or their variations from that book.
Interesting approach - I look forward to seeing the results!
Do you plan to share your spreadsheet before then so others can begin taking the same approach for themselves more easily (and in a way that would allow their data to be combined with yours more easily)?
Thanks for posting your methodology ahead of time! I'm glad to see this "open science" practice on the Forum (knowing someone's research plans ahead of time lets you see whether they might be changing their methodology to bolster apparent results, so this post serves as an extra dose of integrity for the research).
The productivity-hacking movement is large and prolific, but it doesn't produce all that many systematic experiments. I look forward to seeing your results.
Thanks, I will definitely post the results. I also encourage others to run the tests themselves, so we have more generalizable data.
Were results ever published?
I realize it was 4 years ago, but do you have a vague recollection of whether this turned out to be useful? Seems really interesting!
Do people have suggestions for productivity techniques that they would recommend trying in a system like this?
I work full time as a researcher and I do something a bit like a simplified version of the system in this post. However, I've run out of ideas of things to try. I'm wondering if anyone has tried any variations to their routine or work habits that they have found particularly valuable and would recommend trying?
Some things I've tried and found helpful, as examples (or in case any readers find them helpful suggestions):
(I could also share variations I've tested but haven't found useful, if anyone's interested in those too).
Things that have increased my productivity:
If you normally sleep less than seven hours a night, you could experiment with sleeping an extra hour every night for a week and see how it affects your productivity.