
This is a retrospective post for the Tools for Collaborative Truth Seeking project. I recommend reading it if you're planning to write tutorials for the Forum, pair forum posts with live streams, or run a short project involving user interviews.

Some quick notes: the project took around 55 hours of work in total, and consisted of 8 forum posts and 8 live events in the EA GatherTown, where we discussed how to use each tool and then tried it in a short exercise. The posts went up once a day over a week and a half; the live streams were initially held the same day as the corresponding post, but about halfway through we switched to the day after.

I feel the two key mistakes were not understanding where the list of tools had come from and not re-assessing as the project grew. I'll also mention a couple of more specific points about the format and timing of the final product. 

Key Mistakes

I feel the biggest mistake I made with this project was not asking where the list of tools had come from. Even assuming it came from a great source, not being able to reproduce the selection process really limited my ability to write about the true reasons why a tool was on the list, and how it compared to other tools[1].

The other central mistake was that the project changed quite a lot over time, and we never stopped to re-assess. Initially, the project was just the wiki page; then we added a sequence of posts explaining the tools; then we added livestreams. At some point, the project had changed enough that we should have stopped and looked at it with fresh eyes.

I think this affected the quality of the final product: it was less cohesive than it should have been, and the timing was a little off[2]. An (untested) rule of thumb I'll be trying to implement is to re-assess all of your assumptions about a project whenever you make a decision that seems likely to at least double the time the project will take.

More Specific Mistakes

Scheduling

The tight schedule generally worked well for keeping people interested, but the biggest scheduling mistake was not leaving enough time between a post going up and its event. Our best-attended event was the one for Squiggle, whose post went up a few days before the event because of the weekend (though this was also our most popular post). I suggest posting event announcements right at the beginning of the project, so people can book time into their day to come to live events.

The other minor issue was that some people who came to several events said they wished the events had been spaced out slightly more, maybe every other day. I think this would be particularly important for projects longer than ours, or where you're expecting the same people to turn up to several live events.

Livestreams

Attendance for some livestreams was low (0-1 people). The format of having an exercise to do seemed useful. The main mistake was not bringing the developers of the tools in earlier: we should have advertised their attendance and/or involved them more in picking exercises that showcased the tools well.

User Interviews

We ran the user interviews less like writing feedback and more like software user testing. This worked well when we could get access to people in the target audience, and was not so helpful otherwise; I recommend putting effort into reaching your target audience for this kind of interview. Interviewees who understood this was the type of feedback we were looking for were also more helpful.

LessWrong

The posts generally did very poorly on LessWrong. This could be because the titles followed more of the Forum format (very explicit, not weird at all), or because LessWrong is more focused on AI than the Forum is. I think the two mistakes here were not talking to the LessWrong team about the project, and simply cross-posting rather than tailoring a version to LessWrong (e.g. with more AI examples and more individually-focused exercises).

Conclusion

Overall, I was quite happy with how the project went. The key metrics will be how many people are using these tools in 6 months and what they're using them for, but I'm cautiously optimistic. The key mistakes were not re-assessing the project as it grew, and not thinking carefully about how we formed the list of tools that were included.

  1. ^

    For those interested, the initial list came from a brainstorm on an epistemics Slack, though it was altered quite substantially over the course of the project.

  2. ^

    e.g. I wish we had posted all of the event announcements a week or two in advance of the posts going up, so people could plan ahead.

Comments

Thanks for doing this! I appreciate the willingness to think about mistakes, but for what it's worth I would also be interested to hear what went well. At the end you allude to "key metrics" that haven't resolved yet, but it might be worth sharing what the initial results are?

Hi! Sorry for the delay in replying -- we've now posted the metrics, if you're interested.

I just want to shout out and say that, although my only interaction with the series has been through reading the posts, it has had a big impact for me and my workplace. Several coworkers and I now regularly use Excalidraw and Guesstimate, and this has shaved about a week off of a large project. It's just napkin math, but we estimate our project will save 50 lives/week with a 1 in 10 chance of success, so one week saved × 50 lives/week × 10% ≈ 5 lives. By those numbers I'd say you saved about 5 lives with this project. Anyway, I just wanted to say thank you!

Thanks for sharing!
