
TL;DR: Taking an hour to reflect on how effective you were this year, and how you can use that awareness to shape your behavior next year, is likely one of the highest-leverage uses of your time.

This tool is designed to give you a structured, yet personalized way to do that. It can also be used by groups for collective and individual reflection.


End-of-Year Effectiveness Check-In: A Tool for Personal Reflection 

Intro 

For the past 8 years, I have taken time at the end of every year to reflect on what went well, what didn’t, and how I want to shape my intentions for the coming year. I’ve done this on mini solo retreats, using Year Compass, with friends, and on planes. This year, I wanted to design something specifically for the EA community around the core ethos of bringing our resources into greater alignment with our values. 

Suggestions for using this tool individually:

  1. Set aside 1-3 hours of deep focused time.
    1. Spend ~1/3 of your time on each section, with Step 3 likely being the most important. 
    2. If you prefer, you can print out this tool: scale printing to the width of the page and print each tab.
  2. Pick the top 2-3 intentions going forward and share them with someone who can support you in prioritizing them.
  3. Put these intentions in a place you’ll encounter them regularly. This could be recurring events on your calendar, printing them and putting them on a wall, adding them to your priority management systems, etc. 

 

Ideas for using this in a group setting:

  1. Here’s a run sheet for a 1.5 hr event that supports a group of people using this tool.
    1. 5 mins - Intro: Overview the tool, why it’s valuable, and how the event will go.
    2. 40 mins - Begin: Individually complete the parts of Sections 1-7 that feel most relevant.
    3. 10 mins - Check in as a group: How's the process going? Any questions or reflections that might help others use the tool well?
    4. 20 mins - Final Section in Pairs: Pair up and spend this time working through the final section together.
    5. 15 mins - Share out & Debrief: What are 1-2 of your intentions? How can people present support you in achieving them? How did this process go? 

Invitations for Feedback & Further Exploration

If you have ideas for how to improve this tool, thoughts on who might benefit from using it, or want to collaborate on making other tools that might support the EA movement, send me a message. I feel particularly motivated to explore ways the EA community can support one another in capacity building, cultivating greater self-awareness, and enhancing our ability to collaborate across differences.

Comments



Woo! Thank you for sharing this! It looks awesome!

Thanks for making this! I also feel like I get a lot of value out of quarterly/yearly reviews, and this looks like a nice prompting tool. If you haven't seen it already, you might like to look at Pete Slattery's year-review question list too!
