Hi everyone,

The full review is here.

Below is the summary:

 ----

This year, we focused on “upgrading” – getting engaged readers into our top priority career paths.

We do this by writing articles on why and how to enter the priority paths, providing one-on-one advice to help the most engaged readers narrow down their options, and making introductions to help them enter.

Some of our main successes this year include:

  1. We developed and refined this upgrading process, having focused on introductory content last year. We made lots of improvements to coaching and released 48 pieces of content.

  2. We used the process to grow the number of rated-10 plan changes 2.6-fold compared to 2016, from 19 to 50. We primarily placed people in AI technical safety, other AI roles, effective altruism non-profits, earning to give and biorisk.

  3. We started tracking rated-100 and rated-1000 plan changes. We recorded 10 rated-100 and one rated-1000 plan change, meaning that with the new metric, total new impact-adjusted significant plan changes (IASPC v2) doubled compared to 2016, from roughly 1200 to 2400. That means we’ve grown the annual rate of plan changes 23-fold since 2013. (If we ignore the rated-100+ category, then IASPC v1 grew 31% from 2016 to 2017, and 12-fold since 2013.) A rough sketch of how these totals are aggregated appears just after this list.

  4. This meant that despite rising costs, cost per IASPC was flat. We updated our historical and marginal cost-effectiveness estimates, and think we’ve likely been highly cost-effective, though we have a lot of uncertainty.

  5. We maintained a good financial position, hired three great full-time core staff (Brenton Mayer as co-head of coaching; Peter Hartree came back as technical lead; and Niel Bowerman started on AI policy), and started training several managers.
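
For anyone who wants the arithmetic behind point 3 spelled out, here is a minimal sketch of how an impact-adjusted total is built up. The rated-1 count below is a hypothetical placeholder (it isn’t stated above), and our published figures are rounded, so the printed totals are illustrative rather than a reproduction of our numbers.

```python
# Illustrative only: the rated-1 count is a hypothetical placeholder, and the
# published totals above are rounded, so this won't reproduce them exactly.
# The idea is just that each plan change is weighted by its rating before summing,
# so a rated-10 counts as much as ten rated-1s, and so on.

plan_change_counts = {
    1: 400,      # rated-1 changes -- hypothetical placeholder (not stated above)
    10: 50,      # rated-10 changes recorded in 2017
    100: 10,     # rated-100 changes recorded in 2017
    1000: 1,     # rated-1000 change recorded in 2017
}

def iaspc_total(counts, max_rating=None):
    """Sum of rating * count, optionally ignoring ratings above max_rating.

    With max_rating=10 this roughly corresponds to the older IASPC v1 metric
    (which ignored the rated-100+ category); with no cap it corresponds to v2.
    """
    return sum(rating * n for rating, n in counts.items()
               if max_rating is None or rating <= max_rating)

print("IASPC v1 (capped at rated-10):", iaspc_total(plan_change_counts, max_rating=10))
print("IASPC v2 (all ratings):", iaspc_total(plan_change_counts))
```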

Some challenges include: (i) people misunderstand our views on career capital, so they pick options we don’t always agree with; (ii) we haven’t made progress on team diversity since 2014; (iii) we had to abandon our target to triple IASPC; and (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.

Over the next year, we intend to keep improving this upgrading process, with the aim of recording at least another 2200 IASPC. We think we can continue to grow our audience by releasing more content (it has grown 80% p.a. over the last two years), get better at spotting who in our audience to coach, and offer more value to each person we coach (e.g. by doing more headhunting or adding a fellowship). By doing all of this, we can likely grow the impact of our upgrading process at least several-fold, and we could then scale it further by hiring more coaches.

We’ll continue to make AI technical safety and EA non-profits a key focus, but we also want to expand more into other AI roles, other policy roles relevant to extinction risk, and biorisk.

Looking forward, we think 80,000 Hours can become at least another 10-times bigger, and make a major contribution to getting more great people working on the world’s most pressing problems.

We’d like to raise $1.02m this year. We expect 33-50% to be covered by the Open Philanthropy Project, and are looking for others to match the remainder. If you’re interested in donating, the easiest way is through the EA Funds.

If you’re interested in making a large donation and have questions, please contact ben@80000hours.org.

If you’d like to follow our progress during the year, subscribe to 80,000 Hours updates.

Comments

After thinking about it for a while, I’m still a bit puzzled by the rated-100 and rated-1000 plan changes and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, it seems to be based just on comparing against “the person not changing their career path”. However, with some of the examples of the most valued changes, which led to people landing in EA organizations, it seems the counterfactual state “of the world” would be “someone else doing similar work in a central EA organization”. Since, AFAIK, the recruitment process for positions at places like central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80k-influenced candidate over the next best candidate?

Another question: how do you estimate your uncertainty when valuing something rated-n?

Hi Jan,

We basically just do our best to think about what the counterfactual would have been without 80k, and then subtract that from our impact. We tend to break this into two components: (i) the value of the new option compared to what they would have done otherwise, and (ii) the influence of others in the community, who might have brought about similar changes soon afterwards.

The value of their next best alternative matters a little less than it might first seem, because we think the impact of different options is fat-tailed, i.e. someone switching to a higher-impact option might well 2x or even 10x their impact. That means you only need to reduce the estimate by 10-50%, which is a comparatively small adjustment given the other huge uncertainties.
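
To make that concrete, here is a minimal sketch with hypothetical numbers. The `community_discount` parameter is just a stand-in for component (ii) above, not a figure we actually use.

```python
# A toy illustration of the adjustment described above (all numbers hypothetical).
# Because impact is fat-tailed, the next-best path is often worth a small fraction
# of the new option, so subtracting it shrinks the naive estimate only modestly.

def adjusted_impact(new_option, next_best, community_discount=0.0):
    """Credit for a plan change after (i) subtracting the counterfactual path and
    (ii) discounting for others in the community who might have caused a similar
    change anyway. `community_discount` is a made-up parameter for illustration."""
    return (new_option - next_best) * (1 - community_discount)

# Someone moves to an option we guess is 10x their old path (old path = 1 unit):
print(adjusted_impact(10.0, 1.0))                          # 9.0 -> only a 10% haircut
print(adjusted_impact(10.0, 1.0, community_discount=0.2))  # 7.2 -> ~30% total reduction
```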

On the value of working at EA organisations: because they’re talent-constrained, additional staff can have a big impact, even taking account of the fact that someone else could have been hired anyway. For more on this, see our recent talent survey (https://80000hours.org/2017/11/talent-gaps-survey-2017/), which showed that EA orgs highly value marginal staff, even taking account of replaceability.

Here is 80k's mea culpa on replaceability.

Sure: first 80k thought your counterfactual impact was “often negligible” due to replaceability, then they changed position toward replaceability being “very uncertain” in general. I don’t think you can just remove it from the model completely.

I also don’t think that in the particular case of central EA organizations’ hiring the uncertainty is as big as it is in general. I’m uncertain about this, but my vague impression was that there is usually a selection of good candidates to choose from when they are hiring.

Thanks for this Ben. Two comments.

  1. Could you explain your Impact-Adjusted Significant Plan Changes to those of us who don’t understand the system? E.g. what does a “rated-1000” plan change look like, and how does that compare to a “rated-1”? I imagine the former is something like a top maths bod going from working on nothing to working on AI safety, but that’s just my assumption. I really don’t know what these mean in practice, so some illustrative examples would be nice.

  2. Following comments made by others about CEA’s somewhat self-flagellatory review, it seems a bit odd and unnecessarily self-critical to describe something as a challenge if you’ve consciously chosen to de-prioritise it. In this case:

(iii) we had to abandon our target to triple IASPC; and (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.

By analogy, it would be curious if I told you 1) a challenge for me this year was that I didn’t run a marathon, and 2) I’d decided running marathons wasn’t that important to me (full disclosure humblebrag: I did run(/walk) a marathon this year).

Hey Michael,

A typical rated-1 is someone who says they took the GWWC pledge due to us and is at the median in terms of how much we expect them to donate.

Rated-10 means we'd trade that plan change for 10 rated-1s.

You can see more explanation of typical rated-10 and higher plan changes from 2017 here: https://80000hours.org/2017/12/annual-review/#what-did-the-plan-changes-consist-of

Some case studies of top plan changes are here: https://80000hours.org/2017/12/annual-review/#value-of-top-plan-changes

Unfortunately, many of the details are sensitive, so we don't publicly release most of our case studies.

We also intend for our ratings to roughly line up with how many “donor dollars” each plan change is worth. Our latest estimates were that a rated-1 plan change is worth $7,000 in donor dollars on average, whereas a rated-100 is worth over $1m, i.e. it’s equal in value to an additional $1m donated to wherever our donors would have given otherwise.
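
If it helps to see the conversion written out, here is a minimal sketch. Only the rated-1 and rated-100 figures come from the paragraph above; the other values are simple extrapolations rather than numbers we’ve published.

```python
# Rough sketch of the donor-dollar framing, using the figures quoted above.
# Only the rated-1 ($7,000 average) and rated-100 (over $1m) values are quoted;
# the rated-10 and rated-1000 entries are extrapolations, so treat them as
# assumptions -- the ratings only roughly line up with donor dollars.

DONOR_DOLLARS = {
    1: 7_000,          # quoted average for a rated-1 plan change
    10: 70_000,        # assumed: 10x the rated-1 value
    100: 1_000_000,    # quoted lower bound for a rated-100
    1000: 10_000_000,  # assumed: 10x the rated-100 lower bound
}

def donor_dollar_value(rating, count=1):
    """Approximate donor-dollar value of `count` plan changes at a given rating."""
    return DONOR_DOLLARS[rating] * count

print(donor_dollar_value(1))        # ~$7,000 for a single rated-1
print(donor_dollar_value(10, 5))    # ~$350,000 for five rated-10s
```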

With the IASPC target, I listed it as a mistake rather than merely a reprioritisation because we could have anticipated some of these problems earlier if we had spent more time thinking about our plans and metrics, and doing so would have made us more effective for several months.
