# Summary

• The September 2023 iteration of The Introductory EA Program had 1.31 quality-adjusted attendances per hour spent running the program.
• 80,000 Hours’ 1-on-1 program in 2022 had 0.109 quality-adjusted calls per hour spent running the program.
• It would be nice to have data on the benefits, cost and cost-effectiveness across programs run by CEA as well as national, city and university groups. I do not know to what extent groups are already doing calculations like the ones I did in this post, but I guess there is room to do more.
• I would say CEA could help establish common reporting frameworks by type of program, and advise groups on how to integrate them into their work.

# Introduction

The Introductory EA Program and 80,000 Hours’ 1-on-1 program seem to be 2 of the most popular efforts to build effective altruism. So I thought it was worth doing some Fermi estimates of their cost-effectiveness as a sanity check. I also share some quick thoughts on having more estimates like the ones I got.

# Cost-effectiveness

## Benefits

The September 2023 iteration of The Introductory EA Program, the last for which there was data on the Centre for Effective Altruism’s (CEA’s) dashboard on 6 December 2023, had 543 (= 654*0.83) quality-adjusted attendances. I calculated this by multiplying:

• 654 total attendances (= 15 + 2*16 + 3*18 + 4*18 + 5*16 + 6*15 + 7*25 + 8*17), equal to the number of participants times the mean number of sessions attended per participant (although I did not calculate it this way).
• 0.83 quality-adjusted attendances per attendance (= (8.3 - 0)/(10 - 0)), which I computed from the ratio between:
• The difference between the actual and minimum possible satisfaction scores.
• The difference between the maximum and minimum possible satisfaction scores.

80,000 Hours’ 1-on-1 program had 1.16 k quality-adjusted calls[1] (= 1425*0.813) in 2022. I determined this by multiplying:

• 1,425 calls.
• 0.813 quality-adjusted calls per call (= (5.88 - 1)/(7 - 1)), which I estimated in the same way as the quality-adjusted attendances per attendance (see above), but using 80,000 Hours’ usefulness rating.
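As a sanity check, the two benefit estimates above can be reproduced with a few lines of Python (all figures are taken from the post; the quality adjustment is just the satisfaction or usefulness score rescaled to a 0 to 1 range):

```python
# Quality-adjusted benefits of the two programs, using the figures above.

def quality_adjustment(score, minimum, maximum):
    """Rescale a satisfaction/usefulness score to the 0-1 range."""
    return (score - minimum) / (maximum - minimum)

# The Introductory EA Program (September 2023 iteration).
# 15 participants attended 1 session, 16 attended 2 sessions, etc.
participants_by_sessions = {1: 15, 2: 16, 3: 18, 4: 18, 5: 16, 6: 15, 7: 25, 8: 17}
attendances = sum(sessions * n for sessions, n in participants_by_sessions.items())
intro_qa = quality_adjustment(8.3, 0, 10)  # satisfaction of 8.3 on a 0-10 scale
qa_attendances = attendances * intro_qa

# 80,000 Hours' 1-on-1 program (2022).
calls = 1425
calls_qa = quality_adjustment(5.88, 1, 7)  # usefulness of 5.88 on a 1-7 scale
qa_calls = calls * calls_qa

print(attendances, round(qa_attendances))   # 654, 543
print(round(calls_qa, 3), round(qa_calls))  # 0.813, 1159
```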

Note the benefits I describe here concern outputs, whereas cost-effectiveness analyses should ideally focus on outcomes which are more strictly connected to contributing to a better world, like starting a new impactful job. Niel Bowerman pointed to Open Philanthropy’s 2020 EA/LT Survey. Cian Mullarkey from CEA suggested looking into the section on positive influences of the 2022 EA Survey to get a sense of the outcomes[2]. I encourage CEA, Open Philanthropy, 80,000 Hours, or a reader interested in a quick estimation exercise to do this.

## Cost

I think it takes 415 h to run an iteration of The Introductory EA Program. I obtained this by multiplying 8 weeks by 51.9 h/week (= 10 + 20 + 21.9), which is the sum between:

• 10 h/week to run the program besides preparing and facilitating sessions. Yve Nichols-Evans from CEA said:
• “We don’t have all this information in an easy to access way - but generally it takes ~10hrs to run the program per week besides preparing and facilitating sessions because we have so many participants and ex-participants (around 200-300 participants every month)”.
• 20 h/week (= 2*10) to prepare sessions, multiplying:
• 2 h/week/facilitator (= (1 + 3)/2) to prepare sessions. Yve said:
• “We advise facilitators that preparing takes 1-3 hours a week depending on how familiar you are with the content already”.
• 10 facilitators. Yve said:
• “~10 facilitators”.
• 21.9 h/week (= 1.25*17.5) to facilitate sessions, multiplying:
• 1.25 h/week/cohort (= (1 + 1.5)/2) to facilitate sessions. Yve said:
• “Facilitating the weekly discussions takes 1 to 1.5 hours a week for each cohort you take on”.
• 17.5 cohorts (= (15 + 20)/2). Yve said:
• “Generally we have around 15 - 20 cohorts”.

I believe 10.6 kh (= 5.3*40*50) were spent running 80,000 Hours’ 1-on-1 program in 2022. I got this by multiplying 5.3 FTE by 2 kh/FTE (= 40*50), consistent with the name 80,000 Hours, which comes from assuming 40 h/week and 50 weeks/year (as well as a 40-year career).
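The two cost estimates can be sketched the same way (all inputs are the estimates quoted above; the 51.9 h/week in the text is this sum rounded):

```python
# Hours spent running each program, using the estimates above.

# The Introductory EA Program: 8 weeks of weekly staff, prep and facilitation time.
weeks = 8
admin = 10                  # h/week besides preparing and facilitating sessions
prep = 2 * 10               # 2 h/week/facilitator * 10 facilitators
facilitation = 1.25 * 17.5  # 1.25 h/week/cohort * 17.5 cohorts
intro_hours = weeks * (admin + prep + facilitation)

# 80,000 Hours' 1-on-1 program: 5.3 FTE at 40 h/week for 50 weeks.
calls_hours = 5.3 * 40 * 50

print(round(intro_hours), round(calls_hours))  # 415, 10600
```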

Note there is a meaningful difference between the 2 programs above with respect to the fraction of the time which is paid:

• For The Introductory EA Program, it is 64.5 % (= (10 + 0.56*41.9)/51.9), assuming:
• The time of 10 h/week to run the program besides preparing and facilitating sessions is paid.
• 56 % of the time of 41.9 h/week (= 20 + 21.9) to prepare and facilitate the sessions is paid, as that is the fraction of facilitators who are paid.
• Cian shared that, “historically, we [CEA] have paid facilitators to facilitate roughly 56% of cohorts”.
• I like transparency, so I am glad Cian shared the above.
• For 80,000 Hours’ 1-on-1 program, arguably close to 100 %.
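The 64.5 % paid fraction follows from the same weekly figures (a quick sketch; the 41.9 h/week in the text is the unrounded 2*10 + 1.25*17.5):

```python
# Fraction of The Introductory EA Program's hours that are paid.
admin = 10                                    # h/week of paid CEA staff time
prep_and_facilitation = 2 * 10 + 1.25 * 17.5  # h/week (~41.9)
paid_facilitator_fraction = 0.56              # fraction of cohorts with paid facilitators

paid_fraction = (admin + paid_facilitator_fraction * prep_and_facilitation) / (
    admin + prep_and_facilitation
)
print(round(paid_fraction, 3))  # 0.645
```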

## Cost-effectiveness

The September 2023 iteration of The Introductory EA Program had 1.58 attendances (= 654/415), corresponding to 1.31 quality-adjusted attendances (= 543/415), per hour spent running the program. 80,000 Hours’ 1-on-1 program in 2022 had 0.134 calls (= 1425/(10.6*10^3)), corresponding to 0.109 quality-adjusted calls (= 1.16*10^3/(10.6*10^3)), per hour spent running the program.

Supposing internal capacity is the bottleneck to scaling the programs, which I think is true at least for 80,000 Hours’, I guess the marginal cost-effectiveness is 0 to 1 times the ratio between benefits and cost, so I speculate it is 0.5 (= (0 + 1)/2) times it based on a uniform distribution. If so, spending an extra hour running:

• The Introductory EA Program would result in 0.655 additional quality-adjusted attendances (= 0.5*1.31).
• 80,000 Hours’ 1-on-1 program would result in 0.0545 additional quality-adjusted calls (= 0.5*0.109).
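Putting the benefits and costs together gives the headline figures (using the rounded intermediate values from the post):

```python
# Average and guessed marginal cost-effectiveness of the two programs.
intro_ce = round(543 / 415, 2)     # 1.31 quality-adjusted attendances/h
calls_ce = round(1159 / 10600, 3)  # 0.109 quality-adjusted calls/h

# Marginal cost-effectiveness guessed as 0.5 times the average
# (mean of a uniform distribution between 0 and 1 times the average).
marginal_factor = 0.5

print(intro_ce * marginal_factor)            # 0.655
print(round(calls_ce * marginal_factor, 4))  # 0.0545
```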

# Quick thoughts on having more estimates like the above

It is great that CEA has a dashboard with many metrics related to the benefits of its various programs. I think it would also be worth including metrics related to cost and cost-effectiveness. CEA’s 2023 spending on each program is available, but it would be valuable to have data across time. 80,000 Hours is a good example here, sharing metrics related to both the benefits and costs of their programs since 2011, although there is little data for the early years.

More broadly, it would be nice to have data on the benefits, cost and cost-effectiveness across programs run not only by CEA, but also national, city and university groups. I do not know to what extent groups are already doing calculations like the ones I did in this post, but I guess there is room to do more. The programs will often not be directly comparable. For example, 80,000 Hours’ 1-on-1 program is arguably further down the funnel than The Introductory EA Program. However, sometimes they will be. For instance, my sense is that many groups run programs with a similar format and goal to The Introductory EA Program. Best practices could eventually be found by looking into the programs doing well (having high cost-effectiveness), although one would have to be mindful there are other factors which influence how well a program does besides how well it is run.

There are some hurdles to putting the above vision into practice. My sense is that groups are not tracking inputs, outputs, or outcomes (e.g. career changes) very well[3].

It would also be challenging for groups to coordinate to come up with comparable metrics. I would say CEA could help establish common reporting frameworks by type of program, and advise groups on how to integrate them in their work. Cian commented that CEA might work on this.

# Acknowledgements

Thanks to Yve Nichols-Evans for sharing estimates related to the time spent running The Introductory EA Program, and to Cian Mullarkey for sharing data on the satisfaction score of its participants. Thanks to Cian and Niel Bowerman for feedback on the draft. Thanks to Michelle Hutchinson for looking into the draft.

1. ^

80,000 Hours also introduces their advisees “to experts and hiring managers in relevant fields”. I do not know the extent to which the number of quality-adjusted calls correlates with the number of such introductions, but there is no data on these in the sheet with 80,000 Hours’ historical estimates.

2. ^

Cian also commented that:

> I think this is important because I don’t think the influence of the program scales linearly with the number of hours of engagement.

3. ^

Cian commented:

> I think groups are probably doing a better job of tracking outcomes than inputs and outputs. To expand on this a little -- I think that groups are generally aware of when a group member has benefited a lot from their services, because they’ll see that they have gone on to do cool things / take actions, and there is generally not that many stories to track, whereas inputs and outputs are messier and tracking them requires that you do good planning + keep track of your time/progress.

# Reactions


Very interesting!

Thanks for the writeup

I'd be very interested in seeing a continuation in regards to outcomes (maybe career changes could be a proxy for impact?)

Also, curious how you would think about the added value of a career call or participation in a program? Given that a person made a career change, obviously the career call with 80k isn't 100% responsible for the change, but probably not 0% either (if the call was successful).

Thanks for the comment, Ezrah!

> I'd be very interested in seeing a continuation in regards to outcomes (maybe career changes could be a proxy for impact?)

Yes, I think career changes and additional effective donations would be better proxies for impact than outputs like quality-adjusted attendances and calls. Relatedly:

> Animal Advocacy Careers (AAC) ran two longitudinal studies aiming to compare and test the cost-effectiveness of our one-to-one advising calls and our online course. Various forms of these two types of careers advice service have been used by people seeking to build the effective altruism (EA) movement for years, and we expect the results to be informative to EA movement builders, as well as to AAC.
>
> We interpret the results as tentative evidence of positive effects from both services, but the effects of each seem to be different. Which is more effective overall depends on your views about which sorts of effects are most important; our guess is that one-to-one calls are slightly more effective per participant, but not by much. One-to-one calls seem substantially more costly per participant, which makes the service harder to scale.
>
> There therefore seems to be a tradeoff between costs and apparent effects per participant. We’d guess that the online course was (and will be, once scaled up) slightly more cost-effective, all things considered, but the services might just serve different purposes, especially since the applicants might be different for the different services.

> Also, curious how you would think about the added value of a career call or participation in a program? Given that a person made a career change, obviously the career call with 80k isn't 100% responsible for the change, but probably not 0% either (if the call was successful).

AAC's studies had a control group, so they provide evidence about the counterfactual impact of their one-to-one advising calls and online course. 80,000 Hours has a metric called discounted impact-adjusted peak years (DIPYs) which accounts for the fraction of the career change that was caused by them.
