Joey

Co-founder of Charity Science, a nonprofit that runs multiple projects, including Charity Science Health (a GiveWell-incubated charity) and Charity Entrepreneurship (a program to help new charities get founded).

Joey's Comments

What should Founders Pledge research?

Here are a few different areas that look promising. Some of these are taken from other organizations’ lists of promising areas, but I expect more research on each of them to have high expected value.

  • Donors solely focused on high-income country problems.
    • Mental health research (that could help both high and low income countries).
    • Alcohol control
    • Sugar control
    • Salt control
    • Trans-fats control
    • Air pollution regulation
    • Metascience
    • Medical research
    • Lifestyle changes including "nudges" (e.g. more exercise, shorter commutes, behaviour, education)
    • Mindfulness education
    • Sleep quality improvement
  • Donors focused on animal welfare.
    • Wild animal suffering (non-meta, non-habitat destruction) interventions
    • Animal governmental policy, particularly in locations outside of the USA and EU.
    • Treating diseases that affect wild animals
    • Banning live bait fish
    • Transport and slaughter of turkeys
    • Pre-hatch sexing
    • Brexit-related preservation of animal policy
  • Donors focused on improving the welfare of the current generation of humans.
    • Pain relief in poor countries
    • Contraceptives
    • Tobacco control
    • Lead paint regulation
    • Road traffic safety
    • Micronutrient fortification and biofortification
    • Sleep quality improvement
    • Immigration reform
    • Mosquito gene drives, advocacy, and research
    • Voluntary male circumcision
    • Research to increase crop yields

Update on the Vancouver Effective Altruism Community

Slight correction: The Charity Entrepreneurship program will be based in London, UK this year.

A guide to improving your odds at getting a job in EA

When I was writing this, I was mostly comparing it to other highly time-consuming activities (e.g. many people are getting a degree hoping it will help them acquire an EA job). In terms of whether this is the optimal thing for EA organizations to look for, I do not really have a view. I was more hoping to level the understanding between people who have a pretty good sense that this sort of information is what you need, and people who might think it would be worth far less than, say, a degree from a prestigious university.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

OK, given that multiple people think this is off, I have changed it to 3 hours to account for variation in application time.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

My sense is that they already had a CV that required very minimal customization, and spent almost all of the time on the cover letter.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

The following is a rough breakdown, by stage, of the percentage of applicants who were not asked to move on to the next round of the Charity Science hiring process. These numbers assume one counterfactual hour of preparation for each interview and no preparation time beyond the given time limit for test tasks.

~3* hours invested (50%) - Cover letter/resume
~5 hours invested (20%) - Interview 1
~10 hours invested (15%) - Test task 1
~12 hours invested (5%) - Interview 2
~17 hours invested (5%) - Test task 2
~337 hours invested (2.5%) - Paid 2-month work trial
Hired (2.5%)

So, 95% of applicants spend 17 hours or less, 85% spend 10 hours or less, and 70% spend 5 hours or less.

*changed from 1 hour to 3 hours based on comments
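
For reference, here is a minimal sketch of the cumulative arithmetic behind that summary (the hours and percentages are copied from the breakdown above; the code itself is only illustrative):

```python
# Stage name, cumulative hours invested by someone cut at that stage,
# and share of all applicants cut at that stage (from the breakdown above).
stages = [
    ("Cover letter/resume", 3, 0.50),
    ("Interview 1", 5, 0.20),
    ("Test task 1", 10, 0.15),
    ("Interview 2", 12, 0.05),
    ("Test task 2", 17, 0.05),
    ("Paid 2-month work trial", 337, 0.025),
]

cut_so_far = 0.0
for name, hours, share in stages:
    cut_so_far += share
    print(f"{cut_so_far:.1%} of applicants invested ~{hours} hours or less ({name} or earlier)")
# The remaining 2.5% were hired after the work trial.
```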

Why we look at the limiting factor instead of the problem scale

Hey Abraham,

The end goal of any evaluation criterion is for it to be usable to predict “good done” as well as possible. I broadly agree that a single criterion is unlikely to fully rule an intervention in or out (including limiting factor, which was one of four criteria in our system). If we knew of a criterion that powerful, there would be no need for complex evaluation.

Although a limiting factor is not a pure hard limit, I do not think this changes its usefulness much. An intervention might have a weak evidence base, and in theory multiple RCTs could be run to improve that, but in practice, if there is, say, a limiting factor on funding (such that multiple RCTs could not be funded), the intervention might remain weakly evidenced indefinitely, even if in theory the evidence base is not an immovable factor. It seems fairly clear that, all things being equal, running an intervention will be easier than running an equivalent intervention that also requires you to build a field of talent or otherwise work around a limiting factor.

In principle I think this could be put into a more numerical form (e.g. included in a cost-effectiveness analysis), but in practice I do not think this has been done. Historically, maybe the closest is the different levels of funding gaps that GiveWell has estimated for its top charities, but that mostly considers a single possible limiting factor (funding). I would love to see more models of limiting factors, and I think they would be a natural next step in the current EA talent vs. funding conversations.
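
As a hypothetical illustration of the kind of numerical form this could take (my own sketch with made-up numbers, not an existing GiveWell or Charity Science model), a funding gap can be treated as a cap on how much of a donation converts into outcomes:

```python
def expected_good(donation, cost_per_outcome, remaining_funding_gap):
    """Only money that fits inside the remaining funding gap (the limiting
    factor in this sketch) is assumed to convert into outcomes."""
    usable = min(donation, remaining_funding_gap)
    return usable / cost_per_outcome

# Made-up numbers: once the funding gap is filled, additional donations stop
# adding expected good, however large the underlying problem is.
print(expected_good(donation=1_000_000, cost_per_outcome=50,
                    remaining_funding_gap=250_000))  # 5000.0 outcomes
```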

A different way to think about this question is: do we think problem scale or limiting factors are better predictors of the areas where the most good can be done? I pretty strongly disagree that problem scale is more important than the limiting factor that will hit an intervention. Theoretically the scale of the problem is a harder limit, but that does not really matter if in practice an intervention is never capped by it. We ended up looking at quite a number of charities (including GiveWell and ACE recommendations) to consider what was stopping them, and none of them seemed to be capped by problem scale; they had all been stopped by other limiting factors far before that became an issue (for example, with AMF it was funding and logistical bottlenecks, not the number of people with malaria).

I think this is even true for the specific case of wild animal suffering interventions. The absolute number of bugs does not matter much when considering ethical pest control; what matters is the density per hectare of field, or the available funding for a humane insecticides charity. You could imagine a world where the bug populations of colder locations (such as Canada and Russia) were close to 0, and it would do very little to affect the estimated good done, broadly because there would be a ton of work to do in warmer locations before one would expand to Canada, and many limiting factors would likely be hit before expanding that far. How soon these constraints hit would be more predictive of impact than whether there were twice or half as many bugs in the world as there are now.
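
To make that scale-insensitivity point concrete, here is a toy sketch (my own construction, with made-up numbers) in which estimated good is whatever the most binding constraint allows; halving or doubling the bug population leaves the estimate unchanged because the funding and logistics caps bind first:

```python
def estimated_good(problem_scale, funding_cap, logistics_cap):
    """Toy model: impact is bounded by whichever constraint binds first
    (all three expressed in the same made-up units of 'animals helped')."""
    return min(problem_scale, funding_cap, logistics_cap)

baseline = estimated_good(problem_scale=1_000_000, funding_cap=10_000, logistics_cap=25_000)
halved = estimated_good(problem_scale=500_000, funding_cap=10_000, logistics_cap=25_000)
doubled = estimated_good(problem_scale=2_000_000, funding_cap=10_000, logistics_cap=25_000)
print(baseline, halved, doubled)  # 10000 10000 10000: scale changes do not move the estimate
```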

I think historical evidence like “if this had not been done, X would never have happened” is not a very strong argument unless the research is done systematically and compares both the hits and the misses that occurred (e.g. there were a lot of causes that people attempted to get off the ground at that same point in time but that never got traction). To take a clearer example, you could look at a friend who won the lottery: although he clearly benefited from his ticket, it still would have been the wrong call from an expected value perspective to buy it, and it certainly would not suggest that you should buy a lottery ticket; we have to be careful of survivorship bias. Mainly we are looking at factors that are predictive of something having the most impact, and singular examples do not tell us much about field building vs. making quicker progress in a more established field. That said, I would be really interested in more systematic research in this area.
