Michael Townsend

Researcher @ Giving What We Can
2340 karma · Joined Oct 2018 · Working (0-5 years) · Seaforth NSW 2092, Australia

Bio

Researcher at Giving What We Can.

Comments: 80 · Topic Contributions: 1

Thanks Vasco, this is good feedback.

To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing"

We considered a wide variety of names, and after some deliberation (and a survey or two), we landed on "global health and wellbeing" because we think it strikes a good balance between accurate and compelling. I agree with some of the limitations you outlined, and I like your alternative suggestion, especially from the "Researcher's" point of view that I'm most focused on. I'll share this with the team, but I expect there would be too much cost to switching at this point.

However, I wonder how much of your and Sjir's views are being driven by path dependence. [...] Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview.

It's a bit tricky to respond to this having not (at least yet) done an analysis comparing animal versus human interventions. But if/when we do, I agree it would be important to be aware of the incentives you mentioned, and to avoid making decisions based on path dependencies rather than high-quality research. More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.

Hi Rebecca — we did not look into The Life You Can Save for this round. As shared here, we only looked into the six evaluators/funds listed in this post, and in our "Why and how GWWC evaluates evaluators" we explained how we decided which evaluators to prioritise. It's too soon to say which evaluators we'll look into next, though we can share that our current inclination is that looking into Founders Pledge's research, and expanding the cause areas we include (like climate change, or "meta" work), are particularly high priorities.

Thanks Nick! It was really illuminating for me personally to look under the hood of GW, and I'm glad you appreciated our summary of the work. 

In this round of evaluations, we only looked into Animal Charity Evaluators, GiveWell, Happier Lives Institute, EA Funds' Animal Welfare Fund and Long-Term Future Fund, and Longview's Emerging Challenges Fund. In future evaluations, we would like to look into Founders Pledge's work, climate change more generally, and other evaluators. It's too soon to commit to which, and in which order, just yet.

Also, did you evaluate GW's Top Charities Fund or All Grants Fund?

Jonas' reply is spot on here — we essentially looked into both, and into GW more generally.

There is definitely substantial overlap between the four funds you listed, especially between GWWC's fund and EA Funds'. In principle, it doesn't have to be this way:

  • GWWC's Global Health and Wellbeing Fund could potentially grant based on evaluations other than GW (e.g., potentially from Founders Pledge, or Happier Lives Institute, etc., depending on how our subsequent evaluations go).
  • EA Funds' Global Health and Development Fund could similarly appoint new advisors, or change its scope. But I can't speak on behalf of EA Funds!
  • GW's Top Charities Fund and All Grants Fund do make different grants, with the latter having a wider scope, but there is overlap.  

Have you considered allocating the donations made to the GWWC Global Health and Wellbeing Fund to GW's funds?

We expect that, in effect, this will be what happens. That is, we expect GW to advise our fund as a proxy for the All Grants Fund. Operationally, it's better for us to grant directly to the organisations based on GW's advice (rather than, for example, sending the money to GW to regrant) so that, among other reasons, charities can receive the money sooner. We already have this process set up for donations made to GW's funds on our platform. This means that, at least right now, giving to either the All Grants Fund or our cause area fund will have the same effect. But as above, this could change based on future evaluations of evaluators, which we see as a feature for donors who want to set up recurring donations that track our latest research.

Thanks Peter, and we'd of course like to extend the thanks back to HLI for being such an excellent collaborator here! Congratulations on publishing your new research. I'm eager to read more about it over the coming weeks and hopefully to dive into it in more detail next year in our next round of evaluations. 

There is a relatively small comparison class here; we often say we’re focused on “impact-focused” evaluators. Here is our database of evaluators we know of that we might consider in this reference class. In the medium to long run, however, we could imagine there being value in investigating an evaluator completely outside EA, with a very different approach. There could be some valuable lessons in best practice, and other insights, that could make this worthwhile. I expect we probably won’t prioritise this until we have looked into more impact-focused evaluators.

Hi wes R, I'll answer your questions in this comment!

The impact measurements varied greatly by evaluator. For example, GW makes decisions using its “moral weights” (which primarily measure consumption and health outcomes, though not, I believe, in a way that neatly reduces to QALYs). Meanwhile, HLI uses “WELLBYs”. Other evaluators used different measurements at different times, or relied on subjective scores of cost-effectiveness. You can read more about these in our evaluations (linked to here).

I’m not sure we have much in the way of a generalised view of which metrics we think should be used or not. In general:

  • These metrics should help support making more cost-effective recommendations and grants.
  • To the extent they do, we’re happy to see them!
  • In some cases, metrics might end up forcing over-precision in a way that is not particularly helpful. In these cases, we think it could be more sensible to take a more subjective approach.

Hope that helps!

This is a really insightful question! 

I think it’s fair to characterise our evaluations as looking for the “best” charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but are not the best. Though we’re looking to recommend the best charities, we don’t think this means there’s no value in looking into “great-charity evaluators”, as you called them. We don’t take an all-or-nothing approach when looking into an evaluator’s work and recommendations, and can choose to include only the recommendations from that evaluator that meet our potentially higher standard. This means that, so long as it’s possible some of the recommendations of a “great-charity evaluator” are the best by a particular worldview, we’d see value in looking into them.

In one sense, this increases the bar for our evaluations, but in another it also means an evaluator’s recommendations might be the best even if we weren’t particularly impressed by the quality of the work. For example, suppose there was a cause area for which there was only one evaluator: the threshold for that evaluator being the best may well be that they are doing a sufficiently good job that there is a sufficiently plausible worldview by which donating via their recommendations is still a donor’s best option (i.e., compared to donating via the best evaluator in another area).

It’s too early to commit to how we will approach future evaluations; however, we currently lean towards sticking with the core idea of focusing on helping donors “maximise” expected cost-effectiveness, rather than “maximising” the number of donors giving cost-effectively / providing a variety of “great-but-not-best” options.

You might also explicitly state that you don't intend to evaluate great-charity recommenders at least at this time.

As above, we would see value in looking at charity evaluators who take an approach of recommending everything above a minimum standard, but we would only look to follow the recommendations we thought were the best (...by some sufficiently plausible worldview). 

but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators

I’d be interested in where you think we could improve our communications here. Part of the challenge we’ve faced is that we want to be careful not to overstate our work. For example, “we only provide recommendations from the best evaluators we know of and have looked into” is accurate, but “we only provide recommendations from the best evaluators” is not (because there are evaluators we haven’t looked into yet). Another challenge is not to overly qualify everything we say, to the point of being confusing and inaccessible to regular donors. Still, after scrolling through some of our content, I think we could find a way to thread this needle better, as it is an important distinction to emphasise — we also don’t want to understate our work!

Hi Vasco, thanks for your questions!

I’ll answer what I see as the core of your questions before providing some quick responses to each individually. 

As you suggest, our approach is very similar to Open Philanthropy’s worldview diversification. One way of looking at it is that we want to provide donation recommendations that maximise cost-effectiveness from the perspective of a particular worldview. We think it makes sense to add another constraint to this: we prioritise providing advice for the more plausible worldviews that are consistent with our approach (i.e., focusing on outcomes, having a degree of impartiality, and wanting to rely on evidence and reason).

I’ll share how this works with an example. The “global health and wellbeing” cause area contains recommendations that appeal to people with (some combination of) the following beliefs:

  1. We should prioritise helping people over animals
  2. Some scepticism about highly theoretical theories of change, and a preference for donating to charities whose impact is supported by evidence
  3. It’s very valuable to save a life
  4. It’s very valuable to improve someone’s income

People may donate to the cause area without all of these beliefs, or with some combination, or perhaps with none of them but with another motivation not included. Perhaps they have more granular beliefs on top of these, which means they might only be interested in a subset of the fund (e.g., focusing on charities that improve lives rather than save them).

Many of your questions seem to suggest that, when we account for consumption of animal products, (3) and (4) are not so plausible. I suspect this is among the strongest critiques of worldviews that would support GHW. I have my own views about it (as would my colleagues), but from a “GWWC” perspective, we don’t feel confident enough in this argument to use it as a basis not to support this kind of work. In other words, we think the worldviews that would want to give to GHW are sufficiently plausible.

I acknowledge there’s a question-begging element to this response: I take it your point is, why is it sufficiently plausible, and who decides this? Unfortunately, we have to acknowledge that we don’t have a strong justification here. It’s a subjective judgement formed by the research team, informed by existing cause prioritisation work from other organisations. We don’t feel well placed to do this work directly (for much the same reason as we need to evaluate evaluators rather than doing charity evaluation ourselves). We would be open to investigating these questions further by speaking with organisations engaged in this cause prioritisation — we’d love to have a more thoughtful and justified approach to cause prioritisation. In other words, I think you’re pushing on the right place (and hence this answer isn’t particularly satisfying).

More generally, we’re all too aware that there are only two of us working directly to decide our recommendations, and we are reluctant to use our own personal worldviews in highly contested areas to determine them. Of course, it has to happen to some degree (and we aim to be transparent about it). For example, if I were to donate today, I would likely give 100% of my donations to our Risks and Resilience Fund. I have my reasons, and I think I’m making the right decision according to my own views, but I’m aware others would disagree with me, and in my role I need to make decisions about our recommendations through the lens of commonly held worldviews I disagree with.

I’ll now go through your questions individually:

If someone wanting to donate 1 M$ who was not pre-committed to any particular area asked for your advice on which of your recommended funds is more cost-effective, and wanted to completely defer to you without engaging in the decision process, what would you say?

We’d likely suggest donating to our cause area funds via the “all cause bundle”, splitting their allocation equally between the three areas. This is our default “set-and-forget” option, which seems compelling from the perspective of wanting to give a fraction of one’s giving to causes that are maximally effective from particular worldviews. This is not the optimal allocation under moral uncertainty (on this approach, the different worldviews could ‘trade’ and increase their combined impact); we haven’t prioritised trying to find such an optimised portfolio for this purpose. It’d be an interesting project, and we’d encourage anyone to do this and share it on the Forum and with us!
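To give a sense of what that kind of ‘trade’ could look like, here is a minimal sketch with purely hypothetical numbers (they are not GWWC, GiveWell, or EA Funds estimates; just an illustration of why an equal split need not be optimal under moral uncertainty):

```python
# Hypothetical illustration of "moral trade" between two worldviews.
# All numbers are made up; they are not anyone's actual estimates.

# Value each worldview assigns to $1M granted to each cause area
# (arbitrary units of "value by that worldview's lights").
values = {
    "worldview_A": {"cause_1": 10, "cause_2": 4},
    "worldview_B": {"cause_1": 3, "cause_2": 9},
}

# Naive allocation: each worldview's share of the budget is split equally.
naive_A = 0.5 * values["worldview_A"]["cause_1"] + 0.5 * values["worldview_A"]["cause_2"]
naive_B = 0.5 * values["worldview_B"]["cause_1"] + 0.5 * values["worldview_B"]["cause_2"]

# "Trade": worldview A's share all goes to cause 1, worldview B's all to cause 2.
trade_A = values["worldview_A"]["cause_1"]
trade_B = values["worldview_B"]["cause_2"]

print(f"Equal split: A values it at {naive_A}, B at {naive_B}")  # 7.0 and 6.0
print(f"After trade: A values it at {trade_A}, B at {trade_B}")  # 10 and 9
```

The point is only that coordinated allocations can leave every worldview better off by its own lights than a naive equal split, which is what an optimised portfolio would try to exploit systematically.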

Are you confident that donating to the Animal Welfare Fund (AWF) is less than 10 times as cost-effective as donating to GiveWell's Top Charities Fund (TCF)? If not, have you considered investigating this?

We are not confident. This is going to depend on how you value animals compared to humans; we’re also not sure exactly how cost-effective the AWF is (just that it is the best option we know of in a cause area we think is generally important, tractable and neglected).
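As a purely illustrative sketch of why this hinges on moral weights (all numbers below are hypothetical placeholders, not anyone’s actual cost-effectiveness estimates):

```python
# Hypothetical sketch: the AWF vs TCF comparison scales with the moral weight
# placed on animal welfare. None of these numbers are real estimates.

def awf_to_tcf_ratio(animal_units_per_dollar: float,
                     human_units_per_dollar: float,
                     animal_moral_weight: float) -> float:
    """Ratio of AWF to TCF cost-effectiveness under a given moral weight."""
    return (animal_units_per_dollar * animal_moral_weight) / human_units_per_dollar

# Same (made-up) welfare-per-dollar figures, different moral weights:
for weight in [0.001, 0.01, 0.1]:
    ratio = awf_to_tcf_ratio(
        animal_units_per_dollar=1000.0,  # hypothetical
        human_units_per_dollar=1.0,      # hypothetical
        animal_moral_weight=weight,
    )
    print(f"Moral weight {weight}: AWF looks {ratio:.0f}x as cost-effective as TCF")
```

Whether a 10x threshold is crossed then depends almost entirely on that weight, which is exactly the kind of judgement we haven’t taken a position on.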

If you thought donating to the AWF was over 10 times as cost-effective as donating to TCF (you may actually agree/disagree with this), would you still recommend the latter (relatedly)? If so, would you disclaim that your best guess was that AWF was significantly more cost-effective than TCF?

If we thought there wasn’t a sufficiently plausible worldview whereby TCF was the best option we knew of, we would not recommend it. 

Are you confident that donating to TCF is beneficial accounting for effects on animals? If not, have you considered investigating this? I did not find "animal" nor "meat" in your evaluation of GiveWell.

We did not consider this, and so do not have a considered answer. I think this is something we would be interested in considering in our next investigation.

If you thought donating to TCF resulted in a net decrease in welfare due to the meat-eater problem (you may actually agree/disagree with this), would you still recommend it? If so, would you disclaim that your best guess was that TCF resulted in a net decrease in welfare, but that you recommended it for other reasons?

As above, we would not recommend it if we didn’t think there was a sufficiently plausible worldview by which TCF was the best option we knew of. This could be because of a combination of the meat-eater problem and a view that it’s just not plausible to discount animals. It’s an interesting question, but it’s also one where I’m not sure coming to a view on it is our comparative advantage (though perhaps, just as we did with the view that GW should focus on economic progress, we could still discuss it in our evaluation).
