Joel Tan (CEARCH)

Founder @ CEARCH
757 karma · Joined Aug 2022


Founder of the Centre for Exploratory Altruism Research (CEARCH), a Charity Entrepreneurship-incubated organization doing cause prioritization research.

Once a civil servant, and then a consultant specializing in political, economic and policy research. Recovering PPEist who overdosed on meta-ethics.


CEARCH: Research Methodology & Results


Topic Contributions

Oh dear, sorry for the mistake. Thanks Jeroen for flagging it, and Edo for fixing it!

I think the short of it is that trying to model the counterfactual impact of a reduction in diabetes prevalence on reduced COVID-19 burden would be too tough, relying as it does not just on complex epidemiological modelling but also on inherently unknowable future scenarios. None of the experts we talked to raised this as a live issue, in any case, so my assumption was that post 2020-2022 it's not that significant compared to the global disease burden of DMT2 itself, especially on a long-term basis.

Hi Ramiro,

Those are good questions!

(1) For substitution effects, we looked at (a) substitution with respect to home foods (i.e. the fear is that we make outside food less sweet, so people just make their own food at home and add lots of sugar or sweet sauces); and (b) substitution with respect to salty food (which leads to hypertension etc.).

(a) For substitution with respect to home foods: We found that this is likely not a material risk insofar as:

  • (i) Our taste for sweetness is adaptive, and reducing sugar intake makes high-sugar food taste too sweet even as low-sugar food tastes sweeter than before. This is in line with what is the case for salt, where the phenomenon of desensitization also exists.
  • (ii) A mass media campaign will be looking to address this precise issue, and to the extent we expect behavioural change with respect to highly processed food, we have equal reason to expect change with respect to seasoning of home foods.

(b) For substitution with respect to salty food: The evidence with respect to the cross-price elasticity is mixed, for as Dodds et al note: "A US study found that nutrient taxes targeting sugar and fat have a similar impact on salt consumption as a dedicated salt tax, likely because many foods, especially junk foods, that are high in sugar are also high in salt."; in contrast, "A New Zealand experimental study ... [finds] that salt taxation led to a 4.3% increase in the proportion of fruit and vegetables purchased, but also a 3.2% increase in total sugars as a percentage of total energy purchases." Given this, we took it that it would be reasonable to assume no net benefit or cost with respect to sugar consumption.

(c) For alcohol, we didn't look at this explicitly - though per Teng et al, the evidence is mixed, and my own sense from going through the literature is that there is a small but significant substitution effect; this will have to be modelled more explicitly going forward.

(2) On industry - we do look at the role of industry in general, and consider it an important factor that made us downgrade our estimate of the chances of advocacy success.

Yep! We looked at whether we are reducing people's pleasure from eating sweet food, but the evidence suggests this shouldn't be an issue - since our taste for sweetness is adaptive, and reducing sugar intake makes high-sugar food taste too sweet even as low-sugar food tastes sweeter than before. The upshot is that recalibrating everyone at lower levels of sugar will leave food tasting subjectively much the same over the medium-to-long term.

This is in line with what is the case for salt, where the phenomenon of desensitization also exists.

Just to clarify, one should definitely expect cost-effectiveness estimates to drop as you put more time into them, and I don't expect this cause area to be literally 1000x GiveWell. Headline cost-effectiveness always drops, in my past experience - it's just the optimizer's curse, where over- (or under-) performance comes partly from the cause area being genuinely better (or worse), but also partly from random error that you fix at deeper research stages. To be honest, I've come around to the view that publishing shallow reports - which are really just meant for internal prioritization - probably isn't useful, insofar as it can be misleading.

As an example of how we discount more aggressively at deeper research stages, consider our intermediate hypertension report - there was a fairly large drop, from around 300x to 80x GiveWell, driven by (among other things): (a) taking into account speeding-up effects; (b) downgrading confidence in advocacy success rates; (c) updating to more conservative costing; and (d) applying GiveWell-style epistemological discounts (e.g. taking into account a conservative null-hypothesis prior, or discounting for publication bias/endogeneity/selection bias etc.)
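To make the compounding concrete, here is a minimal Python sketch of how successive discounts shrink a headline estimate. This is purely illustrative, not CEARCH's actual model - the discount factors are invented, and just happen to reproduce a 300x-to-roughly-80x drop of the kind described above.

```python
# Illustrative sketch only (not CEARCH's actual model): how a headline
# cost-effectiveness multiple shrinks as discounts are applied at
# deeper research stages. All discount factors below are hypothetical.

def apply_discounts(headline_multiple, discount_factors):
    """Multiply a headline estimate by a series of discount factors."""
    estimate = headline_multiple
    for factor in discount_factors:
        estimate *= factor
    return estimate

# e.g. a shallow-report estimate of 300x GiveWell, discounted for
# speeding-up effects, advocacy success rates, costing, and epistemics
shallow = 300
discounts = [0.75, 0.6, 0.8, 0.75]  # hypothetical factors
deep = apply_discounts(shallow, discounts)
print(round(deep, 1))  # 81.0 - in the ballpark of the 300x -> 80x drop
```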

As for what our priors should be with respect to whether a cause can really be 100x GiveWell - I would say there's a reasonable case for this, if: (a) one targets NCDs and other diseases that grow with economic growth (instead of being solved by countries getting richer and improving sanitation/nutrition/healthcare systems etc.); and (b) there are good policy interventions available, because it really does matter that: (i) a government has enormous scale/impact; (ii) its spending is counterfactually cheap relative to EA money that would have gone to AMF and the like; and (iii) policy tends to be sticky, and so the impact lasts in a way that distributing malaria nets or treating depression may not.

Hi Wayne,

You're right! I'm currently working on the intermediate report for diabetes, and one factor we're looking at that the shallow report did not cover is the speeding up effect, which we model by looking at the base rate from past data (i.e. country-years in which passage occurred, divided by total country-years). This definitely cuts into the headline cost-effectiveness estimate.
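The base-rate calculation described above is simple enough to sketch - with invented numbers, not the actual figures from the report:

```python
# Toy sketch of the base-rate idea (numbers are hypothetical):
# country-years in which a tax passed, divided by total country-years.

passages = 12          # hypothetical country-years in which passage occurred
country_years = 600    # hypothetical total country-years observed
base_rate = passages / country_years
print(base_rate)  # 0.02, i.e. a ~2% chance of passage per country-year
```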

On a related note, one issue, I think, is whether we think of tax policy success as counterfactually mutually exclusive, or as additive. (A) For the former, as you say, the idea is that the tax would have occurred anyway. (B) For the latter, the idea is that the tax an EA or EA-funded advocacy organization pushes shifts the tax-over-time curve upwards (i.e. what the tax rate is over time; presumably this slopes upwards, as countries get stricter). In short, we're having a counterfactual effect because the next round of tax increases doesn't replace so much as add on to what we've achieved, and our actions ensure that the tax rate at any one point in time is systematically higher than it otherwise would have been.

I think reality is a mix of both viewpoints (A) & (B) - success means draining the political capital to do more in the short to medium term, but you're probably also ensuring that the tax rate is systematically higher going forward. In practice, I tend to model using (A), just to be conservative.
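The two viewpoints can be caricatured in a few lines of Python - a toy sketch with hypothetical numbers, not the actual model:

```python
# Toy sketch of the two counterfactual models above (numbers invented).
# Benefit is measured in arbitrary units per percentage-point-year of tax.

def counterfactual_benefit_A(tax_points, years_brought_forward, benefit_per_point_year):
    """(A) Mutually exclusive: the tax would have passed anyway, so we
    only get credit for the years by which we brought it forward."""
    return tax_points * years_brought_forward * benefit_per_point_year

def counterfactual_benefit_B(tax_points, horizon_years, benefit_per_point_year):
    """(B) Additive: the whole tax-over-time curve is shifted upward by
    our increase, so the credit persists over the full horizon."""
    return tax_points * horizon_years * benefit_per_point_year

# e.g. a 10-point tax brought forward 3 years, vs. a permanent 10-point
# upward shift over a 20-year horizon
print(counterfactual_benefit_A(10, 3, 1))   # 30
print(counterfactual_benefit_B(10, 20, 1))  # 200
```

Modelling with (A), as noted above, is the conservative choice: it always yields the smaller figure whenever the horizon exceeds the speed-up.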

Hi Sophie,

Thanks for the feedback on RSL/WASSH - that's really useful, and something I'll definitely factor in at the next research stage!

On governmental/implementation costs - I definitely agree that this should be factored in, but just to clarify, the analysis does take this into account, using WHO estimates of the per capita cost of implementing WHO Best Buy policies on sodium (USD 0.03) and on alcohol taxes (USD 0.004, as an imperfect proxy for sodium taxes). Multiplying this by the average country's population size, as well as by the expected years in which implementation will occur (as a function of various discounts, like policy reversal rates etc.), we get the long-term cost of implementation in the average country.

To this, two discounts are applied: (a) a discount for the probability that advocacy succeeds (such that the implementation costs are incurred at all); and (b) a discount for government spending in the average country being far less counterfactually valuable than EA funding which would otherwise have gone to top GiveWell charities or the like. In my experience, discount (b) tends to mean that governmental costs aren't as significant a factor as they would theoretically be - but it does depend on the country of implementation (e.g. it's fantastically cost-effective to get rich-world governments to do stuff, given the counterfactuals; less so if you're draining sub-Saharan African governments' budgets).
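As a rough sketch of the calculation just described - the USD 0.03 per-capita figure is the WHO estimate quoted above, but the population, implementation years, and discount values are hypothetical placeholders, not the report's actual inputs:

```python
# Sketch of the implementation-cost calculation. Only the USD 0.03
# per-capita figure comes from the comment (WHO Best Buy sodium
# policies); everything else is a hypothetical placeholder.

per_capita_cost = 0.03       # USD/year, WHO estimate for sodium policies
population = 50_000_000      # hypothetical average country
expected_years = 20          # expected years of implementation

p_advocacy_success = 0.1     # discount (a): costs only incurred on success
counterfactual_value = 0.05  # discount (b): govt spending vs. EA funding

long_term_cost = per_capita_cost * population * expected_years
ea_equivalent_cost = long_term_cost * p_advocacy_success * counterfactual_value

print(long_term_cost)        # ~USD 30 million, nominal
print(ea_equivalent_cost)    # ~USD 150,000 in EA-funding-equivalent terms
```

Discount (b) is what does most of the work here: the nominal government cost is large, but its EA-funding-equivalent cost is small.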

Thanks for the clarifications, Michael, especially on non-reporters and non-response bias!

On base rates, my prior is that people who self select into GWWC pledges are naturally altruistic and so it's right (as GWWC does) to use the more conservative estimate - but against this is a concern that self-reported counterfactual donation isn't that accurate.

It's really great that GWWC noted the issue of social desirability bias, but I suspect it works to overestimate counterfactual giving tendencies (rather than overestimating GWWC's impact), since the desire to look generous almost certainly outweighs the desire to please GWWC (see research on donor overreporting: https://researchportal.bath.ac.uk/en/publications/dealing-with-social-desirability-bias-an-application-to-charitabl). I don't have a good solution to this, insofar as standard list experiments aren't great for dealing with quantification as opposed to yes/no answers - would be interested in hearing how your team plans to deal with this!

Hi Sjir, after quickly reading through the relevant parts in the detailed report, and following your conversation with Jeff, can I clarify that:

(1) Non-reporters are classified as donating $0, so that when you calculate the average donation per cohort it's basically (sum of what reporters donate)/(cohort size including both reporters and non-reporters)?
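If I've understood (1) correctly, the averaging would look like this (with invented numbers):

```python
# Minimal sketch of the averaging described in (1): non-reporters are
# counted as donating $0, so they remain in the denominator.

reported_donations = [2000, 500, 1500]  # hypothetical reporter donations
cohort_size = 10                        # reporters + non-reporters
average = sum(reported_donations) / cohort_size
print(average)  # 400.0 per cohort member - an underestimate if any
                # non-reporter is in fact still donating
```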

(2) And the upshot of (1) would be that the average cohort donation each year is in fact an underestimate, insofar as some non-reporters are in fact donating? I can speak for myself, at least - I'm part of the 2014 cohort, and am obviously still donating, but I have never reported anything to GWWC since that's a hassle.

(3) Non-response bias is really hard to get around (as political science and polling have shown us these last few years), but you could probably try to get around it either by relying on the empirical literature on pledge attrition (e.g. https://www.sciencedirect.com/science/article/pii/S0167268122002992), or else by making a concerted push to reach out to non-reporters and finding the proportion who are still giving. That in turn will also be subject to non-response bias, insofar as twice-over non-reporters differ from non-reporters who do respond to the concerted push (call them semi-reporters) - so you would want to apply a further discount to your findings, perhaps based on the headline reporter/semi-reporter difference, if you assume that the reporter/semi-reporter donation gap equals the semi-reporter/total non-reporter gap.
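To illustrate the further discount I have in mind in (3) - hypothetical figures throughout:

```python
# Sketch of the discount proposed in (3) (all figures hypothetical):
# estimate unreached non-reporters' giving from "semi-reporters"
# (non-reporters who answered a concerted outreach push), then apply
# the reporter/semi-reporter ratio a second time, on the assumption
# that the semi-reporter/unreached gap mirrors the reporter/semi-reporter gap.

avg_reporter_donation = 1000       # hypothetical average for reporters
avg_semi_reporter_donation = 600   # hypothetical, from the outreach push

# assumed ratio: semi-reporters give this fraction of what reporters give
ratio = avg_semi_reporter_donation / avg_reporter_donation  # 0.6

# apply the same ratio again for the unreached non-reporters
est_unreached_donation = avg_semi_reporter_donation * ratio
print(est_unreached_donation)  # 360.0
```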

(4) My other big concern beyond non-response bias is just counterfactuals, but looking at the report it's clearly been very well thought out, and I'm really impressed at the thoroughness and robustness. All I would add is that I would probably put more weight on the population base rate (even if you have to source US numbers), and even revise that downwards, given that the pool EA draws from is distinctively non-religious and has lower charitable donation rates.

That's an interesting perspective! You're right that the scientific experts would disagree strongly on this, and to cite one of them: "While there is some controversy over the idea of a U or J-shaped curve for salt intake and cardiovascular outcomes, the more robust studies show that these use faulty evidence." Another expert adds to this: "In healthy adults, sodium is needed to sustain BP, but we don't observe a J-curve normally: there is sodium in all food, and the kidney is a great engine at holding on to sodium in low-sodium settings, such that lower BP is basically almost always better."

I also don't think it's accurate to say that the evidence is purely observational. (a) Aburto et al's (2013) meta-analysis of RCTs and prospective cohort studies shows that a reduction in sodium intake significantly reduced resting systolic blood pressure by 3.39 mm Hg; while Ettehad et al's meta-analysis, entirely of RCTs, shows that every 10 mm Hg reduction in systolic blood pressure significantly reduced the risk of major cardiovascular disease events (relative risk: 0.8), coronary heart disease (relative risk: 0.83), stroke (relative risk: 0.73) and heart failure (relative risk: 0.73), leading to a significant 13% reduction in all-cause mortality. (b) Then there is the Strazzullo et al meta-analysis of both RCTs and population studies, showing that additional sodium consumption of 1880 mg/day leads to greater risk of CVD (relative risk: 1.14).
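As a back-of-envelope illustration of how these two estimates chain together - note that the log-linear scaling of relative risk with blood-pressure reduction is my own simplifying assumption here, not a method taken from the cited meta-analyses:

```python
import math

# Back-of-envelope only: assume relative risk scales log-linearly with
# systolic BP reduction (my assumption, not from the cited papers).
rr_per_10mmhg = 0.8   # Ettehad et al: major CVD events, per 10 mm Hg
bp_reduction = 3.39   # Aburto et al: mm Hg reduction from lower sodium

implied_rr = math.exp((bp_reduction / 10) * math.log(rr_per_10mmhg))
print(round(implied_rr, 3))  # ~0.927, i.e. roughly a 7% reduction in risk
```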

On the sweating issue (and hence the associated concerns about exercise, and whether people in hot climates will be hurt) - I don't think this is an unreasonable fear a priori, but the Lucko et al meta-analysis of RCTs suggests that 93% of dietary sodium is excreted via urine, so that should basically anchor our expectations that this isn't going to be a significant way in which sodium is lost (let alone to such an extent that it has bad health consequences).
