Bio


I currently work with the CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, and general org-boosting to support policy advocacy for market-shaping tools that incentivise innovation in, and ensure access to, antibiotics to help combat AMR.

I previously did AIM's Research Training Program, was supported by an FTX Future Fund regrant and later Open Philanthropy's affected grantees program, and before that I spent 6 years doing data analytics, business intelligence and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

How others can help me

I'm looking for "decision guidance"-type roles e.g. applied prioritization research.

How I can help others

Do reach out if any of the above piques your interest :)

Comments

Thanks for sharing the article. I found these paragraphs particularly sobering:

In the nearer term, the situation is bleaker. The advocacy group 1DaySooner has been pushing a goal of vaccinating 50 million children this year and the next (2024 and 2025). That takes 200 million doses, which Serum claims it can produce. But Gavi only projects a total of 2 million immunized children from 2021 to 2025, or 25 times fewer children than could theoretically be vaccinated with more funding. ...

Funding the standard vaccines is great. But every 100,000 kids vaccinated with R21 means 629 fewer kids dead from malaria. The 48 million kid gap between 1DaySooner's vaccination goal and Gavi's current plans for this year and next, then, represents about 300,000 additional dead kids. Those are lives we can save with sufficient investment.

As Jacob Trefethen, a funder of global health research at Open Philanthropy, recently asked, “Are we, as a country, as a world, really going to let money be the blocker to kids getting a malaria vaccine?”

Saving 300k children's lives would reduce under-5 malaria deaths worldwide by a staggering ~70%.
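To make the arithmetic explicit (a rough check in Python; the baseline in the second step is backed out from my ~70% figure, not a number from the article):

```python
# Rough check of the quoted malaria-vaccine arithmetic (inputs from the passages above).

deaths_averted_per_100k = 629      # R21: deaths averted per 100k kids vaccinated
vaccination_gap = 48_000_000       # 1DaySooner's 50M goal minus Gavi's 2M projection

additional_deaths = vaccination_gap / 100_000 * deaths_averted_per_100k
print(f"{additional_deaths:,.0f}")  # ~301,920, i.e. the article's "about 300,000"

# Backing out the baseline that my ~70% figure implies (my inference, not the article's):
implied_baseline_deaths = 300_000 / 0.70
print(f"{implied_baseline_deaths:,.0f}")  # ~428,571 under-5 malaria deaths worldwide
```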

The SBF example is a poor one that obscures the basic point, because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage; that's debatable, and a separate interesting topic I haven't seen hard analysis of.

I've wondered about this as well. Scott Alexander's mistake #57 seems like a relevant starting point: 

57: (12/4/23) In In Continued Defense Of Effective Altruism, I said of EA’s failures (primarily SBF) that “I’m not sure they cancel out the effect of saving one life, let alone 200,000”. A friend convinced me that this was an unfair exaggeration of the point I wanted to make. There are purported exchange rates between money and lives, destroying $5 - $10 billion in value is pretty bad by all of them, and there are knock-on effects on social trust from fraud that suggest its negative effects should be valued even higher. I regret this sentence and no longer stand by it.

One guess as to how Scott's link (US VSL) might underestimate the value destruction: in Nov 2021, GiveWell aimed to direct ~$1bn annually by 2025. The year after, they revised their future funding projections downward, due in part to their main donor Open Phil revising downward its planned 2022 allocation to GW after a ~40% reduction in Open Phil's asset base since the end of the previous year from "the recent stock market decline", which changed their portfolio allocation and in particular more than proportionally reduced their GW allocation (partly offset by GW overachieving by ~40% RFMF-wise in finding cost-effective opportunities). My low-confidence guess is that "the recent stock market decline" has quite a bit to do with FTX. It seems likely to me that the NPV of the projected funding reduction to GW is >$1bn over, say, the next decade; at ~$5k per life saved (~1,000x less than the US VSL Scott linked to), that's >200k lives that could have been saved but weren't, most of them children under 5. (This galls me, to be honest, so I'd like to be told my reasoning is wrong or something.)
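To spell out the BOTEC (a minimal sketch; the NPV figure and the FTX attribution are my assumptions, as above):

```python
# BOTEC for the guess above. The NPV of GiveWell's funding shortfall and its
# attribution to FTX are assumptions, not established facts.

npv_funding_loss = 1_000_000_000   # assumed: >$1bn less to GW over, say, a decade
cost_per_life_saved = 5_000        # ~$5k per life at GW top-charity cost-effectiveness

lives_not_saved = npv_funding_loss / cost_per_life_saved
print(f"{lives_not_saved:,.0f}")   # 200,000 lives, most of them children under 5
```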

That's one guess; I'm sure there are more I'm missing that are still BOTEC-able, to say nothing of the knock-on effects on social trust from fraud that Scott mentioned.

To the OP: I think it's worth reflecting on the warning that maximization is perilous.

> For an intervention to be a longtermist priority, there needs to be some kind of concrete story for how it improves the long-term future.

I disagree with this. With existential risk from unaligned AI, I don't think anyone has ever told a very clear story about how AI will actually get misaligned, get loose, and kill everyone.

When I read the passage you quoted I thought of e.g. Critch's description of RAAPs and Christiano's what failure looks like, both of which seem pretty detailed to me without necessarily fitting the "AI gets misaligned, gets loose and kills everyone" meme; both Critch and Christiano seem to me to be explicitly pushing back against consideration of only that meme, and Critch in particular thinks work in this area is ~neglected (as of 2021, I haven't kept up with goings-on). I suppose Gwern's writeup comes closest to your description, and I can't imagine it being more concrete; curious to hear if you have a different reaction.

I think your confounders are on the money.

You might be interested in Elizabeth's Change my mind: Veganism entails trade-offs, and health is one of the axes. I especially appreciated her long list of cruxes, the pointer to Faunalytics' study of nutritional issues in ex-vegans & ex-vegetarians, and her analysis of that study attempting to adjust for its limitations which basically strengthens its findings (to my reading). 

I'd also guess, without much evidence, that there's a halo effect-like thing going on where if someone really cares about averting animal suffering, a vegan diet starts seeming more virtuous, which spills over into their assessment of its health benefits.

Regarding "To be a good EA, in the sense that it is conceived of by most EAs, you must enjoy your life to some degree. This is because living one’s life is rarely an exclusively moral decision", you may also like Tyler Alterman's reflections, in particular this paragraph:

Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Regarding "Utilitarianism is the most demanding moral framework I’m aware of, in the modern world", I think Scott's distinction between axiology, morality and law is useful. Quoting liberally from that essay:

These three concepts are pretty similar; they’re all about some vague sense of what is or isn’t desirable. But most societies stop short of making them exactly the same. Only the purest act-utilitarianesque consequentialists say that axiology exactly equals morality, and I’m not sure there is anybody quite that pure. And only the harshest of Puritans try to legislate the state law to be exactly identical to the moral one. To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history.

These concepts stay separate because they each make different compromises between goodness, implementation, and coordination. ...

Axiology is just our beliefs about what is good. If you defy axiology, you make the world worse.

At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities. ...

Law is an attempt to formalize the complicated demands of morality, in order to make them implementable by a state with police officers and law courts. ...

In a healthy situation, each of these systems reinforces and promotes the other. Morality helps you implement axiology from your limited human perspective, but also helps prevent you from feeling guilty for not being God and not being able to save everybody. The law helps enforce the most important moral and axiological rules but also leaves people free enough to use their own best judgment on how to pursue the others. And axiology and morality help resolve disputes about what the law should be, and then lend the support of the community, the church, and the individual conscience in keeping people law-abiding.

In these healthy situations, the universally-agreed priority is that law trumps morality, and morality trumps axiology. ...

In unhealthy situations, you can get all sorts of weird conflicts. Most “moral dilemmas” are philosophers trying to create perverse situations where axiology and morality give opposite answers. For example, the fat man version of the trolley problem sets axiology (“it’s obviously better to have a world where one person dies than a world where five people die”) against morality (“it’s a useful rule that people generally shouldn’t push other people to their deaths”). And when morality and state law disagree, you get various acts of civil disobedience, from people hiding Jews from the Nazis all the way down to Kentucky clerks refusing to perform gay marriages.

Epistemic status: public attempt at self-deconfusion & not just stopping at knee-jerk skepticism

The recently published Cost-effectiveness of interventions for HIV/AIDS, malaria, syphilis, and tuberculosis in 128 countries: a meta-regression analysis (so recent it's listed as being published next month), in my understanding, aims to fill country-specific gaps in CEAs for all interventions in all countries for HIV/AIDS, malaria, syphilis, and tuberculosis, to help national decision-makers allocate resources effectively – to a first approximation I think of it as "like the DCP3 but at country granularity and for Global Fund-focused programs". They do this by predicting ICERs, IQRs, and 95% UIs in US$/DALY using the meta-regression parameters obtained from analysing ICERs published for these interventions (more here). 
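To illustrate the kind of approach I understand them to be taking, here's a toy sketch; the covariates, coefficients, and data are all invented placeholders, not their actual model:

```python
import numpy as np

# Toy version of the idea: regress log(ICER) on covariates from published CEAs,
# then predict ICERs for country-intervention pairs lacking published estimates.

rng = np.random.default_rng(0)
n = 200
log_gdp_pc = rng.normal(8.0, 1.0, n)   # log GDP per capita (placeholder covariate)
is_screening = rng.integers(0, 2, n)   # intervention-type dummy (placeholder)
log_icer = 2 + 0.5 * log_gdp_pc - 1.0 * is_screening + rng.normal(0, 0.5, n)

# Fit the "meta-regression" by least squares on the synthetic data
X = np.column_stack([np.ones(n), log_gdp_pc, is_screening])
beta, *_ = np.linalg.lstsq(X, log_icer, rcond=None)

# Predict a US$/DALY ICER for a (country, intervention) pair with no published CEA
x_new = np.array([1.0, 7.0, 1.0])
print(np.exp(x_new @ beta))
```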

AFAICT their methodology and execution seem superb, so I was keen to see their results: 


Antenatal syphilis screening ranks as the lowest median ICER in 81 (63%) of 128 countries, with median ICERs ranging from $3 (IQR 2–4) per DALY averted in Equatorial Guinea to $3473 (2244–5222) in Ukraine.

At the risk of being overly skeptical: $3 per DALY averted is >30x better than Open Phil's 1,000x bar of $100 per DALY, which is roughly GW top charity level, which OP has said is hard to beat, especially for a direct intervention like antenatal syphilis screening (quick ratio check after the bullets below). It makes me wonder how much credence to put in the study's findings for actual resource allocation decisions (esp. Figure 4, which ranks top interventions at country granularity). Also:

  • Specifically re: antenatal syphilis screening, CE/AIM's report on screening + treating antenatal syphilis estimates $81 per DALY; I'm hard-pressed to believe that removing treatment improves cost-eff >1 OOM  
  • I'm reminded of the time GW found 5 separate spreadsheet errors in a DCP2 estimate of soil-transmitted-helminth (STH) treatment that together misleadingly 'improved' its cost-effectiveness ~100-fold from $326.43 per DALY (correct output) to just $3.41 (wrong, and coincidentally in the ballpark of the estimate above that triggered my skepticism) 
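Spelling out the two ratios above:

```python
# The two sanity-check ratios from the points above.

print(f"{100 / 3:.0f}x")        # ~33x: $3/DALY vs Open Phil's $100/DALY bar
print(f"{326.43 / 3.41:.0f}x")  # ~96x: GiveWell's corrected vs erroneous DCP2 STH estimate
```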

So how should I think about and use their findings given what seems like reasonable grounds for skepticism, if I'm primarily interested in helping decision-makers help people better? Scattered thoughts to defend the study / push back on my nitpicking above:

  • even if imperfect – and I'm not confident in my skepticism above – they clearly improve substantially upon the previous state of affairs (CEA gaps everywhere at country-disease-intervention level granularity; expert opinion not lending itself to country-specific predictions; case-by-case methods often being unsuccessful)
  • their recommendations seem reasonably hedged, not naively maximalist: they include 95% uncertainty intervals; they clearly say "cost-effectiveness... should not be the only criterion... [consider also] enhancing equity and providing financial risk protection"
  • even a naively maximalist recommendation ("first fund the lowest-ICER intervention, then the 2nd-lowest, ... until funds run out"; see the sketch after this list) doesn't seem unreasonable in this context – essentially countries would end up funding more antenatal syphilis screening, intermittent preventive treatment of malaria in pregnant women and infants, and chemotherapy for drug-susceptible TB (just from eyeballing Figure 4)
  • I interpret what they're trying to do as not so much "here are the ICER league tables, use them", but shifting decision-makers' approach to resource allocation from needing a single threshold for all healthcare funding decisions to (quoting them) "ICERs ranked in country-specific league tables", and in the long run this perspective shift seems useful to "bake into" decision-making processes, even if the specific figures in this specific study aren't necessarily the most accurate and shouldn't be taken at face value
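For what it's worth, the naively maximalist rule from the third point above is mechanically just a greedy loop over the league table; a minimal sketch with made-up interventions, ICERs and costs (not figures from the study):

```python
# Minimal sketch of "fund in ICER order until money runs out".
# All interventions, ICERs and program costs below are made-up placeholders.

def greedy_allocate(league_table, budget):
    """league_table: list of (name, icer_usd_per_daly, program_cost) tuples."""
    funded = []
    for name, icer, cost in sorted(league_table, key=lambda row: row[1]):
        if cost <= budget:       # fund it if the remaining budget covers it
            budget -= cost
            funded.append(name)
    return funded, budget

table = [
    ("antenatal syphilis screening", 3, 40),
    ("IPT of malaria in pregnancy", 25, 30),
    ("chemotherapy for drug-susceptible TB", 60, 50),
]
print(greedy_allocate(table, budget=80))
# (['antenatal syphilis screening', 'IPT of malaria in pregnancy'], 10)
```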

That said, I do wonder if the authors could have done a bit better, like 

  • cautioning against naively taking the best cost-eff estimates at face value, instead of suggesting "Funds could be first spent on the intervention that has the lowest ICER. Following that, other interventions could be funded in order of their ICER rankings, as long as there are available funds"
  • spot-checking some of (not all) the top cost-eff ICERs that went into their meta-regression analysis to get a sense of their credibility, especially those which feed into their main recommendations, like GW did above with the DCP2 estimate for STH treatment 
  • extracting qualitative proxies for decision-maker guidance from an analysis of the main drivers behind the substantial ranking differences in intervention ICERs across economic and epidemiological contexts (eg "we should expect antenatal syphilis screening to be substantially less cost-effective in our context due to factors XYZ, let's look at other interventions instead" – what would a short useful list of XYZ look like?), instead of just saying "we found the rankings differ substantially"

I hadn't, thanks for the pointer Pablo.

Curious what people think of Gwern Branwen's take that our moral circle has historically narrowed as well as expanded (contra Singer), so that we should probably just call it a shifting circle. His summary:

The “expanding circle” historical thesis ignores all instances in which modern ethics narrowed the set of beings to be morally regarded, often backing its exclusion by asserting their non-existence, and thus assumes its conclusion: where the circle is expanded, it’s highlighted as moral ‘progress’, and where it is narrowed, what is outside is simply defined away. 

When one compares modern with ancient society, the religious differences are striking: almost every single supernatural entity (place, personage, or force) has been excluded from the circle of moral concern, where they used to be huge parts of the circle and one could almost say the entire circle. Further examples include estates, houses, fetuses, prisoners, and graves.

(I admittedly don't find his examples all that persuasive, probably because I'm already biased to only consider beings that can feel pleasure and suffering.)

What's the "so what"? Gwern:

One of the most difficult aspects of any theory of moral progress is explaining why moral progress happens when it does, in such apparently random non-linear jumps. (Historical economics has a similar problem with the Industrial Revolution & Great Divergence.) These jumps do not seem to correspond to simply how many philosophers are thinking about ethics. 

As we have already seen, the straightforward picture of ever more inclusive ethics relies on cherry-picking if it covers more than, say, the past 5 centuries; and if we are honest enough to say that moral progress isn’t clear before then, we face the new question of explaining why things changed then and not at any point previous in the 2500 years of Western philosophy, which included many great figures who worked hard on moral philosophy such as Plato or Aristotle. 

It is also troubling how much morality & religion seems to be correlated with biological factors. Even if we do not go as far as Julian Jaynes's theories of gods as auditory hallucinations, there are still many curious correlations floating around.

Nicolaj correct me if I'm wrong – I think it's derived here in the OP:

(Quantitatively it would be captured by η when combined with the improving circumstances component. That comes from solving the last equation in Rethink Priorities' 2023 report for η given r_GiveWell and g—i.e., assuming that the compounding non-monetary benefits factor also reflects diminishing marginal utility from income doublings. As a result I'm assuming the discount rate reflects η for the remainder of the post.)

That last equation on pg 48 is r_GiveWell = (1 + δ)(1 + g)^(η−1) − 1. δ is the pure time preference rate, for which GiveWell's choice is δ = 0%; pg 30 in the RP report above summarizes the reasoning behind this choice.
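For concreteness, rearranging that equation for η with δ = 0 gives η = 1 + ln(1 + r)/ln(1 + g); a quick sketch (the r and g inputs below are placeholders, not the OP's values):

```python
from math import log

# Solve r_GiveWell = (1 + δ)(1 + g)**(η - 1) - 1 for η.
# With GiveWell's δ = 0 this reduces to η = 1 + ln(1 + r) / ln(1 + g).

def eta_from_r_g(r, g, delta=0.0):
    return 1 + log((1 + r) / (1 + delta)) / log(1 + g)

print(eta_from_r_g(r=0.04, g=0.03))  # ~2.33 with these made-up inputs
```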

Maybe 

Other scattered remarks
