Karthik Tadepalli

Economics PhD @ UC Berkeley
690 karma · Joined Apr 2021 · karthiktadepalli.com

Comments (136)

The Parable of the Boy Who Cried 5% Chance of Wolf

Related: A Failure, But Not of Prediction. The best case for x-risk reduction I've ever read, and it doesn't even mention x-risks once.

Internationalism is a key value in EA

But the idea that a person in another country is worth caring about just as much as a person in your own country is a necessary premise for believing that helping abroad is a way to do more good per dollar. Most people would implicitly rather help one homeless person in their city than 100 homeless people in another country.

How to Talk to Lefties in Your Intro Fellowship

Again, it informs only how they trade off health and income. The main point of DALYs/QALYs is to measure health effects, and in that regard EA grantmakers use off-the-shelf QALY estimates rather than calculating them. Even if they were to calculate them, the IDinsight study contains nothing that could be used to calculate QALYs; it focuses solely on income-vs-health tradeoffs.

How to Talk to Lefties in Your Intro Fellowship

That's also simply not true, because EAs use off-the-shelf DALY/QALY estimates from other organizations all the time. And the study is only about health-vs-income tradeoffs, not health measurement, which is what QALY/DALY estimates actually do.

Edit: as a concrete example, Open Phil's South Asian air quality report takes its DALY estimates from the State of Global Air report, which is not based on any beneficiary surveys.
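To make the distinction in the comments above concrete, here is a minimal sketch (not GiveWell's or Open Phil's actual model, and all numbers are hypothetical): the DALY estimate measuring the health effect is an off-the-shelf input, while a beneficiary-survey-informed weight only governs how that health effect is traded off against income.

```python
# Sketch of the distinction: DALY estimates measure health effects;
# a survey-derived moral weight only sets the health-vs-income tradeoff.
# All numbers below are hypothetical illustrations.

def total_value(dalys_averted, income_doublings, health_vs_income_weight):
    """Combine a health effect (DALYs averted) with an income effect
    (units of income doublings), using a moral weight saying how many
    income-doubling units one DALY averted is worth."""
    return dalys_averted * health_vs_income_weight + income_doublings

# The DALY figure comes from an external source (e.g. a global burden
# estimate); the tradeoff weight only rescales it relative to income.
value = total_value(dalys_averted=10, income_doublings=5,
                    health_vs_income_weight=2.3)
print(value)  # 28.0
```

Changing the survey-informed weight rescales health relative to income, but never changes the underlying DALY measurement itself, which is the point being made above.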

How to Talk to Lefties in Your Intro Fellowship

That seems a bit misleading, since the IDinsight study, while excellent, is not actually the basis for QALY estimates as used in e.g. the Global Burden of Disease report. My understanding is that it informs the way GiveWell and Open Philanthropy trade off health vs income, but nothing more than that.

To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing

I am not convinced about WELLBYs for a few reasons that I might comment on later, but my primary response to this post is admiration for HLI's persistence and thoroughness in making the case for SWB measures. I have a very strong intuition that SWB measures are invalid, but each analysis you all do chips away at that intuition little by little. It's really nice to see such an ambitious project to change one of the most fundamental tools of EA.

Common-sense cases where "hypothetical future people" matter

An even simpler counterexample: Josh's view would rule out climate change as important at all. But Josh presumably does not believe climate change is irrelevant just because it will mostly harm people a few decades from now.

I suspect the unarticulated view is that Josh doesn't place value on lives in the far future, but he hasn't explained why lives 1000 years from now are less important than lives 100 years from now. I sympathize, because I have the same intuition. But it's probably wrong.

Economic losers: SoGive's review of deworming, and why we're less positive than GiveWell

So I went over the additional documents, and I owe you an apology for being dismissive. There is indeed more to the analysis than I thought, and it was flippant of me to suggest that your or GiveWell's replicability adjustment was just "this number looks too high" and thus already incorporates this. Having gone through the replicability adjustment document, I think it makes a lot more sense than I gave it credit for.

What I couldn't gather from the document was where exactly you differ from GiveWell. Is it only in the economic-losers weighting? Were your components for weight gain, years of schooling, and cognition the same as GiveWell's? In the sheet where you calculate the replicability adjustment, there is no factoring-in of economic losers as far as I can tell, so to arrive at the new replicability adjustment you must have differed from GiveWell in the mechanism adjustment, right?
