
Karthik Tadepalli

Economics PhD @ UC Berkeley
2933 karma · Joined Apr 2021 · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (366)

I think a neutral world is much better than extinction, and most dystopias are also preferable to human extinction. The latter is debatable but the former seems clear? What do you imagine by a neutral world?

I donate 5% of my income and I'm gradually escalating (only two years out of college). I don't plan to pledge because I don't see the need for a commitment device. I don't like the commitment framing - I think it takes a cudgel to an altruistic motivation that to me feels natural right now. Put differently:

I ran a different version of the [drowning child] thought experiment without language around obligation: “Imagine that – somehow – the universe has deemed me unalterably Good regardless of whether I help. Do I still want to rescue the child?”

...

After rewording Singer’s thought experiment, it dawned on me that I’d been using the frame of an “obligation” as a psychological whip to get myself to do what I already wanted to do. Weird.

When I went looking for the force or deity outside of myself that held the whip, there was nothing there. I was the one holding the whip. This was a revelation. The thing underlying my moral “obligation” came from me, my own mind. This underlying thing was actually a type of desire. It turned out that I wanted to help suffering people. I wanted to be in service of a beautiful world. Hm.

Suddenly the words of my moral vocabulary took on new meanings. Was I obligated to save the drowning child? Did I have a responsibility or moral duty? Was it something that one should or ought do? This language seemed misleading. These words seemed to replace my natural impulse to help with an artificial demand imposed by…nothing and no one.

The natural concern is that I'm naively assuming that my future self will share these values. That is correct.

If you are truly risk neutral, ruin games are good. The long-term outcome is not worse, because the 99% of outcomes in which the world is destroyed are outweighed by how much better the remaining 1% are. If you believe in risk neutrality as a normative stance, then you should be okay with that.

Put another way: if someone offers you a bet with a 99% chance to 1000x your money and a 1% chance to lose it all, you might want to take it once or twice. You don't have to choose between "never take it" and "take it forever". But if you find sequence dependence desirable in this situation, then you shouldn't be risk neutral.
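A quick simulation makes the point concrete (a minimal sketch: the 99%/1000x numbers come from the bet above, everything else is illustrative):

```python
import random

random.seed(0)

P_WIN = 0.99                     # the bet above: 99% chance to 1000x your money,
N_AGENTS, ROUNDS = 10_000, 500   # 1% chance of losing everything

# An agent who keeps taking the bet stays solvent only if they never lose once.
def survives(rounds: int) -> bool:
    return all(random.random() < P_WIN for _ in range(rounds))

alive = sum(survives(ROUNDS) for _ in range(N_AGENTS))

# Per-round expected value is 0.99 * 1000 = 990x your stake, so expected wealth
# explodes, yet each trajectory survives with probability 0.99^500, about 0.7%.
print(f"agents still solvent after {ROUNDS} rounds: {alive}/{N_AGENTS}")
```

Risk neutrality endorses taking the bet every round anyway; if that conclusion bothers you, the problem is with risk neutrality itself.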

Deciding to apply risk aversion in some cases and risk neutrality in others is not special to ergodicity either. With a risk-averse utility function, the effect of curvature grows with the stakes. I claim that for "small" values of lives at stake, my utility function is only slightly curved, so it's approximately linear and risk neutrality describes my optimal choice well. However, for "large" values, the curvature dominates and risk neutrality fails.
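In symbols, this is just the standard second-order Taylor expansion (nothing here is specific to this thread): for a gamble of size $h$ around the current level $x_0$,

$$u(x_0 + h) \approx u(x_0) + u'(x_0)\,h + \tfrac{1}{2}u''(x_0)\,h^2,$$

so when $h$ is small relative to $x_0$ the linear term dominates and choices look risk neutral, while for large $h$ the negative curvature term $\tfrac{1}{2}u''(x_0)h^2$ takes over.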

Like most defenses of ergodicity economics that I have seen, this is just an argument against risk neutral utility.

Edit: I never defined risk neutrality. Expected utility theory says that people maximize the expectation of their utility function, $E[u(x)]$. Risk neutrality means that $u$ is linear, so that maximizing expected utility is the exact same thing as maximizing the expected value of the outcome $x$. That is, $E[u(x)] = u(E[x])$. However, this is not true in general. If $u$ is concave, meaning that it satisfies diminishing marginal utility, then $E[u(x)] < u(E[x])$ - in other words, the expected utility of a bet is less than the utility of its expected value as a sure thing. This is known as risk aversion.
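A two-line numerical check of that inequality, using $\sqrt{x}$ as a stand-in concave utility (the specific bet and function are just for illustration):

```python
import math

# A 50/50 bet paying 0 or 100, evaluated with the concave utility u(x) = sqrt(x).
outcomes, probs = [0.0, 100.0], [0.5, 0.5]

expected_utility = sum(p * math.sqrt(x) for p, x in zip(probs, outcomes))        # E[u(x)] = 5.0
utility_of_expectation = math.sqrt(sum(p * x for p, x in zip(probs, outcomes)))  # u(E[x]) ~ 7.07

print(expected_utility < utility_of_expectation)  # True: the bet is worth less than its EV for sure
```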

Consider for instance whether you are willing to take the following gamble: you are offered the chance to press a button with a 51% chance of doubling the world's happiness but a 49% chance of ending it. This problem, also known as Thomas Hurka's St. Petersburg Paradox, highlights the following dilemma: maximizing expected utility suggests you should press it, as it promises a net positive outcome.

No. A risk neutral agent would press the button because they are maximizing expected happiness. A risk averse agent will get less utility from the doubled happiness than the utility they would lose from losing all of the existing happiness. For example, if your utility function is $u(x) = \sqrt{x}$ and current happiness is 1, then the expected utility of this bet is $0.51\sqrt{2} + 0.49\sqrt{0} \approx 0.72$. Whereas the current utility is $\sqrt{1} = 1$, which is superior to the bet.

When you are risk averse, your current level of happiness/income determines whether a bet is optimal. This is a simple and natural way to incorporate the sequence dependence that you emphasize. After winning a few bets, your income/happiness has grown so much that the marginal value of further gains is far below the value of what you already have, so risking it all is no longer worthwhile. Expected utility theory is totally compatible with this; no ergodicity economics is needed to resolve this puzzle.
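Here's a sketch of that wealth dependence under expected utility alone (the 51%/lose-everything bet echoes the button example above; the $\sqrt{x}$ utility and the dollar amounts are just illustrative):

```python
import math

# Bet: 51% chance to win a fixed $1,000, 49% chance to lose your entire wealth.
# u(x) = sqrt(x) is a stand-in concave utility; u(0) = 0 on the losing branch.
def accepts(wealth: float, prize: float = 1000.0, p_win: float = 0.51) -> bool:
    expected_utility_of_bet = p_win * math.sqrt(wealth + prize)
    return expected_utility_of_bet > math.sqrt(wealth)

for wealth in [100, 300, 1_000, 10_000]:
    print(f"wealth = {wealth}: take the bet? {accepts(wealth)}")

# Output: True, True, False, False - the same bet is worth taking when wealth is
# low but not after it has grown, which is exactly the sequence dependence above.
```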

Now, risk aversion is unappealing to some utilitarians because it implies that there is diminishing value to saving lives, which is its own bullet to bite. But any framework that takes the current state of the world into account when deciding whether a bet is worthwhile has to bite that bullet, so it's not like ergodicity economics is an improvement in that regard.

Good post, more detailed thoughts later, but one nitpick:

As far as I can tell, the deworming project being “one of the most successful RCTs to date” is just wrong. There is widespread disagreement about what we can conclude about deworming from the available evidence, with many respected academics saying that deworming has no effect on education at all. Many RCTs show a much larger effect.

I don't think "most successful RCT" is supposed to mean "most effective intervention" but rather "most influential RCT"; deworming has been picked up by a bunch of NGOs and governments after Miguel and Kremer, plausibly because of that study.

(Conflict note: I know and like Ted Miguel)

I upvoted because I liked the story, but this feels like a pretty glaring strawman of "mathematical solutions to multifaceted human problems". I can't imagine any reasonable solution/intervention to which this critique would apply.

Came here to comment this. It's the kind of paradigmatic criticism that Scott Alexander talks about, which everyone can nod and agree with when it's an abstraction.

There were a bunch, most prominently IRRI in the Philippines - Table 1 in this paper lists all of them.

Interesting - then I figure it probably substituted for meat consumption at restaurants rather than meat consumption at home. Regardless, I think it's mostly valid to use an increase in plant-based consumption as a proxy for a reduction in meat consumption, since total food consumption is relatively stable.
