
Karthik Tadepalli

Economics PhD @ UC Berkeley // Consulting Researcher @ GiveWell
4096 karma · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (483)

Yeah, I was referring more to whether it can bring new ways of spending money to improve the world. There will be new market failures to solve, new sorts of technology that society could gain from accelerating, and new ways to get traction on old problems.

Similar to Ollie's answer, I don't think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I'd love to see someone carefully think through what those opportunities might be.

A history of ITRI, Taiwan's national electronics R&D institute. It was established in 1973, when Taiwan's income was less than Pakistan's income today. Yet it was single-handedly responsible for the rise of Taiwan's electronics industry, spinning out UMC, MediaTek and most notably TSMC. To give you a sense of how insane this is, imagine that Bangladesh announced today that they were going to start doing frontier AI R&D, and in 2045 they were the leaders in AI. ITRI is arguably the most successful development initiative in history, but I've never seen it brought up in either the metascience/progress community or the global dev community.

I didn't; my focus here is on orienting people towards growth theory, not empirics.

I don't understand this view. Would they want their initiative to be run by incompetent people? If not, in what world do they not train their staff? The fact that they also tacked on an expectation that they would not migrate does not mean that expectation was pivotal in their decision.

I think Jason is saying that the "support to emigrate" was limited to recommendations.

Yes, continuity doesn't rule out St. Petersburg paradoxes. But I don't see how unbounded utility leads to a contradiction. Can you demonstrate it?

Continuity doesn't imply your utility function is bounded, just that it never takes on the value "infinity"; i.e., for any value it takes on, there are higher and lower values that can be averaged to reach that value.

Maximizing expected utility is not the same as maximizing expected value. The latter assumes risk neutrality, but vNM is entirely consistent with maximizing expected utility under arbitrary levels of risk aversion. So it doesn't support your view, expressed elsewhere, that risk aversion is inconsistent with vNM.

The key point is that there is a subtle difference between maximizing a linear combination of outcomes and maximizing a linear combination of some transformation of outcomes. That transformation can be arbitrarily concave, so we can end up making a risk-averse decision.
