I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Maximizing expected utility is not the same as maximizing expected value. The latter assumes risk neutrality, whereas vNM is entirely consistent with maximizing expected utility under arbitrary levels of risk aversion; so it doesn't support your view, expressed elsewhere, that risk aversion is inconsistent with vNM.
The key point is that there is a subtle difference between maximizing a linear combination of outcomes and maximizing a linear combination of some transformation of those outcomes. That transformation can be arbitrarily concave, so we can end up making a risk-averse decision.
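A minimal worked example (the numbers are mine, purely illustrative): take $u(x) = \sqrt{x}$ and compare a 50/50 gamble between $0 and $100 against a sure $40.

```latex
% Illustrative example: concave utility makes a vNM expected-utility
% maximizer turn down a gamble with strictly higher expected value.
\[
\mathbb{E}[X_{\text{gamble}}] = 0.5(0) + 0.5(100) = 50 \;>\; 40,
\]
\[
\mathbb{E}[u(X_{\text{gamble}})] = 0.5\sqrt{0} + 0.5\sqrt{100} = 5
\;<\; u(40) = \sqrt{40} \approx 6.32.
\]
```

The agent satisfies the vNM axioms and maximizes expected utility, yet still makes the risk-averse choice of the sure $40.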
Apt time to plug an analysis I did a while ago of paying farmers in India not to burn their crop stubble. It's primarily a (pretty effective) air quality intervention, but I pulled together some numbers that suggest it also averts GHGs at $36/ton of CO2e, which would probably satisfy a lot of climate funders!
I'm referring to why it doesn't get brought up by opponents of the Trump tariffs, who clearly do not think that trade is zero-sum (unless they somehow think that tariffs benefit foreigners and hurt Americans). The liberal American opposition to tariffs is totally silent on their effects abroad.
The incidence of tariffs on manufactured goods likely falls partly on manufacturing workers abroad, which is one way they can increase poverty, though probably not extreme $1/day poverty. Regardless, the general point goes through: they will reduce the incomes of a generally not-well-off group of people.
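For the incidence claim, here's a rough sketch of the standard partial-equilibrium logic (the textbook formula, not an estimate for these particular tariffs):

```latex
% Share of a tariff borne by foreign producers in a competitive market,
% with elasticities expressed in absolute value.
\[
\text{foreign producers' share} \;\approx\; \frac{\varepsilon_D}{\varepsilon_D + \varepsilon_S}
\]
```

where $\varepsilon_D$ is the elasticity of import demand and $\varepsilon_S$ the elasticity of export supply. The less elastic side bears more of the burden, so to the extent exporters can't quickly redeploy factories and workers, a larger share lands on foreign producers and their employees.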
Love this analysis, and I've been wondering why no one talks about it. There are two motivations that make sense to me for why analysts don't talk about this:
It bothers me to not be able to distinguish between these.
If you haven't read it, this article makes a convincing argument that containing harmful Western policies should be a main focus of development policy.
Shooting from the hip here: if the future of AI progress is inference-time scaling, that seems inherently "safer"/less prone to power-seeking. Expensive inference means a model is harder to reproduce (e.g. it can't just upload itself somewhere else, because without heavy compute the new copy is relatively impotent) and harder for rogue actors to exploit (since they would also need to secure compute for every action they make it take).
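To put the same intuition in toy symbolic form (my notation, purely illustrative):

```latex
% Back-of-envelope: cost for a rogue actor (or a self-exfiltrated copy)
% to run a campaign of N competent actions when each action requires
% per-action inference compute costing c.
\[
\text{cost of campaign} \;\approx\; N \cdot c_{\text{inference}}
\]
```

Under train-time scaling, $c_{\text{inference}}$ is small, so the binding constraint is a one-time theft of the weights; under inference-time scaling, $c_{\text{inference}}$ is large, so every marginal action requires ongoing access to scarce, monitorable compute.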
If this is true, it suggests that AI safety could be advanced by capabilities research into AI architecture that can be more powerful yet also more constrained in individual computations. So is it true?
I don't understand this view. Would they want their initiative to be run by incompetent people? If not, in what world do they not train their staff? The fact that they also tacked on an expectation that the staff would not migrate doesn't mean that expectation was pivotal in their decision.