Yeah I don't quite understand that line of argument. Naively, it seems like a bait-and-switch, not unlike "journalists don't write their own terrible headlines."
Possibly a tangential point, but lots of people in many EA communities think that accelerating economic growth in the US is a top use of funds.
Hmm I think the link does not support your claim.
Why would value be distributed over some suitable measure of world-states in a way that can be described as a power law specifically (vs some other functional form where the most valuable states are rare)?
I agree with this. I'm probably being too much of a pedant, but it's a slight detriment to our broader epistemic community that people use "power law" as a shorthand for "heavy-tailed distribution" or just "many OOMs of difference between best and worst/median outcomes." I think it makes our thinking a bit less clear when we try to translate back and forth between intuitions and math.
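To make the distinction concrete (my own illustration, not from the post; $\alpha$, $\mu$, $\sigma$ are generic parameters):

$\Pr(X > x) \propto x^{-\alpha}$ (Pareto: a true power law; log-survival is linear in $\log x$)

$\Pr(X > x) = 1 - \Phi\!\left(\frac{\ln x - \mu}{\sigma}\right)$ (lognormal: heavy-tailed but not a power law; log-survival falls off roughly quadratically in $\log x$)

Both can put many OOMs between the median and the best outcomes; only the first is a power law in the strict sense.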
Thanks a lot for this post! I tried addressing this earlier by exploring "extinction" vs "doom" vs "not utopia," but your writing here is clearer, more precise, and more detailed. One alternative framing I have for describing the "power laws of value" hypothesis, as a contrast to your 14-word summary:
"Utopia" by the lights of one axiology or moral framework might be close to worthless under other moral frameworks, assuming an additive axiology.
It's 23 words and has more jargon, but I think it describes my own confusions better. In particular, I don't think you need to believe in "weird stuff" to get to many OOMs of difference between "best possible future" and "realistic future", unless additive/linear axiology itself is weird.
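One way to see why the additive assumption is doing the work (my own sketch, not the original poster's; $N$, $v_A$, $\epsilon$ are illustrative symbols): under an additive axiology, total value is $V = \sum_{i=1}^{N} v_i$ over $N$ resource units (star systems, minds, etc.). If framework A assigns per-unit value $v_A$ to its favored use of resources but only $\epsilon \, v_A$ (for tiny $\epsilon$) to framework B's favored use, then by A's lights the ratio between its own best future and a B-optimized future is roughly $\frac{N v_A}{N \epsilon v_A} = 1/\epsilon$, which is many OOMs whenever $\epsilon$ is small. Nothing exotic is needed beyond additivity plus per-unit value disagreement.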
As one simple illustration, humanity can be either correct or incorrect in choosing to colonize the stars with biological bodies rather than digital emulations. Either way, if the choice is wrong, you lose many OOMs of value.
To me, "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" does not necessitate them building AGI at all. Indeed the same mission statement can be said to apply to e.g. Redwood Research.
Further evidence for this view comes from OpenAI's old merge-and-assist clause, which indicates that they'd be willing to fold and assist a different company if the other company is a) within 2 years of building AGI and b) sufficiently good.
Does anybody know whether the Trump EO instituting "most favored nation" guarantees on drugs sold in the US will affect prices in developing countries or just rich industrialized ones?
The text of the EO implies that it's to address imbalances between the US and other developed countries (AI summary).
However, as stated, "most favored nation" would seem to imply that the US will only purchase drugs at the lowest prices available anywhere in the world.
Taken literally, this ~prices out poorer countries from any drugs simultaneously sold in the US and elsewhere.
To be clear, I think "MFN within 'developed' countries" is still quite bad: the US has >5x the GDP per capita of the World Bank cutoff for "high income" countries, so in practice many people today will still be priced out, and in the long run lower pharma profits -> less new drug discovery -> millions or more will die, in the US and elsewhere, from diseases like cancers and Alzheimer's that are not preventable today. But there's a limit to the badness if the policy stays within developed countries, whereas millions or more people will die in the short term if the rest of the world is priced out of drugs they can't afford at US prices.
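For a rough sense of the >5x figure (approximate public numbers, not from the original comment): the World Bank high-income threshold is on the order of \$14k GNI per capita versus roughly \$80k GDP per capita for the US, so $\frac{80{,}000}{14{,}000} \approx 5.7$. So if manufacturers respond to MFN by raising prices abroad rather than cutting them in the US, many people even in nominally "high income" countries could still be priced out.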