Uncorrelated Returns

27 karma · Joined Aug 2022

Comments (5)

  1. Could you explain why you think China will not rise successfully, without deferring to experts (unless those experts have a testable prediction record)? The Chinese government has lifted more people out of poverty in the last 50 years than any other government; the pace of economic growth is staggering. Chinese GDP per capita was around $100 in 1970; today it stands at roughly $10k. Profit is not a dirty word in China, whereas growth and profit have become politicized in the US (i.e., economic growth faces strong cultural headwinds in the US but not in China). China's population is roughly four times that of the United States. Its economy keeps expanding into more and more industries, rather than remaining the low-cost-of-labor manufacturing destination the US likes to paint it as, resulting in greater technological capabilities; for example, the most exciting new industrial robotic arm companies are, in my opinion, predominantly Chinese. China's geopolitical influence is also growing faster than the US's, because China is strategic about its geopolitical interests, planning and executing over 10-20 year horizons, whereas the US seems to be constantly reshuffling its State Department and other organs, executing on 2-3 year plans at best. These are most of the relevant factors behind my claim. Of course it's very hard to predict geopolitical events, but at minimum we should take the possibility seriously.
  2. Weapons technology may incorporate biology, AI, and nuclear physics, but judging by the history of military technology in WW1 and WW2, it's quite difficult to predict what the most cost-effective killing machines will look like. My bet is on cheap killer robots rather than bio/AI/nuclear weapons: killer robots could, for example, kill off most remaining humans far more cost-effectively and thoroughly than bioweapons or nuclear weapons, and that possibility is not factored into bio/AI/nuclear x-risk estimates. There's a big difference between accidental technological failure and governments bringing their full resources to bear on killing the maximal number of people. It's very hard to predict how effective governments would be at the latter, because they are live players and can push technology forward very quickly (and historically have done so). For example, did you know that most engineering research at MIT during WW2 went toward the war effort? Imagine most smart engineers being pushed toward weapons research. That level of resourcing has been absent for 50+ years, and I think it's very hard to predict how bad things could get - but my guess is they could get very, very bad.
  3. Thanks for the feedback, and sorry for contributing to epistemic pollution around the topic. Reading this again, I agree it's not nuanced enough, but I still stand by my core argument: US elite culture is shifting toward a confrontational lens on China and away from a collaborative one, and if this trend continues it has huge implications for humanity over the next 20-30 years. Separately, I'm not persuaded by most infohazard arguments, so I have a hard time understanding why this would be risky or sensitive to discuss in public: could you please explain your perspective?

Leaders of countries and elites (military leaders, political leaders, ...) decide whether or not to go to war. The claimed reasons may or may not have anything to do with the actual reasons: a stated justification is just a tool for getting to some desired outcome.

Most wars suck for both parties, but groups of people can easily end up all believing the same thing because of excessive deference, resulting in poor decision-making and, e.g., a war.

Imagine two groups of teenagers quarrelling on the high school playground: if they do decide to fight, was it for strategic or ideological reasons? Usually it's the result of escalating rhetoric, or of one party being unusually aggressive. This, I think, is a good model for war as well.

US and Chinese elites today look like a bunch of teenagers quarrelling. The confrontation has little to do with reality (ideology or strategy), and much more to do with the social dynamics of the situation, I think.

Hmm, I don't think I agree.


I think the most powerful form of compounding in the EA movement context is in people and reputation, which are upstream of money and influence. Great people + great reputation -> more great people + more great reputation.


Most endeavours over long periods of time have some geometric/compounding aspects, and some arithmetic aspects.


But usually, I think, the compounding aspects matter more: compounding is how you avoid ruin (ruin isn't a big deal in a purely arithmetic setting, unless you use log utility - which is equivalent to caring about compounding), and it's how you get really big returns.
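To make the ruin point concrete, here is a minimal sketch in Python (not from the original comment; the 1.5x/0.6x payoffs are made-up numbers for illustration). A repeated gamble can look attractive on an arithmetic, one-shot view while compounding toward ruin - which is exactly what log utility penalizes.

```python
import random
import statistics

# Hypothetical repeated gamble: each period, resources are multiplied
# by 1.5 or 0.6 with equal probability.
# Arithmetic mean return: (1.5 + 0.6) / 2 = 1.05   -> looks worth taking once.
# Geometric mean return: (1.5 * 0.6) ** 0.5 ~= 0.95 -> compounded wealth decays.

random.seed(0)

def final_wealth(periods: int = 200) -> float:
    wealth = 1.0
    for _ in range(periods):
        wealth *= random.choice([1.5, 0.6])
    return wealth

runs = [final_wealth() for _ in range(10_000)]
print(f"median final wealth: {statistics.median(runs):.2e}")  # far below the starting 1.0
print(f"share effectively ruined (<1% of start): {sum(w < 0.01 for w in runs) / len(runs):.0%}")
```

A log-utility agent evaluates E[log(return)] = 0.5*log(1.5) + 0.5*log(0.6) < 0 and declines the gamble; a linear-utility agent sees only the 1.05 arithmetic mean, accepts, and compounds into ruin.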


Successful countries weren't built in a day. Successful charities weren't built in a day. Many things have to go right, and some things must not happen, for a movement to succeed. That's essentially just compounding. 

The Founders Pledge climate fund's stated objective is to "sustainably reach net-zero emissions globally". 

A great example of an insidious correlation in this fund: all of its grants seem to share the belief that mitigation is the right approach. What about funding work that helps people adapt to climate change, instead of mitigating it?

For example, could we invent cheap air conditioning units that anyone in the world can afford, to keep people and crops cool as populations migrate away from today's coastal areas?

EDIT: let me try to be clearer, since this answer was downvoted twice -- upon seeing the fund, I asked myself, "what belief seems to be shared by all of these investments?" That then led me to the above thought. This is a much better intuition pump than "what should this fund be uncertain about?" I think that's the difference between uncertainty and insidious correlation, and I think you're interpreting insidious correlation as just another name for uncertainty.

Very interesting - what has FEM's cost-effectiveness looked like so far, and how does FEM expect it to evolve over time?