
JoelMcGuire

2873 karma · Joined Jul 2020

Bio

I am a researcher at the Happier Lives Institute. In my work, I assess the cost-effectiveness of interventions in terms of subjective wellbeing.

Posts: 19

Comments: 157

The prospect of a nuclear conflict is so terrifying I sometimes think we should be willing to pay almost any price to prevent such a possibility. 

But when I think of withdrawing support for Ukraine or Taiwan to reduce the likelihood of nuclear war, that doesn't seem right either -- as it'd signal that we could be threatened into any concession if nuclear threats were sufficiently credible.

How would you suggest policymakers navigate such terrible tradeoffs?

How much do you think the risk of nuclear war would increase over the century if Iran acquired nuclear weapons? And what measures, if any, do you think are appropriate to attempt to prevent this or other examples of nuclear proliferation?

“Thank you for the comment. There’s a lot here. Could you highlight what you think the main takeaway is? I don’t have time to dig into this at present, so any condensing would be appreciated. Thanks again for the time and effort.” ??

I believe that large tech companies are, on average, more efficient at converting talent into market cap value than small companies or startups are. They typically offer higher salaries, for one.

This may be true for market cap, but let's be careful when translating this to do-goodery. E.g., wages don't necessarily track productivity. Higher wages could also reflect higher rents, which seems plausibly self-reinforcing, as incumbents draw (and shelve) innovative talent from smaller firms. A quote from a recent paper by Akcigit and Goldschlag (2023) is suggestive:

"when an inventor is hired by an incumbent, compared to a young firm, their earnings increase by 12.6 percent and their innovative output declines by 6 to 11 percent."

I don't have a good grasp of the literature. Still, the impression I got hanging around economists interested in innovation during my PhD led me to believe the opposite: that smaller firms were more innovative than larger firms, and the increasing size of firms over the past few decades is a leading candidate for explaining the decline in productivity and innovation.

Speaking from my own experience working in a tiny research organisation, I wish I could have started as a researcher with the structure and guidance of a larger organisation, but I really doubt I'd have pursued research as important if we hadn't tried to challenge other, larger organisations. Do you feel differently with QURI?

I don't think this is right -- "Russia" doesn't take actions, Vladimir Putin does. Putin is 70, so he seems unlikely to be in power once Russia has recovered from the current war, and there's some evidence that other Russian elites didn't actively want the war, so I don't think it's right to generalize to "Russia".

Even if it were true that many elites were anti-war before the invasion, I think the war has probably accelerated a preexisting process of ideological purification. So even after Putin is gone, I think the elites will be at least as likely to say "We didn't go far enough" as "We went too far". I expect at least some continuity in the willingness to go to war under Putin's successor.

A US-China war would be fought almost entirely in the air and at sea; Ukraine is fighting almost entirely on land. The weapons Ukraine has received are mostly irrelevant for a potential US-China war; e.g., the Marines have already decided to stop using tanks entirely, and the US being capable of shipping the vast amounts of artillery ammunition being consumed in Ukraine to a combat zone would require the US-China war to already be essentially won.

Why would it be fought almost entirely in the air and at sea? That sounds like a best- or worst-case scenario, i.e., China isn't able to actually land, or China achieves air and naval superiority around Taiwan. The advanced weapons systems Ukraine has received seem very relevant: Storm Shadow, HIMARS, Abrams and Leopard tanks, Patriot, Javelin, etc. And shipping weapons doesn't seem to require the war to be essentially won, just that the US can achieve local air and naval superiority over a part of Taiwan with a harbour. Complete dominance of the skies in a conflict is rare.

Weapons being sent to Ukraine are from drawdown stocks, which Taiwan itself hasn't previously been eligible to receive. Taiwan instead purchases new weapons, but there are many, many other countries purchasing similar types of weapons, and if the US were to become concerned, I'd expect it to prioritize both Ukraine and Taiwan over e.g. Saudi Arabia or Egypt.

My concern is that these US stocks seem to be regenerating very, very slowly.

Hah! Yeah, stepping back, I think these events are a distraction for most people. Especially if they worsen one's mental health. For me, reflecting on the war makes me feel so grateful and lucky to live where I do. 

Another reason to pay attention is when it seems like it could shortly and sharply affect the chances of catastrophe. At the beginning of the war, I kept asking myself, "At what probability of nuclear war should I: make a plan, consider switching jobs, move to Argentina, etc." But I think we've moved out of the scary zone for a while. 

Fair jabs, but the PRC-Taiwan comparison was because it was the clearest natural experiment that came to mind where different bits of a nation (shared language, culture, etc.) were somewhat randomly assigned to authoritarianism or pluralistic democracy. I'm sure you could make more comparisons with further statistical jiggery-pokery. 

The PRC-Taiwan comparison is also because, imagining we want to think of things in terms of life satisfaction, it's not clear there'd be a huge (war-justifying) loss in wellbeing if annexation by the PRC only meant a relatively small dip in life satisfaction. This is the possibility I found distressing. Surely there's something we're missing, no?

I think inhabitants of both countries probably have similar response styles to surveys with these scales. Still, if a state is totalitarian, we should probably not be surprised if people are suspicious of surveys. 

Sure, Taiwan could be invaded, and that could put a dampener on things, but, notably, Taiwan is more satisfied than its less invasion-prone peers of similar wealth and democracy: Japan and South Korea.

I expect one response is, "well, we shouldn't use these silly surveys". But what other existing single type of measure is a better assessment of how people's lives are going? 

Taiwan has about a 0.7-point advantage on a 0-to-10 life satisfaction scale, and, most recently, 5% more of its population reports being happy.

I agree that the agency of newer NATO members (or Ukraine) has been neglected. Still, I don't think this was a primary driver of underestimating Ukraine's chances -- unless I'm missing what "agency" means here. 

I assume predictions were dim about Ukraine's chances at the beginning of the war primarily because Russia and the West had done an excellent job of convincing us that Russia's military was highly capable. E.g., I was disconcerted by the awe/dread with which my family members in the US Army spoke about Russian technical capabilities across multiple domains. 

That said, I think some of these predictions came from a sense that Ukraine would just "give up". In which case, missing the agency factor was a mistake. 

What’s the track record of secular eschatology? 

A recent SSC blog post depicts a dialogue about eugenics. This raised the question: what is the track record of communities of reasonable people at identifying the risks of past predicted catastrophes?

As noted in the post, at different times: 

  • Many people were concerned about overpopulation posing an existential threat (cf. the population bomb, discussed at length in The Wizard and the Prophet). It now seems widely accepted that the risk overpopulation posed was overblown. But this depends on how contingent the green revolution was. If there hadn’t been a Norman Borlaug, would someone else have tried a little bit harder than others to find more productive cultivars of wheat?
  • Historically, there also appeared to be more worry about the perceived threat posed by a potential decline in population IQ. This flowed from the reasonable-sounding argument “Smart people seem to have fewer kids than their less intellectually endowed peers. Extrapolate this over many generations, and we have an idiocracy that, at best, will be marooned on Earth or, at worst, will no longer be capable of complex civilization.” I don’t hear these concerns much these days (an exception being a recent Clearer Thinking podcast episode). I assume the dismissal would sound something like “A. The Flynn effect.[1] B. If this exists, it will take a long time to bite into technological progress, and by the time it does pose a threat, we should have more elegant ways of increasing IQ than selective breeding. Or C. Technological progress may depend more on total population size than average IQ, since we need a few von Neumanns rather than hordes of B-grade thinkers.”
  • I think many EAs would characterize global warming as tentatively in the same class: “We weren’t worried enough when action would have been high leverage, but now we’re relatively too worried because we seem to be making good progress (see the decline in solar cost), and we should predict this progress to continue.” 
  • There have also been concerns about the catastrophic consequences of: A. Depletion of key resources such as water, fertilizer, oil, etc. B. Ecological collapse. C. Nanotechnology(???). These concerns are also considered overblown in the EA community relative to the preoccupation with AI and engineered pathogens. 
  • Would communism's prediction about an inevitable collapse of capitalism count? I don't know how harmful this would have been considered in the short run, since most attention was on the utopia it would afford.

Most of the examples I’ve come up with make me lean towards the view that “these past fears were overblown because they consistently discounted the likelihood that someone would fix the problem in ways we couldn't yet imagine.”

But I’d be curious to know if someone has examples or interpretations that lean more towards "We were right to worry! And in hindsight, these issues received about the right amount of resources. Heck, they should have got more!"

What would an ideal EA have done if teleported back in time and mindwiped of foresight when these issues were discovered? If reasonable people acted in folly then, and EAs would have acted in folly as well, what does that mean for our priors?
 

  1. ^

    I can't find an OWID page on this, despite Google image searches making it apparent that one once existed. Perhaps it didn't feed the right conversations to have allowed people to compare IQs across countries?
