Summary: while Piper's articles for Vox seem generally valuable and worth sharing, other authors are putting out very mixed content which can be useless or vaguely harmful to the world, and which should not be shared without first checking whether the overall message is sound. Future Perfect also seems to be an inferior source of political analysis compared to some other outlets.

Update: I found that the main article I wrote about here (the Hitler one) was from 2015 and was just updated and republished. In that context, it's not such a big deal that it wasn't a good article. I'm not calling them "poor" articles anymore; I'll just say they are flawed.

24 years ago today, the world came disturbingly close to ending

Dylan Matthews' claim that nuclear war would cause "much or all of humankind" to suddenly vanish is unsubstantiated. The idea that billions of people worldwide will die from nuclear war is not supported by models with realistic numbers of nuclear warheads. "Much" is a very vague term, but speculation that every (or nearly every) human will die is a false alarm. Now that is easy to forgive, as it's a common belief within EA anyway and probably someone will try to argue with me about it.

But Matthews' takeaway from this problem is that we have to "choose" to not "continue to have nuclear weapons." This is an excessively simplistic attitude toward nuclear security. A unilateral elimination of American nuclear weapons (is that what he means?) would undermine if not destroy our ability to control nuclear security and nonproliferation among other major powers, and would severely jeopardize the interests and security of the West. It's obviously a political impossibility, so wishing for it is harmless, but it's still a waste of time and space. Meanwhile, bilateral and multilateral reduction in nuclear arms is a very long process that US and Russian leaders have been pursuing for decades anyway; it's not clear how useful or successful it really is, as the literature on the viability of arms control agreements is deeply conflicted.

Kamala Harris’s controversial record on criminal justice, explained

German Lopez's commentary on Harris's career has no direct connection to the 'Future Perfect' goal of "Finding the best ways to do good". The entire article implicitly assumes that "progressive" criminal justice causes are the Good ones, and therefore that many of Harris's actions were bad. Yet Lopez doesn't actually evaluate any of Harris's actions with rigor; he lazily uses 'progressive' and 'tough-on-crime' as substitutes for whether an action actually made the world better (or not). Lopez also ignores the legal responsibilities at stake, moralistically assuming that she and her department should always have taken the naively moral side rather than representing the interests of her state as well as the law allows (which was her job). And if we are going to evaluate presidential candidates, there are much more important issues to look at than criminal justice. What might they do with foreign aid, for instance?

The philosophical problem of killing baby Hitler, explained

This one is just bizarre. Matthews discusses whether or not it would be right to kill baby Hitler. After spending half the article on Ben Shapiro and the grandfather paradox, Matthews brings up the issue of expected consequences and problems of prediction - something where the silly thought experiment of killing baby Hitler can be used to illuminate real philosophical issues. But his treatment of this issue is very poor.

First, Matthews appears to subscribe to the "Great Man Theory" of history, under which whole national destinies and international trends are decided by the unique geniuses and goals of national leaders; this theory is widely considered discredited, and some of Matthews' hypotheses about alternate realities show a dismissal of more structural factors (like economics, geography, and technology) which are known to be more reliable determinants of trends in human rights, wars, and international relations. At the same time, Matthews thinks "We don't know how much weaker, or stronger, the Nazi Party would've been," which is honestly laughable to anyone who knows the story of Hitler's role in the Nazi Party from the time he became its 55th member. Of course they would have been weaker, even if we don't know exactly how much weaker.

But Matthews seems to think that knowing the specific alternate history of world events is required to judge whether it would have been better or worse. This is mistaken. We might make more general arguments that the expected utility of one world is greater than that of another (e.g., "worlds with fewer aspiring violent antisemitic politicians tend to be better than worlds with more", or something of the sort) without appealing to a specific timeline. We might argue that the real history was worse than should be expected for that time period, so that killing Hitler would probably improve things just in virtue of regression to the mean.

Matthews seems to appeal to the fact that there's always some possibility that things would be worse, but this is blatantly misguided to anyone who understands the basic concept of expected value. The mere fact that things might be worse does not mean that the status quo is preferable; the better possibilities must be evaluated and compared. Matthews signals a stubborn refusal to take on this task, but it is an important aspect of Effective Altruism. EA is about pushing more rigor and more critical thought into specifying and evaluating possible impacts, not giving up whenever naive empiricism has to be replaced by conceptual models. EA should encourage people to think and make continuously better predictions, not tell them that it's "totally impossible" to make a useful judgement. Of course, maybe at the end of the analysis we won't be able to make a judgement either way; but that should happen after a serious analysis, not before.

Matthews doesn't refute or even acknowledge this obvious counterargument (widely repeated within EA, and slowly being reached by the philosophical literature on cluelessness and the principle of indifference). Of course there are thinkers in EA who give little weight to conceptual models, but those people have specific reasons for it, and they merely discount the conceptual models - they don't throw their hands up and assert a profound level of collective ignorance, dodging the issue entirely.
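To make the expected-value point concrete, here is a minimal sketch; all probabilities and utilities are invented for illustration, not estimates of the actual case:

```python
# Toy expected-value comparison (all numbers invented for illustration).
# The mere possibility of a worse outcome doesn't settle the question;
# the probability-weighted average over outcomes does.

# Each option is a list of (probability, utility) pairs on an arbitrary scale.
STATUS_QUO = [(1.00, -100)]  # the actual history, taken as certain
INTERVENE = [
    (0.70, -20),   # likely: a weaker or absent Nazi Party, much less harm
    (0.25, -100),  # possible: roughly the same harm via other actors
    (0.05, -150),  # unlikely: things turn out even worse
]

def expected_utility(lottery):
    """Probability-weighted average utility of a list of (p, u) pairs."""
    return sum(p * u for p, u in lottery)

print(expected_utility(STATUS_QUO))  # -100.0
print(expected_utility(INTERVENE))   # -46.5
# Intervening carries a 5% chance of a worse world, yet has higher
# expected utility; the bare possibility of worse outcomes decides nothing.
```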

Microsoft’s $500 million plan to fix Seattle’s housing problem, explained

Matthews doesn't really explain Microsoft's plan and its possible impacts in detail; he devotes more space to debating whether Microsoft is engaging in True Philanthropy or just making a profit. Housing is not typically considered a top EA cause, though 80,000 Hours does have a reasonably positive writeup on land use reform. We can contrast Matthews' article with Piper's article about Bezos' philanthropy, which effectively uses it to segue into a discussion of cause prioritization and plugs for global health initiatives.

Study: Cory Booker’s baby bonds nearly close the racial wealth gap for young adults

Booker's race-blind policy is highlighted as a way to fix the black-white wealth gap because it disproportionately helps blacks. This is a strange thing to focus on, compared to examining its effects on poverty more generally. Matthews should spend his time broadly investigating how to make people's lives better, rather than fixating on a particular race. Unequal racial distribution can have important secondary effects, but that should take a back seat to the basic moral imperative to get as many people as possible out of poverty. And perhaps we should get more journalism on something else that would do even more to reduce the black-white income gap: reducing poverty in sub-Saharan Africa.

What Alex Berenson’s new book gets wrong about marijuana, psychosis, and violence

Lopez reviews a book and argues that marijuana is not very harmful. Skimming it doesn't reveal any obvious problems (I didn't read the original book, anyway), and it does carry a good implicit message about looking at scientific data rather than anecdotes, but it mostly seems misplaced, with no serious connection to EA. The recreational benefits of smoking marijuana are minor compared to the moral imperative to fix things like poverty and x-risks, and Lopez himself helpfully points out that marijuana criminality is not a major part of our mass incarceration problem.

There are also flawed articles which appear in the "Future Perfect" column but don't bear the tag. I'm not sure how much of a problem that is.

In sum, it seems that Piper's articles for Vox are generally valuable and ought to be shared. But other authors seem to be putting out very mixed content which should not be shared without first reading it and checking whether the overall message is sound. Matthews' article about Oxfam's inequality report seems good.

For our own EA purposes, if we want informal analysis and commentary about politics and criminal justice, the following sources seem generally better to read than these articles: blogs by EAs, op-eds and blogs from good economists and political scientists (like Krugman, Caplan, Hanson, etc.), columns at respectable think tanks like Brookings, and any particular wonk or commentator who you happen to know and trust.

Comments (25)

I would distinguish between poor journalism and not taking a very EA perspective. We shouldn't conflate the two. It's worth noting that Future Perfect is inspired by EA, rather than an EA publication. I also think it's important for the journalists writing to be able to share their own perspective even if it disagrees with the EA consensus. That said, some articles I've seen have been overly ideological or unnecessarily polarising and that worries me.

Agreed. I don't see any "poor journalism" in any of the pieces mentioned. A few of them would be "poor intervention reports" if we chose to judge them by that standard.

"Finding the best ways to do good" denotes intervention reporting.

I'm not trying to foment any kind of outrage against them and I don't expect them to change much, I just want to make sure that people don't treat the content very authoritatively.

I think I'd challenge this goal. If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

1. Vox seems at least somewhat open to change: Matthews and Ezra seem genuinely pretty EA, they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity. Even if they place less-than-ideal priority on EA goals vs. progressivism, profit, etc., they still clearly place some weight on pure EA.

2. We're unlikely to convince Future Perfect's readers that Future Perfect is bad/wrong and we in EA are right. We can convince core EAs to discredit Vox, but that's unnecessary--if you read the EA Forum, your primary source of EA info is not Vox.

Bottom line: non-EAs will continue to read Future Perfect no matter what. So let's make Future Perfect more EA, not less.

If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

Tractability matters. Scott Alexander has been critiquing Vox for years. It might be that improving Vox is a less tractable goal than getting EAs to share their articles less.

they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity.

My understanding is that Future Perfect is funded by the Rockefeller Foundation. Without knowing the terms of their funding, I think it's hard to ascribe either virtue or vice to Vox. For example, if the Rockefeller Foundation is paying them per content item in the "Future Perfect" vertical, I could ascribe vice to Vox by saying that they are churning out subpar EA content in order to improve their bottom line.

3. I have no personal or inside info on Future Perfect, Vox, Dylan Matthews, Ezra Klein, etc. But it seems like they've got a fair bit of respect for the EA movement--they actually care about impact, and they're not trying to discredit or overtake more traditional EA figureheads like MacAskill and Singer.

Therefore I think we should be very respectful towards Vox, and treat them like ingroup members. We have great norms in the EA blogosphere about epistemic modesty, avoiding ad hominem attacks, viewing opposition charitably, etc. that allow us to have much more productive discussions. I think we can extend that relationship to Vox.

Using this piece as an example, if you were criticizing Rob Wiblin's podcasting instead of Vox's writing, I think people might ask you to be more charitable. We're not anti-criticism -- we're absolutely committed to truth and honesty, which means seeking good criticism -- but we also have well-justified trust in the community. We share a common goal, and that makes it really easy to cooperate.

Let's trust Vox like that. It'll make our cooperation more effective, we can help each other achieve our common goal, and, if necessary, we can always take back our trust later.

kbog:
Update: Upon further evidence I've decided that this model is more wrong than right; specifically, point 2 is not really the case. It is more a matter of implicit bias among their staff; they are more genuine and less Machiavellian than I thought. So cooperation is easier.

My mental model is this -

1. Vox has produced flawed and systematically biased media for years. (evidence: their previous political reporting and the various criticisms it's received, similar to Breitbart for instance)

2. Vox knows that they have produced flawed and systematically biased media for years, but continues doing it anyway because it maximizes their revenue and furthers their political goals. (evidence: they're not idiots, the criticisms seem sound, and they do not retract or walk back their contested media)

3. If Vox cared significantly about the EA movement, they wouldn't produce flawed and systematically biased media in the EA column.

For this reason I do not give them the benefit of the doubt, though I'm aware that enough messy background and priors are involved to make this disagreement difficult to resolve in any concise conversation.

I agree with the other respondent that Dylan Matthews and Ezra Klein genuinely seem to care about EA causes (Dylan on just about everything, even AI risk [a change from his previous position], and Ezra at least on veganism). Hiring Kelsey Piper is one clear example of this -- she had no prior journalism experience, as far as I'm aware, but had very strong domain knowledge and a commitment to EA goals. Likewise, the section's Community Manager, Sammy Fries, also had a background in the EA community.

It would have been easy for Vox to hire people with non-EA backgrounds who had more direct media experience, but they did something that probably made their jobs a bit harder (from a training standpoint). This seems like information we shouldn't ignore (though of course, for all I know, Sammy and Kelsey may have been the best candidates even without their past EA experience).

Really good journalism is hard to produce, and just like any other outlet, Vox often succumbs to the desire to publish more pieces than it can fact-check. And some of their staff writers aren't very good, at least not as good as we might wish.

But still, because of Future Perfect, there has been more good journalism about EA causes in the last few months than in perhaps the entirety of journalism before that time. The ratio of good EA journalism to bad is certainly higher than it was before.

There is a model you could adopt under which the raw amount of bad journalism matters more than the good/bad ratio, because one bad piece can cause far more damage than any good piece can undo, but you don't seem to have argued that Vox is going to damage us in that sense, and it seems like their most important/central pieces about core EA causes generally come from Kelsey Piper, who I trust a lot.

I agree that some of Vox's work is flawed and systematically biased, but they've also produced enough good work that I hope to see them stick around. What's more, the existence of Future Perfect may lead to certain good consequences, perhaps including:

  • Other news outlets hiring people with EA backgrounds to write on similar topics, following in Vox's footsteps.
  • News outlets using Future Perfect as a source when they write about EA issues (I'd much prefer that a journalist learning about AI risk start with Piper rather than other mass-media articles on the subject).
  • Other EA people working with Vox and gaining valuable insight into how the media works; even if it turns out that we should try not to engage with the media whenever possible, at least having a few people who understand it seems good.
[anonymous]:

I think Vox, Ezra Klein, Dylan Matthews etc would disagree about point 2. Not to put words in someone else's mouth, but my sense is that Ezra Klein doesn't think that their coverage is substantially flawed and systematically biased relative to other comparable sources. He might even argue that their coverage is less biased than most sources.

Could you link to some of the criticisms you mentioned in point 1? I've seen others claim that as well on previous EA Forum posts about Future Perfect, and I think it would be good to have at least a few sources on this. Many EAs outside the US probably know very little about Vox.

I agree generally with your criticisms. It's not particularly surprising, given the frequency with which they publish and the variation in quality of Vox's reporting in general.

I would say your advice to read and check the overall soundness of the message before sharing could probably be broadly applied - and strikes me as a bit self-evident. Do you feel like these poor-quality FP articles are getting shared widely? Do you have reason to believe they are being shared without the sharer reading them?

I have definitely heard people referring to Future Perfect as 'the EA part of Vox' or similar.

Really valuable post, particularly because EA should be paying more attention to Future Perfect--it's some of EA's biggest mainstream exposure. Some thoughts in different threads:

1. Writing for a general audience is really hard, and I don't think we can expect Vox to maintain the fidelity standards EA is used to. It has to be entertaining, every article has to be accessible to new readers (meaning you can't build up reader expectations over time, like a sequence of blog posts or a book would), and Vox has to write for the audience they have rather than wait for the audience we'd like.

In that light, look at, say, the baby Hitler article. It has to be connected to the average Vox reader's existing interests, hence the Ben Shapiro intro. It has to be entertaining, so Matthews digresses onto time travel and The Matrix. Then it has to provide valuable information content: an intro to moral cluelessness and expected value.

It's pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party. To me, dropping those isn't shoddy journalism; it's a valuable insight into how to engage your actual readers, not the ideal reader.

Bottom line: People who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and 7x more likely to hold a Ph.D. That's why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can't rely on econbloggers and think-tanks to reach most people. We need easier explanations, and I think Vox provides that well.

...

(P.S. A small matter: Matthews does not say that it's "totally impossible" to act in the face of cluelessness, contrary to what you implied -- he says the opposite. And then: "If we know the near-term effects of foiling a nuclear terrorism plot are that millions of people don't die, and don't know what the long-term effects will be, that's still a good reason to foil the plot." That's a great informal explanation. Edit to correct that?)

But if we actually want EA to go mainstream, we can't rely on econbloggers and think-tanks to reach most people. We need easier explanations, and I think Vox provides that well.

Is "taking EA mainstream" the best thing for Future Perfect to try & accomplish? Our goal as a movement is not to maximize the people of number who have the "EA" label. See Goodhart's Law. Our goal is to do the most good. If we garble the ideas or epistemology of EA in an effort to maximize the number of people who have the "EA" self-label, this seems like it's potentially an example of Goodhart's Law.

Instead of "taking EA mainstream", how about "spread memes to Vox's audience that will cause people in that audience to have a greater positive impact on the world"?

Agreed. If you accept the premise that EA should enter popular discourse, most generally informed people should be aware of it, etc., then I think you should like Vox. But if you think EA should be a small elite academic group, not a mass movement, that's another discussion entirely, and maybe you shouldn't like Vox.

I was referring to the impossibility of weighing highly uncertain possibilities against each other.

It's pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party.

Well, you don't have to explain it... you just have to not contradict it.

Bottom line: People who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and 7x more likely to hold a Ph.D. That's why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can't rely on econbloggers and think-tanks to reach most people. We need easier explanations, and I think Vox provides that well.

But when looking through most of these articles, I don't see plausible routes to growing the EA movement. Some of them talk about things like GiveWell charities and high-impact actions people can take, and occasionally they mention the EA community itself. But many of them, especially these political ones, have no connection. As you say, this isn't a sequence of blog posts that someone is going to follow over time. They'll read the article, see an argument about marijuana or whatever that happens to be framed in a nicely consequentialist sort of manner, and then move on.

This is kind of worrying to me - I normally think of Vox as a rational ideological outlet that knows how to produce effective media, often misleading, in order to canvass support for its favored causes. Yet they don't seem to be applying much of this capacity towards bolstering the EA movement itself, which suggests that it's really not a priority for them, compared to their favored political issues.

Great discussion here. I'm trying to imagine how most people consume these articles. Linked from the Vox home page? Shared on Facebook or Twitter? Do they realize they aren't just a standard Vox article? Some probably barely know what Vox is. Certainly, we are all aware of the connection to EA, but I bet most readers are pretty oblivious.

In that case, maybe these tangentially related or unrelated articles don't matter too much. Conversely, the better articles may spark an interest that leads a few people towards finding out more about EA and becoming involved.

2. Just throwing it out there: Should EA embrace being apolitical? As in, possible official core virtue of the EA movement proper: Effective Altruism doesn't take sides on controversial political issues, though of course individual EAs are free to.

Robin Hanson's "pulling the rope sideways" analogy has always struck me: In the great society tug-of-war debates on abortion, immigration, and taxes, it's rarely effective to pick a side and pull. First, you're one of many, facing plenty of opposition, making your goal difficult to accomplish. But second, if half the country thinks your goal is bad, it very well might be. On the other hand, pushing sideways is easy: nobody's going to filibuster to prevent you from handing out malaria nets-- everybody thinks it's a good idea.

(This doesn't mean not involving yourself in politics. 80k writes on improving political decision making or becoming a congressional staffer--they're both nonpartisan ways to do good in politics.)

If EA were officially apolitical like this, we would benefit by Hanson's logic: we could more easily achieve our goals without making enemies, and we're more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.

I think part of EA's success is because it's an identity label, almost a third party, an ingroup for people who dislike the Red/Blue identity divide. I'd say most EAs (and certainly the EAs that do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.

But I could imagine an EA failure mode where, a decade from now, Vox is the most popular "EA" platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels--kinda how animal rights and environmentalism could be their own identities, but they've mostly been absorbed into the political left.

If apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris and criminal justice--improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?

This is an interesting essay. My thinking is that "coalition norms", under which politics operate, trade off instrumental rationality against epistemic rationality. I can argue that it's morally correct from a consequentialist point of view to tell a lie in order to get my favorite politician elected so they will pass some critical policy. But this is a Faustian bargain in the long run, because it sacrifices the epistemology of the group, and causes the people who have the best arguments against the group's thinking to leave in disgust or never join in the first place.

I'm not saying EAs shouldn't join political coalitions. But I feel like we'd be sacrificing a lot if the EA movement began sliding toward coalition norms. If you think some coalition is the best one, you can go off and work with that coalition. Or if you don't like any of the existing ones, create one of your own, or maybe even join one & try to improve it from the inside.

We should mostly treat political issues like other issues - see what the evidence is, do some modeling, and take sides. There isn't a clean distinction between what the movement believes and what individuals believe; there are just points of view that are variously more or less popular. If a political issue becomes just as well-backed as, say, GiveWell charities, then we should move towards it. In both cases people are allowed to disagree, of course.

However, political inquiry must be done to higher epistemic standards, with extra care to avoid alienating people. Vox has fallen below this threshold for years.

Hi OP! Thanks for writing this up. A few comments on the section about Booker's policy proposal.

1) I agree that journalists should focus more on poverty alleviation in the poorest parts of the world, such as sub-Saharan African countries. Fortunately, Future Perfect (FP) does cover global poverty reduction efforts much more than most mainstream media outlets. Now, you are right that the piece on Booker's proposal is part of a tendency for FP to focus more on US politics and US poverty alleviation than most EA organisations. However, I think this approach is justified for (at least) two reasons:

a) For the foreseeable future, the US will inevitably spend a lot more on domestic social programs than on foreign aid. Completely neglecting a conversation about how the US should approach social expenditure would, I believe, be a huge foregone opportunity to do a lot of good. Yes, a big part of EA is to figure out which general cause areas should receive most attention. But I believe that EA is also about figuring out the best approaches within different important cause areas, such as poverty in the US. I think that FP doing this is a very good thing.

b) Part of the intended audience for FP (rightly) cares a lot about poverty in the US. Covering this issue can be a way of widening the FP audience, thus bringing much-needed attention to other important issues also covered by FP, such as AI safety.

2) I personally agree with the "basic moral imperative to get as many people as possible out of poverty" as you call it. But, without getting deep into normative ethics, I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a "strange thing to focus on" if you assume, with great confidence, utilitarianism to be true.

3) Even if one assumes utilitarianism to be true, there are solid arguments for focusing on racial inequity in the US. Efforts to support people of colour specifically in the US are not just "fixating" on an arbitrarily selected race. They focus on a group of people who have been systematically downtrodden for most of US history and who until very recently (if not still) have been discriminated against by the government in ways that have kept them from prospering. (For anyone curious about this claim, I strongly encourage you to read this essay for historical context.) I totally agree with you that "unequal racial distribution can have important secondary effects", and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds. You argue that this "should take a back seat" to general poverty alleviation. I actually agree, and that is also how the EA movement is already acting and prioritising. But 'taking a back seat' does not have to (and should not) mean being completely neglected, and I for one really appreciate that FP is applying the methods and concepts of effective altruism to a wider range of issues.

Cheers! :)

Joshua, former Co-President of Yale EA.

Re: #1, the overall distribution of articles on different topics is not particularly impressive. There are other outlets (Brookings, at least) which focus more on global poverty.

I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a "strange thing to focus on" if you assume, with great confidence, utilitarianism to be true.

I think that arguing from moral theories is not really the right approach here; instead, we can focus on the immediate moral issue - whether it is better to help someone merely because they or their ancestors were historically mistreated, holding welfare changes equal. There is a whole debate to be had there, with plenty of room for eclectic arguments that don't assume utilitarianism per se.

The idea that it's not better is consistent with any consequentialism which looks at aggregate welfare rather than group fairness, and some species of nonconsequentialist ethics (there is typically a lot of leeway and vagueness in how these informal ethics are interpreted and applied, and academic philosophers tend to interpret them in ways that reflect their general political and cultural alignment).

I totally agree with you that "unequal racial distribution can have important secondary effects", and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds.

Sure, but practically everything should get attention by this rationale. The real question is - how do we want to frame this stuff? What do we want to implicitly suggest to be the most important thing?

[anonymous]:
Dylan Matthews' claim that nuclear war would cause "much or all of humankind" to suddenly vanish is unsubstantiated. The idea that billions of people worldwide will die from nuclear war is not supported by models with realistic numbers of nuclear warheads. "Much" is a very vague term, but speculation that every (or nearly every) human will die is a false alarm. Now that is easy to forgive, as it's a common belief within EA anyway and probably someone will try to argue with me about it.

Could you expand on this or give sources? I do hear EAs talking about nuclear war and nuclear winter being existential threats.

Well, deaths from the nuclear explosions themselves will obviously be a small minority of the world's population.

Large numbers of people will survive severe fallout: it's fairly easy to build safe shelters in most locations. Kearny's Nuclear War Survival Skills shows how feasible it is. Governments and militaries of course know to prepare for this sort of thing. And I think fallout doesn't become a truly global phenomenon; it is only deadly if you are downwind from the blast sites.

Here is one of the main nuclear winter studies, which uses a modern climate model. They assume the use of the entire global arsenal (including 10,000 US and 10,000 Russian weapons) to get their pessimistic 150 Tg scenario, which has a peak cooling of 7.5 degrees Celsius. That would still leave large parts of the world with relatively warm temperatures. However, the US has already gone down to 1,800 weapons in the actual strategic arsenal, with 4,000 in the general stockpile. Russia's stockpile is 7,850, with only 1,600 in the strategic arsenal. The use of all nuclear weapons in a war is an unrealistic assumption because countries have limited delivery systems, they'll want to still have some nuclear weapons if they survive, and they won't want to cripple their own country and allies with excessive global cooling. The assumption that all weapons will detonate is also unrealistic: missile defense systems and force-on-force strikes will destroy many nuclear weapons before they can hit their targets. So even their moderate 50 Tg scenario, with 3.5 degrees Celsius of cooling, seems implausible; it would still require nearly 7,000 detonations. It seems like we are really looking at 2-3 degrees Celsius of cooling from an unlimited exchange - approximately enough to cancel out past and future global warming. The temperature also recovers quite a bit within just a few years.
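As a rough illustration of this scaling argument (not a climate model), here is a toy sketch; the per-weapon soot figure and the linear interpolation are simplifying assumptions of mine, with the scenario points taken from the numbers above:

```python
# Toy back-of-the-envelope sketch of the scaling argument above.
# Assumes soot scales linearly with detonations and interpolates peak
# cooling between the study's scenario points; the real climate
# response is nonlinear, so treat all outputs as illustration only.

SCENARIOS = [(0, 0.0), (50, 3.5), (150, 7.5)]  # (soot in Tg, peak cooling in C)
FULL_ARSENAL_WEAPONS = 20_000                  # the study's 150 Tg assumption
SOOT_PER_WEAPON = 150 / FULL_ARSENAL_WEAPONS   # Tg per detonation (toy value)

def peak_cooling(soot_tg: float) -> float:
    """Linearly interpolate peak cooling from the scenario table."""
    for (x0, y0), (x1, y1) in zip(SCENARIOS, SCENARIOS[1:]):
        if soot_tg <= x1:
            return y0 + (y1 - y0) * (soot_tg - x0) / (x1 - x0)
    return SCENARIOS[-1][1]

# Both strategic arsenals fully detonated (1,800 US + 1,600 Russian):
detonations = 1_800 + 1_600
print(peak_cooling(detonations * SOOT_PER_WEAPON))  # ~1.8 C, before any losses
```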

The issue is that there are many sources of uncertainty in nuclear winter. When I developed a probabilistic model taking all these sources into account, I did get a median impact of a 2-3 C reduction (though I was also giving significant probability weight to industrial and counterforce strikes). However, I still got a ~20% probability of a collapse of agriculture.
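For readers unfamiliar with this kind of model, a minimal Monte Carlo sketch is below; the distributions and the collapse threshold are invented placeholders, not the commenter's actual model:

```python
# Minimal Monte Carlo sketch of a probabilistic nuclear winter model.
# Every distribution and threshold here is an invented placeholder.
import random

def sample_cooling() -> float:
    """Draw one scenario: exchange size, soot per weapon, climate response."""
    detonations = random.uniform(500, 7_000)            # scale of the exchange
    soot_per_weapon = random.lognormvariate(-4.6, 0.8)  # Tg, highly uncertain
    cooling_per_tg = random.uniform(0.04, 0.08)         # deg C per Tg of soot
    return detonations * soot_per_weapon * cooling_per_tg

random.seed(0)
samples = sorted(sample_cooling() for _ in range(100_000))
median = samples[len(samples) // 2]
collapse_threshold = 6.0  # deg C of cooling; arbitrary placeholder
p_collapse = sum(c > collapse_threshold for c in samples) / len(samples)
print(f"median cooling: {median:.1f} C, P(collapse): {p_collapse:.0%}")
```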
