The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism").
Will MacAskill describes long-termism as:
I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.
In The Very Short Run, We're All Dead
AI alignment is a central example of a supposedly long-termist cause.
But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky) think it might happen even sooner.
Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one.
But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?"
Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.
The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100.
The average biosecurity project being funded by the Long-Term Future Fund or the FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice.
Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries. "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.
In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years.
Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they're thinking of all the future generations who would never be born if humanity was wiped out.
In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A 1/1 million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
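The arithmetic in these two paragraphs can be laid out explicitly (a sketch using the post's illustrative figures only — the populations and the implied ~$4,300-per-life cost are assumptions of the example, not real cost-effectiveness estimates):

```python
# Expected-value arithmetic behind the poverty-vs-war-prevention comparison,
# using the post's illustrative numbers.

POPULATION = 10e9        # hypothetical world population in the L16 example
POVERTY_WEIGHT = 0.1     # poverty assumed 10% as bad as death

# Short-termist comparison: certain poverty relief for 1 billion people
# vs a 1% chance of preventing a war that kills everyone.
poverty_value = 1e9 * POVERTY_WEIGHT   # ~100M death-equivalents averted
war_value = 0.01 * POPULATION          # ~100M expected deaths averted
assert abs(poverty_value - war_value) < 1e-3  # short-termist is indifferent

# The $30 million figure: a 1-in-a-million chance of preventing apocalypse,
# valued only on present lives (~7 billion), at a GiveWell-style cost per life.
expected_lives = 1e-6 * 7e9            # ~7,000 lives in expectation
cost_per_life = 30e6 / expected_lives  # ~$4,300 per life, roughly GiveWell range
print(expected_lives, cost_per_life)
```

The long-termist, by contrast, adds all never-born future generations to the war-prevention side of the ledger, which is what breaks the indifference.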
When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.
"Long-termism" vs. "existential risk"
Philosophers shouldn't be constrained by PR considerations. If they're actually long-termist, and that's what's motivating them, they should say so.
But when I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
I'm interested in hearing whether other people have different reasons for preferring the "long-termism" framework that I'm missing.
Hey Scott - thanks for writing this, and sorry for being so slow to the party on this one!
I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)
The main view I’m putting forward in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.
What is longtermism?
The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.
My current preferred slogan definitions of each:
In WWOTF, I promote the weak... (read more)
Thanks for writing this! That overall seems pretty reasonable, and from a marketing perspective I am much more excited about promoting "weak" longtermism than strong longtermism.
A few points of pushback:
- I think that to work on AI Risk, you need to buy into AI Risk arguments. I'm unconvinced that buying longtermism first really shifts the difficulty of figuring this point out. And I think that if you buy AI Risk, longtermism isn't really that cruxy. So if our goal is to get people working on AI Risk, marketing longtermism first is strictly harder (even if it may be much easier)
- I think that very few people say "I buy the standard AI X-Risk arguments and that this is a pressing thing, but I don't care about future people so I'm going to rationally work on a more pressing problem" - if someone genuinely goes through that reasoning then more power to them!
- I also expect that people have done much more message testing + refinement on longtermism than AI Risk, and that good framings could do much better - I basically buy the claim that it's a harder sell though
- Caveat: This reasoning applies more to "can we get people working on AI X-Risk with their careers" than to things like broad s
... (read more)
On this particular point
I can't find info on Rethink's site, is there anything you can link to?
Of the three best-performing messages you've linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase 'ensure a good future' is a large part of what resonates.
All that said, more info on the tests they ran would obviously update my position.
This seems correct to me, and I would be excited to see more of them. However, I wouldn't interpret this as meaning 'longtermism and existential risk have similarly-good reactions from the educated general public', I would read this as risk messaging performing better.
Also, messages 'about unspecified, and not necessarily high-probability threats' is not how I would characterize most of the EA-related press I've seen recently (NYTimes, BBC, Time, Vox).
(More ... (read more)
I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.
However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'
In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.
But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views ... (read more)
Based on my memory of how people thought while growing up in the church, I don't think increasing the number of saveable souls is something that makes sense for a Christian -- or within any sort of longtermist utilitarian framework at all.
Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.
Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:
- Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term view, bunkers and civilizational recovery plans don't make much sense.
- S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
- The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future, $30 million for a 0.0001% reduction in X-risk, don't seem totally implausible. The world is b
... (read more)
Why not?
An existential risk is a risk that threatens the destruction of humanity's long-term potential. But s-risks are worrisome not only because of the potential they threaten to destroy, but also because of what they threaten to replace this potential with (astronomical amounts of suffering).
See also Neel Nanda's recent Simplify EA Pitches to "Holy Shit, X-Risk".
No offense to Neel's writing, but it's instructive that Scott manages to write the same thesis so much better. It:
...and a ton of other things. Long-live the short EA Forum post!
FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.
Your comment inspired me to work harder to make my writings more Scott-like.
Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.
I didn't mean to imply that you were plagiarising Neel. I more wanted to point out that that many reasonable people (see also Carl Shulman's podcast) are pointing out that the existential risk argument can go through without the longtermism argument.
I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn't mean they have to be, or are, committed to others.
No worries, I'm excited to see more people saying this! (Though I did have some eerie deja vu when reading your post initially...)
I'd be curious if you have any easy-to-articulate feedback re why my post didn't feel like it was saying the same thing, or how to edit it to be better?
(EDIT: I guess the easiest object-level fix is to edit in a link at the top to yours, and say that I consider you to be making substantially the same point...)
I'm not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions - longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions - but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn't seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is being upfront both about your views on the philosophical and the empirical questions, and assume that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.
Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart. I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
- Extinction versus Global Catastrophic Risks (GCRs)
- It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
- To the extent that a short-termist framing views going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
- Sensitivity to views of risk
- Some people may be more sceptical of x-risk estimates this century, but might still reach the
... (read more)
ALLFED-type work is likely highly cost effective from the short-term perspective; see global and country (US) specific analyses.
See also The person-affecting value of existential risk reduction by Gregory Lewis.
I don't have a strong preference. There a some aspects in which longerism can be better framing, at least sometimes.
I. In a "longtermist" framework, x-risk reduction is the most important thing to work on across many orders of magnitude of uncertainty about the probability of x-risk in the next e.g. 30 years (due to the weight of the long-term future). Even if AI-related x-risk is only 10^-3 in the next 30 years, it is still an extremely important problem, or the most important one. In a "short-termist" view with, say, a discount rate of 5%, it is not nearly so clear.
The short-termist urgency of x-risk ("you and everyone you know will die") depends on the x-risk probability being actually high, on the order of 1 percent or tens of percent. Arguments for why this probability is actually so high are usually brittle pieces of mathematical philosophy (eg many specific individual claims by Eliezer Yudkowsky) or brittle use of proxies with a lot of variables obviously missing from the reasoning (eg the report by Ajeya Cotra). Actual disagreements about probabilities are often in fact grounded in black-box intuitions about esoteric mathematical concepts. It is relatively easy to come wit... (read more)
It's not clear the loss of human life dominates the welfare effects in the short term, depending on how much moral weight you assign to nonhuman animals and how their lives are affected by continued human presence and activity. It seems like human extinction would be good for farmed animals (dominated by chickens, fish and invertebrates), and would have unclear sign for wild animals (although my own best guess is that it would be bad for wild animals).
Of course, if you take a view that's totally neutral about moral patients who don't yet exist, then few of the nonhuman animals that would be affected are alive today, and what happens to the rest wouldn't matter in itself.
I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.
Longtermists don’t just want to reduce x-risk, they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn’t go through. A thoughtful shorttermist who is concerned about x-risk probably won’t care about this existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.
Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful shorttermist who may prefer reducing specific risks.
I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term's agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
I think that the longtermist EA community mostly acts as if we're close to the hinge of history, because most influential longtermists disagree with Will on this. If Will's take was more influential, I think we'd do quite different things than we're currently doing.
I'd love to hear what you think we'd be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we'd be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we're doing. Would you just expect a bigger focus on investment? I'm not sure I would, given how much EA is poised to grow and how comparably little we've spent so far. (Cf. Phil Trammell's disbursement tool https://www.philiptrammell.com/dpptool/)
I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.
I suspect we’d also do less global priorities research (although perhaps we don’t do that much as it is). If you think we’re at the most influential time you probably have a good reason for thinking that (x-risk abnormally high) which then informs what we should do (reduce it). So you wouldn’t need much more global priorities research. You would still need more granular research into how to reduce x-risk though.
More is also being said on the possibility of investing for the future financially which isn’t a great idea if we’re at the most influential time in history.
I agree the movement is mostly “hingy” in nature but perhaps not to the same extent you do. 80,000 Hours is an influential body that promotes EA community building, global priorities research, and to some extent investing for the future.
My point is that you could engage in "x-risk community building" which may more effectively get people working on reducing x-risk than "EA community building" would.
I never actually said we should switch, but if we knew from the start “oh wow we live at the most influential time ever because x-risk is so high” we probably would have created an x-risk community not an EA one.
And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.
Some loose data on this:
Of the ~900 people who filled my Twitter poll about whether we lived in the most important century, about 1/3 said "yes," about 1/3 said "no," and about 1/3 said "maybe."
As Nathan Young mentioned in his comment, this argument is also similar to Carl Shulman's view expressed in this podcast: https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
Speaking about AI Risk particularly, I haven't bought into the idea there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had done, many of my interlocutors haven't either. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worrying:
It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.
If I think AI has a maybe 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yud gives it (>50%? I haven't seen him put a number to it)...then I have to go through the additional step of explaining to someone why they should care a... (read more)
Thank you for writing this! This helped me understand my negative feelings towards long-termist arguments so much better.
In talking to many EA University students and organizers, so many of them have serious reservations about long-termism as a philosophy, but not as a practical project because long-termism as a practical project usually means don't die in the next 100 years, which is something we can pretty clearly make progress on (which is important since the usual objection is that maybe we can't influence the long-term future).
I've been frustrated that in the intro fellowship and in EA conversations we must take such a strange path to something so intuitive: let's try to avoid billions of people dying this century.
Scott, thanks so much for this post. It's been years coming in my opinion. FWIW, my reason for making ARCHES (AI Research Considerations for Human Existential Safety) explicitly about existential risk, and not about "AI safety" or some other glomarization, is that I think x-risk and x-safety are not long-term/far-off concerns that can be procrastinated away.
https://forum.effectivealtruism.org/posts/aYg2ceChLMRbwqkyQ/ai-research-considerations-for-human-existential-safety (with David Krueger)
Ideally, we need to engage as many researchers as possible, thinking about as many aspects of a functioning civilization as possible, to assess how A(G)I can creep into those corners of civilization and pose an x-risk, with cybersecurity / internet infrastructure and social media being extremely vulnerable fronts that are easily salient today.
As I say this, I worry that other EAs will get worried that talking to folks working on cybersecurity or recommender systems necessarily means abandoning existential risk as a priority, because those fields have not historically taken x-risk seriously.
However, for better or for worse, it's becoming increasingly e... (read more)
I think this post is mistaken. (If I remember correctly, not an expert,) estimates that AI will kill us all are put at only around 5-10% by AI experts and attendees at an x-risk conference in a paper from Katja Grace. Only AI Safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective AI deserves relatively less attention.
Bio-risk and climate change, and maybe nuclear war, on the other hand, I think are all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so relatively low priority for long-termists.
Imagine it's 2022. You wake up and check the EA forum to see that Scott Alexander has a post knocking the premise of longtermism and it's sitting in at 200 karma. On top, Holden Karnofsky has a post saying he may be only 20% convinced that x-risk itself is overwhelmingly important. Also, Joey Savoie is hanging in there.
Obviously, I’ll write in to support longtermism.
Below is one long story about how some people might change their views; in this story, x-risk alone wouldn't work.
TLDR; Some people think the future is really bad and don't value it. You need something besides x-risk, to engage them, like a competent and coordinated movement to improve the future. Without this, x-risk and other EA work might be meaningless too. This explanation below has an intuitive or experiential quality, not numerical. I don't know if this is actually longtermism.
Many people don't consider future generations valuable because they have a pessimistic view of human society. I think this is justifiable.
Then, if you think society will remain in its current state, it's reasonable that you might not want to preserve it. If you only ever think about one or two generations into the future, like I think most people do, it's hard to see the possibility of change. So I think this "negative" mentality is self-reinforcing, they're stuck.
To these people, the idea of x-risk doesn't make sense, not because these dangers aren't real but because there isn't anything to preserve. To these people, giant numbers like 10^30 are really, especially unconvincing, because they seem silly and, if anything, we owe the future a small society.
I think the above is an incredibly ma... (read more)
Are there actually any short-termists? E.g. people who have nonzero pure time preference?
Can't you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.
The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence but then it's pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
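The convergence point in this exchange can be made explicit (a sketch, assuming a constant flow of utility u, pure time preference ρ, and a constant, dogmatically-known extinction hazard rate λ):

```latex
% Expected discounted utility, with survival probability e^{-\lambda t}:
\int_0^\infty u \, e^{-\rho t} \, e^{-\lambda t} \, dt \;=\; \frac{u}{\rho + \lambda}
% This converges whenever \rho + \lambda > 0. Even with zero pure time
% preference (\rho = 0), a known constant hazard \lambda > 0 plays exactly
% the mathematical role of an intrinsic discount rate. If instead \lambda is
% uncertain, with enough prior mass near zero, the expectation over \lambda
% can diverge.
```

This is why a dogmatic extinction-risk belief ends up "pretty close to having intrinsic time discount": the hazard rate and the pure time preference enter the integral in exactly the same way.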
Yes! Thanks for this Scott. X-risk prevention is a cause that both neartermists and longtermists can get behind. I think it should be reinstated as a top-level EA cause area in its own right, distinct from longtermism (as I've said here).
It's a sobering thought. See also: AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.
Longtermism ≠ existential risk, though it seems the community has more or less decided they mean similar things (at least at our current point in history).
Here is an argument to the contrary, "the civilization dice roll": current human society becoming grabby will be worse for the future of our lightcone than the counterfactual society that will (might) exist and end up becoming grabby if we die out / our civilization collapses.
Now, to directly answer your point on x-risk vs longtermism, yes you are correct. Fear mongering will always trump empathy mo... (read more)
Agreed. Linch's .01% Fund post proposes a research/funding entity that identifies projects that can reduce existential risk by 0.01% for $100M-$1B. This is 3x-30x as cost-effective as the quoted text and targeting a reduction 100x the size.
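The comparison can be checked directly (a sketch using only the figures quoted in this thread):

```python
# Comparing the post's quoted figure ($30M per 0.0001% x-risk reduction)
# with the .01% Fund target ($100M-$1B per 0.01% reduction).

quoted_cost = 30e6 / 0.0001   # dollars per percentage point of risk, quoted text
fund_low = 100e6 / 0.01       # cheap end of the .01% Fund target, per point
fund_high = 1e9 / 0.01        # expensive end of the target, per point

print(quoted_cost / fund_low)   # ~30x as cost-effective at the cheap end
print(quoted_cost / fund_high)  # ~3x as cost-effective at the expensive end
print(0.01 / 0.0001)            # ~100x the size of reduction per project
```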
I have been working on a tweet length version of this argument for a while. I encourage someone to beat me to it. I agree with Neel and Scott (and Carl Shulman) that this argument is much more succinct and emotive and I think I should get better at making it.
Something like:
[quote tweeting a poll on survival to 2100] 38% of my followers think there is a >5% chance all humans are dead by 2100. Let's assume they are way wrong and it's only 0.5%.
[how does this compare to other things that might kill you]
[how does this compare in terms of spending to how much ought to be spent to how much is]
GiveDirectly could get pretty high probabilities (or close for a smaller number of people at lower cost), although it's not the favoured intervention of those focused on global health and poverty.
Another notable remaining difference is that extinction is all or nothing, so your ... (read more)
A key difference also surrounds which risks to care about more
and what to do about them
If I don’t have a total population utilitarian view (which seems to me like the main crux belief of longtermism) I may not care as much about the extinction part of the risks.
Michael Wiebe comments: "Can we please stop talking about GDP growth like this? There's no growth dial that you can turn up by 0.01, and then the economy grows at that rate forever. In practice, policy changes have one-off effects on the level of GDP, and at best can increase the growth rate for a short time before fading out. We don't have the ability to increase the growth rate for many centuries."
"Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.
This is the first I have seen reference to norm changing in EA. Is there other writing on this idea?
Hello
At a lecture I attended, a leading banker said "long term thinking should not be used as an excuse for short term failure". At the time, he was defending short-term profit-making as against long-term investment, but when applied to discussions of longtermism the point is similar. Our policies and actions can only be implemented in the present and must succeed in the short term as well as the long term. This means careful risk assessment/management, but as the future can never be predicted with absolute certainty, the long term ef... (read more)
I'm not sure how we can expect the public, or even experts, to meaningfully engage a threat as abstract, speculative and undefined as unaligned AI when very close to the entire culture, including experts of all kinds, relentlessly ignores the very easily understood nuclear weapons which literally could kill us all right now, today, before we sit down to lunch.
What I learned from studying nuclear weapons as an average citizen is th... (read more)