I have just published my new book on s-risks, titled Avoiding the Worst: How to Prevent a Moral Catastrophe. You can find it on Amazon, read the PDF version, or listen to the audio version.

The book is primarily aimed at longtermist effective altruists. I wrote it because I feel that s-risk prevention is a somewhat neglected priority area in the community, and because a single, comprehensive introduction to s-risks did not yet exist. My hope is that a coherent introduction will help to strengthen interest in the topic and spark further work.

Here’s a short description of the book:

From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a better path?

In Avoiding the Worst, Tobias Baumann lays out the concept of risks of future suffering (s-risks). With a focus on s-risks that are both realistic and avoidable, he argues that we have strong reasons to consider their reduction a top priority. Finally, he turns to the question of what we can do to help steer the world away from s-risks and towards a brighter future.

For a rough overview, here’s the book's table of contents:

Part I: What are s-risks?

Chapter 1: Technology and astronomical stakes

Chapter 2: Types of s-risks

Part II: Should we focus on s-risks?

Chapter 3: Should we focus on the long-term future?

Chapter 4: Should we focus on reducing suffering?

Chapter 5: Should we focus on worst-case outcomes?

Chapter 6: Cognitive biases

Part III: How can we best reduce s-risks?

Chapter 7: Risk factors for s-risks

Chapter 8: Moral advocacy

Chapter 9: Better politics

Chapter 10: Emerging technologies

Chapter 11: Long-term impact

And finally, some blurbs for the book:

“One of the most important, original, and disturbing books I have read. Tobias Baumann provides a comprehensive introduction to the field of s-risk reduction. Most importantly, he outlines sensible steps towards preventing future atrocities. Highly recommended.”

— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“This book is a groundbreaking contribution on a topic that has been severely neglected to date. Tobias Baumann presents a powerful case for averting worst-case scenarios that could involve vast amounts of suffering. A much needed read for our time.”

— Oscar Horta, co-founder of Animal Ethics and author of Making a Stand for Animals


 

Comments

Congratulations on the book!

Apart from the utterly horrifying and well-known Black Mirror episode 'White Christmas', another viscerally compelling depiction of s-risks is the Iain M. Banks 'Culture' novel 'Surface Detail' (2010), in which a futuristic society with a strong religious fundamentalist streak uploads recently dead minds into digital virtual hells just to torment them for subjective millennia -- mostly in order to intimidate the living into righteousness.

Maybe you mention it, but if you're not familiar with it, it's one of the more plausible depictions of Things Going Very Wrong Indeed, in terms of net sentient utility.

My go-to is this (warning: horrifying) one-minute comic. I credit it with making me viscerally grasp just how important s-risks are.

There's always factory farm footage too. Dominion and Earthlings are the best for this.

I think it's no surprise that people who were previously in animal welfare end up going into s-risks. It makes you realize how very plausible massive-scale suffering is, even if there are no malevolent actors.

Oh no. Very horrifying! 

Not entirely clear why the sadistic robots would do such a thing. 

One thing I liked about the novel 'Surface Detail' was that the sadists imposing the suffering had at least some kind of semi-plausible religious rationale for what they were doing -- which makes the whole scenario more psychologically plausible and therefore all the more terrifying.

Yeah, I agree it's not clear why they'd do it. I give the comic writer some slack though, since it's hard to fit that much into a comic. 

A couple of reasons, off the top of my head, why that could happen:

  • Sign flip. The sign gets accidentally flipped and, instead of trying to maximize human flourishing, the system tries to minimize it.
  • Punishment. Imagine a dictator created TAI and used it to punish people who fit a certain demographic (e.g. Uyghurs). Imagine that the human there is a Uyghur, or that the dictator failed to specify the demographic precisely and the system started targeting everybody, or large swathes of the world.

Honestly though, I think the most probable s-risks are the incidental ones (covered in Tobias's book and also this blog post here). Basically, cases where suffering is a by-product, like factory farming or slavery. I'd also put the highest odds on it involving digital minds, since I think the future will be predominantly digital minds.

But it'd be very hard to make a comic about digital minds that would be emotionally compelling, which is why I like the comic (although "like" is a bit of a strong word; more "found it incredibly psychologically scarring, but in a way that helps me remember what I'm fighting for").

This is amazing! Any recommendations for which are the most important parts of the book for people who are decently familiar with EA and LW, according to you? Especially looking for moral and practical arguments I might have overlooked, and I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

I am (clearly) not Tobias, but I'd expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, it seems like the latter half of 5, 9, and 11 would be especially novel if you're already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it's probably worth reading even if you've checked out those older resources, despite some overlap.

I agree with this answer.

I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

That's great, because that is also the starting point of my book. From the introduction:

Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is experienced. A future individual is no less (and no more) deserving of moral consideration than someone alive now. So the fact that a moral catastrophe takes place in the distant future does not reduce the urgency of preventing it, if we have the means to do so. I will assume that you broadly agree with these fundamental values, which form the starting point of the book.

That is, I'm not dwelling on an argument for these fundamental values, as that can be found elsewhere.

Congratulations! :) Any plans for an audiobook?

Thanks! 

An audiobook is a good idea and I'll look into it, though I don't expect it to be done any time soon (i.e. it would at least take several months, I think).

Audiobook version: [new] Aaron made an awesome audiobook version here. 

[Original] It's easy to turn it into an audiobook version with Evie or Natural Reader for anybody who likes to read with their ears instead of their eyes. There's a full guide I wrote up on how to turn everything into audio here.

Also, Tobias, if you want to make a super simple audiobook version of the book, I recommend using Amazon Polly. It'll probably cost under $100, take less than 10 hours, and increase the number of people who read your book by a lot. I know a ton of people who only read with their ears or who are more than 10x as likely to read something if there's an audio version. Even I only found out about this book because I listened to the article on the Nonlinear Library (sorry for the shameless but relevant plug 😛)

Finally, congrats on the book! So far I'm loving it. Thank you for writing it. I think s-risks deserve more attention in the EA movement and think this book will help move the needle. 

I’m working on getting this to be read in a high-quality voice, but for anyone else who wants to try/do it better, here’s a txt with what I think the voice should actually read (like no reading out page numbers and stuff like that): https://drive.google.com/file/d/155pb6tMkE-rSmrizi1jbzPKTMSmYJRDX/view?usp=drivesdk

Not 100% certain it’s correct though, likely a missing word or two

Update: provisional text-to-speech audio (Siri reading that text file, haha) is now linked at the top of the post!

Should be up on all the main podcast apps soon, but until then, you can listen on Spreaker at https://bit.ly/3UmQS85

YouTube (w/ accurate subtitles, embedded below)

Apple Podcasts:

Spotify

RSS feed URL: https://www.spreaker.com/show/5706170/episodes/feed

Drive folder with some scripts and other resources; may add more later!

(Will try to clean the thumbnail of my name and Spreaker logo, but don't want to wait to share!)

...

Not certain, and it might take a couple of days, but it should be pretty easy (though not instantaneous) to basically make readouts in any English accent the new iOS can handle ('British full version.m4a' in the Drive folder is a sample, but has pronunciation issues due to weird text encoding).

We've now put together a new and improved audio version, which can be found here.

Just listened to it! The pleasant and thoughtful narration by Adrian Nelson felt perfect for the book. I might even recommend the audiobook version over the text version to people who might otherwise find it distressing to think about s-risks. :)

You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.
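
For anyone curious what that looks like in practice, here's a minimal sketch of a Polly call using the boto3 SDK and the "Arthur" neural voice. This is my own assumption about a reasonable setup, not how the linked sample was actually generated, and a full book would need to be split into chunks under Polly's per-request character limit:

```python
# Minimal sketch (assumed setup, not the exact pipeline behind the linked sample):
# synthesize one chunk of text with Amazon Polly via boto3.
# Assumes AWS credentials are already configured.
import boto3

polly = boto3.client("polly")

def synthesize_chunk(text: str, out_path: str) -> None:
    """Convert one chunk of text to an MP3 file."""
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Arthur",   # British English male neural voice
        Engine="neural",
    )
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())

# Example: write one (hypothetical) chunk to disk.
synthesize_chunk("Avoiding the Worst, by Tobias Baumann. Introduction.", "intro_part1.mp3")
```

You'd loop this over the chunked book text and then concatenate the MP3 files, which is where most of the manual effort goes.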

Yes, Amazon Polly is great! 

Small thing: British voices sound more credible, which is good, but the trade-off is that they're harder to listen to at high speeds, which is my strong preference.

There are probably not many people listening at high enough speeds for the trade-off to matter much, but it's worth considering.

Also, my research for the Nonlinear Library found that on average people prefer listening to male voices, for what it's worth.  I didn't research it hard or for long and don't think it matters a ton either way, but just to share what I found. 

I'll just throw out the possibility of copying and pasting the whole thing (with anywhere from zero to a lot of formatting/editing) into an EA Forum post, which (I assume?) would trigger the Nonlinear Library system to turn it into audio. This would also get it into the feeds of people who only consume the forum via podcast app.

Since this has generated so much interest, it is worth noting that the Center for Reducing Suffering is hiring a Communications Director, so if you are knowledgeable about s-risks, do check out the job description.

Thank you Tobias! I've wanted to learn more about the practical implications of s-risks for a while but never quite knew where to start; I'm really keen to read Part III.

I'm glad risks from Scotland are finally being explored! (Can't unsee the Scottish flag on the cover)

I hate to be this person, but is there an epub version available? 

The easiest way to download it as an epub is here.

Thank you! :)

Congratulations! It's on my reading list now.

Anyone interested in an in-person reading group on this in London?

+2 private expressions of interest
