
In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.

— Carl Sagan

 

Future Matters is a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, listen on your favorite podcast platform, and follow on Twitter.


Research

Stefan Schubert's Against cluelessness notes that, while it is generally very hard to predictably affect the long-term future, this fails to constitute a decisive objection to longtermism. Amidst this "sea of cluelessness", we find "pockets of predictability", or opportunities for making long-lasting changes that are positive in expectation. Schubert describes two types of intervention that escape these worries. First, attempts to reduce short-term risks to our long-term potential: such risks are tractable because they are located in the near term, but still have longtermist significance. Threats of this type include risks of human extinction as well as risks of value lock-in. Second, efforts to build longtermist capacity, such as increasing the size of the longtermism community or its financial resources, or improving the reputation or knowledge of that community. This capacity can be built over a period of decades or centuries, until good enough opportunities to robustly improve the long-term future emerge.

Joseph Carlsmith's presentation on existential risk from power-seeking AI (video and transcript) summarizes the author's comprehensive report published in April last year. Carlsmith focuses specifically on AI systems defined by the possession of three key properties: advanced capability, agentic planning, and strategic awareness (APS, for short), and then relies on this construct to develop an explicit argument for the conclusion that AI poses an existential risk to humanity:

  1. It will become possible to develop APS systems.
  2. There will be strong incentives to deploy APS systems.
  3. It will be much harder to build aligned than misaligned APS systems.
  4. Misaligned APS systems, if deployed, will fail in high-impact ways.
  5. A failure from a misaligned APS system will permanently disempower humanity.
  6. A disempowerment of humanity will constitute an existential catastrophe.

Summarizing all the considerations Carlsmith adduces for each of the premises in the argument is beyond the scope of this newsletter, but we encourage the reader to check out the author's commendably lucid talk for the relevant details.
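
Because the premises are conjunctive, an overall risk estimate can be obtained by multiplying one's credence in each premise, conditional on the preceding ones; this is broadly the approach the full report takes. The sketch below is our own illustration of that structure, with placeholder credences rather than the figures assigned in the report.

```python
# Illustrative sketch of the conjunctive structure of Carlsmith's argument.
# The credences below are placeholders chosen for illustration; they are not
# the probabilities assigned in the report itself. Each should be read as a
# credence conditional on the preceding premises holding.
premises = {
    "it becomes possible to develop APS systems": 0.65,
    "strong incentives to deploy APS systems": 0.80,
    "much harder to build aligned than misaligned APS systems": 0.40,
    "deployed misaligned APS systems fail in high-impact ways": 0.65,
    "a high-impact failure permanently disempowers humanity": 0.40,
    "disempowerment constitutes an existential catastrophe": 0.95,
}

p_catastrophe = 1.0
for premise, credence in premises.items():
    p_catastrophe *= credence  # multiply the (conditional) credences

print(f"Overall credence in existential catastrophe: {p_catastrophe:.1%}")
# With these placeholder inputs, the product comes out to roughly 5%.
```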

Lizka Vaintrob's Against "longtermist" as an identity describes two possible harms of self-identifying as a "longtermist". The first is that the label embeds factual claims (e.g. that trying to improve the long-term future is generally more cost-effective than improving the world in other ways), which makes it harder to update in response to evidence against those claims. The second is that the label causes people to uncritically accept the whole cluster of views associated with longtermism, instead of evaluating each view on its merits. We think that these are valid worries, though it's unclear to us how specific to "longtermist" they are: the second seems to also apply to "effective altruist", and the first applies to other labels which do not seem obviously objectionable (e.g. "pacifist").

We will soon create AIs with moral status, whose interests should be protected and factored into our decision-making. Welcoming digital minds into society will require substantial revisions to our moral and political thinking. Nick Bostrom and Carl Shulman’s Propositions Concerning Digital Minds and Society lays the groundwork for this important project, setting out over one hundred tentative theses. The pace of recent technological developments makes this all the more urgent, and we hope to see much more work on this topic. 

William MacAskill's EA and the current funding situation identifies two types of risks associated with the current influx of funding. Risks of commission have attracted much discussion on the EA Forum and elsewhere,[1] and include appearances of extravagance, erosion of critical ability, and fostering of resentment, among other concerns. By contrast, risks of omission have received virtually no attention despite being, in MacAskill's opinion, at least as concerning. For one thing, it is very hard to substantially scale up giving and still give effectively. For another, the costs of failure from insufficient caution are much more visible than the costs of failure from excessive caution. MacAskill proposes we respond to this tension with an attitude of "judicious ambition"—a willingness to take bold action, while remaining cognizant of the risks involved.

Nick Beckstead's Some clarifications on the Future Fund's approach to grantmaking addresses a number of questions and concerns related to the Future Fund's grantmaking so far. Beckstead notes that (1) the Future Fund's process for approving grants involves significantly more scrutiny than is generally assumed, going through several review rounds by different staff members and technical experts; that (2) while the actual team is very small, it relies extensively on a very large number of regrantors and external advisors; that (3) contrary to common perception, the Future Fund hasn't funded many community-building projects; and that (4) the Fund pays considerable attention to downside risks and community effects of its grantmaking. Beckstead concludes that some of the confusion arose because the Fund has under-communicated, and announces a plan to publish a review of its work next month.

Lucius Caviola, Erin Morrisey & Joshua Lewis's Most students who would agree with EA ideas haven't heard of EA yet reports the results of a large-scale survey of students at New York University. The primary finding is that, of the 8.8% of students who were highly sympathetic to effective altruism, fewer than 15% (1.3% of all students) were actually familiar with the movement. As the authors acknowledge, it's unclear—because of the attitude-behavior gap—how much high sympathy predicts high engagement. And it is also unclear what the implications for outreach are: the existence of such a large reservoir of positively inclined students still unaware of EA may suggest that, in Owen Cotton-Barratt's awareness-inclination model, publicity should be prioritized over advocacy.

There is a small literature in economic history that tries to understand the effect of historical events on modern outcomes. This is of clear relevance to longtermists interested in shaping the long-run trajectory of humanity. Pablo Villalobos and Jaime Sevilla’s Potatoes: A Critical Review scrutinizes one eye-catching result: Nunn and Qian’s claim that the introduction of potatoes was a major determinant of population growth and urbanisation in the Old World. They replicate the analysis and run a number of statistical tests, tentatively concluding that the paper’s claim is well-supported.

In How we fixed the ozone layer, Hannah Ritchie looks at a recent example of humanity solving a global coordination problem. After scientists established a link between human emissions and ozone depletion in 1974, political action was relatively swift. An international agreement to phase out ozone-depleting substances was signed in 1987, global use of these chemicals fell precipitously, and the ozone layer began to recover. Unfortunately, this success story doesn’t offer much comfort when it comes to other risks. Compared with climate change, for example, ozone depletion was a much easier coordination problem to solve: the problem was caused by one specific industry (vs. the whole economy), and the near-term harms would have been disproportionately felt by richer nations (vs. poorer ones).

The arrival of advanced nanotechnology would have transformative impacts on the world, and could even pose an existential risk. Ben Snodin’s Thoughts on Nanotechnology Strategy Research offers an excellent overview of work in this area, which has received limited attention from longtermists in recent years. Snodin estimates a 4–5% chance that advanced nanotech arrives before 2040. 

Owen Cotton-Barratt makes a case Against immortality, presenting a few reasons why a world without death might not be good, contra the fairly widely-held pro-immortality views among transhumanists and other longtermist-adjacent communities.[2] A comment from Linch Zhang prompts an interesting discussion about the "immortal dictators" argument.

James Smith & Jonas Sandbrink look at Biosecurity in the age of open science (also covered in WIRED). Publishing work on preprint servers has taken off in biology, particularly since March 2020. This enables researchers to quickly disseminate findings without lengthy peer review. Unfortunately, researchers sometimes publish well-intentioned research that could nonetheless prove dangerous (e.g. how to synthesize viruses) in the hands of malicious actors (e.g. terrorists). The authors make some sensible recommendations for mitigating these risks, while retaining the important benefits of open science.[3]

Owen Cotton-Barratt suggests longtermists should spend more time answering the question, What do we want the world to look like in 10 years? While we often have a sense of the long-run outcomes we’re aiming for (e.g. safe AGI), and some plans for getting there (e.g. more alignment research), there’s not much discussion about what success looks like on intermediate timescales. 

An 80,000 Hours problem profile by Benjamin Hilton considers whether climate change is the greatest threat facing humanity today. Climate change will have some extremely bad effects, including making us more vulnerable to other threats; but it is very unlikely (~1 in 10,000) to destroy humanity. Overall, humanity should be doing much more about it. But individuals trying to maximize their impact, without a strong comparative advantage in climate change, should probably work on problems that are more important and neglected, such as 80,000 Hours’ highest priority areas.

Douglas Ligor and Luke Matthews's Outer space and the veil of ignorance proposes a framework for thinking about space regulation. The authors credit John Rawls with an idea actually first developed by the utilitarian economist John Harsanyi: that to decide what rules should govern society, we should ask what each member would prefer if they did not know in advance what position in it they would occupy. The authors then note that, when it comes to space governance, humanity is currently behind a de facto veil of ignorance. As they write, "we still do not know who will shoulder the burden to clean up our space debris, or which nation or company will be the first to capitalize on mining extraterrestrial resources." Since the passage of time will gradually lift this veil, and reveal which nations benefit from which rules, the authors argue that this is a unique opportunity for the international community to agree on binding rules for space governance.
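
As a rough formalization of Harsanyi's version of the idea (our gloss, not the authors'): a decision-maker behind the veil, equally likely to end up in any social position, ranks a candidate rule by the expected utility it delivers, which amounts to ranking rules by their average utility across positions.

```latex
% Harsanyi-style evaluation behind the veil of ignorance (illustrative gloss).
% A candidate rule r is ranked by the expected utility of an agent who has an
% equal chance 1/n of ending up in each of the n social positions:
\[
  W(r) \;=\; \sum_{i=1}^{n} \frac{1}{n}\, u_i(r) \;=\; \frac{1}{n} \sum_{i=1}^{n} u_i(r).
\]
```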


News

EA Global: San Francisco 2022 is July 30th. Applications are open until July 14th.

DeepMind is hiring for a number of roles on their Alignment and Scalable Alignment teams. An EA Forum post has details on the teams’ work and how to apply.

Open Philanthropy is seeking proposals to quantify biological risks. Applications are due by June 5th. Read more and apply here.

The Future of Life Institute announced the 20 finalists in their Worldbuilding Contest, which sought creative writing on positive visions for a post-AGI world. 

The Legal Priorities Project announced a summer fellowship for students and recent graduates. Apply before June 17th. 

The ML Safety Scholars Program is a 9-week summer course for undergraduates to gain skills relevant to AI safety work. Applications are due May 31st.

Richard Yetter Chappell, who has for nearly two decades run a highly original, engaging, and wide-ranging philosophy blog, recently moved to Substack.

Rob Wiblin interviewed Will MacAskill for the 80,000 Hours Podcast. Topics discussed include Will’s forthcoming book; ‘longtermism’ as a label; mental health; and "judicious ambition" (see above).

Luca Righetti and Fin Moorhouse interviewed Jason Crawford on progress studies for Hear This Idea. The interview includes a section on the links between progress studies and longtermism.

Fin Moorhouse also published an abbreviated version of his profile on space governance, which we summarized in our March 2022 issue.

Rumtin Sepasspour released a database of academic articles, reports and government submissions from 2016–2021 relating to existential and global catastrophic risk, categorised by policy relevance and risk category.

Will Bradshaw, Anjali Gopal and Michael McLaren announced the launch of the Nucleic Acid Observatory, an organization focused on protecting the world from catastrophic biothreats by detecting novel agents spreading in the human population or environment.

The Global Priorities Institute announced the 2022 Prizes in Global Priorities Research. The top prize was awarded to Jeffrey Sanford Russell’s paper, On two arguments for fanaticism.


Conversation with Ben Snodin

Ben Snodin is a Senior Researcher at Rethink Priorities. His current focus is on building a nanotechnology strategy research field within EA and on ways to support mid-career people interested in moving into EA work. He was previously a Senior Research Scholar at the Future of Humanity Institute (FHI), and worked in finance as a quantitative analyst. He has a PhD in DNA nanotechnology from the University of Oxford.

Future Matters: You’ve spent some time working on nanotechnology strategy, and recently published a report on whether it’s a promising area of focus for longtermists. Could you tell us a bit about what nanotechnology strategy entails?

Ben Snodin: I guess the tagline is — working on making the development of advanced nanotechnology go well for the world. It’s kind of like AI safety, but for nanotechnology. Except maybe with a bit less emphasis on risk, since there are many good possible outcomes from advanced nanotechnology as well that could be worth thinking about; so it doesn’t have to be totally focused on risk.

Future Matters: And how did you start working in this area?

Ben Snodin: I joined the Research Scholars Programme at FHI two years ago, and my tentative initial plan was to do global priorities research. Then I had a careers think and this nanotech strategy thing was there as an extra option, because other people at FHI had been talking about it. And I realised I probably had a good comparative advantage there, partly because of my background and partly through being at FHI where lots of people were talking about it, especially at that time. And it’s pretty unexplored by EA people. I guess the first things I did were talking to the other people on the Research Scholars Programme who’d been thinking about it a bit, and talking to Eric Drexler at FHI who kind of came up with the idea that we could one day have this kind of technology, and was the first one to say we should think about this.

Future Matters: Could you outline what exactly people are talking about when they talk about ‘nanotechnology’ and ‘atomically-precise manufacturing’?

Ben Snodin: It’s still kind of vague, but the basic idea is that you have nano-scale machines doing lego with atoms and small molecules: joining together small molecules to make some structure. We kind of have examples of this from biology where nanoscale machines called ribosomes make proteins by joining amino acids together. In what I call the ‘advanced nanotechnology’ version, these machines need to be very stiff and high-performance in some sense, in a way that biological things are not. And then ‘atomically-precise manufacturing’ is a bit narrower than the thing that I call ‘advanced nanotechnology’. It’s more prescriptive: the machinery and products need to be mostly atomically-precise, for example; there’s this particular idea about doing hierarchical assembly as well, where the products of the smallest machines are used as building blocks for slightly bigger machines, and so on.

Future Matters: In Nick Bostrom’s 2002 paper introducing the concept of ‘existential risk’, nanotechnology gets more discussion than AI or biotechnology. In contemporary discussions, it doesn’t get nearly as much attention — e.g. The Precipice has just one page on nanotech. What do you think explains this shift in focus?

Ben Snodin: Interesting that you mention The Precipice. I circulated something on nanotech stuff in FHI about a year ago, and Toby Ord was like ‘it’s cool that you’re looking into this; I didn’t have much on it in The Precipice because the field is still quite speculative.’ I guess that in 2002 it was in the public consciousness a bit more; and there was lots of funding going into it and progress being announced. Obviously, I wasn’t there at the time. But the amount of progress [since then] has been very slow. Progress has been very slow and AI progress has been very fast, or seems to have been relatively surprisingly fast. So this is one story, and I feel like this has got to be a big part of the story. I don’t know if it’s the whole story, or if there’s also historical accidents, where a few key early people got more interested in AI. At the same time, I don’t think we should have as many people working on this stuff as AI. But it shouldn’t be zero.

Future Matters: In the popular imagination, worries about nanotech have centred around so-called ‘grey goo’ scenarios, in which self-replicating nanobots consume the biosphere and cause some sort of ecological catastrophe. Reading your report and more contemporary stuff, this doesn’t seem to be a particular focus — can you elaborate a bit on why that is?

Ben Snodin: So I think Eric Drexler wrote about ‘grey goo’ in Engines of Creation, which was his big thing saying: this technology could happen and let’s worry about it. I guess Eric’s definitely moved away from that. I feel like, to get grey goo to happen, you need really, really advanced technology; it seems like one of the last things you’d be able to do if you had this technology. Like, you’d start off with a not-great version which would still be really impactful; and then the grey goo just seems really, really hard. It seems way harder to make an autonomous, fully self-sufficient, able to replicate, nanoscale thing than to make a better computer than we can now, for example. And then also it seems hard to do it by accident, and it’s like — why would you want to make that? I mean, I think I mentioned it in the report because I think it’s possible; maybe someone would want to make that some time because there are crazy people. You know, we decided making nuclear weapons was a good idea. But yeah, it doesn’t seem that likely that people would make it. So to me it’s not the main thing I’d be worrying about. 

Future Matters: We don’t know much about what exactly this advanced nanotechnology will look like. When we’re trying to think about the arrival of APM, is it fair to just think of it as a dramatic improvement in our general manufacturing capabilities?

Ben Snodin: Yeah — I think the framing of ‘very powerful, very general manufacturing technology’ is probably the main one I rely on to think about the effects. You can add a bit more nuance because the core technology is nanoscale machines joining small molecules. For example, probably the first products are nanoscale things or things where you don’t need much material. But yeah, I think you can mostly think of this as a manufacturing technology, which is not at all obvious from the name.

Future Matters: We definitely didn’t find that obvious either. So then, when thinking about the effects of advanced nanotechnology, how different is this from just thinking about the effects of dramatic technical progress? 

Ben Snodin: I think I would add a bit more nuance, but this isn’t a terrible first approximation. It’s quite a similar question. Maybe it’s a similar process to try and answer that. Nuance you can add is obviously like — maybe we have particular reasons to think we get this particular thing a lot sooner than if you were just, e.g., forecasting economic growth and projecting how manufacturing should evolve. And then there are particular enabling technologies we’d have in mind for progress on advanced nanotechnology, and for advanced nanotechnology we probably expect to get certain products or capabilities earlier than others. There are things like that where having the particular technology in mind changes the story a bit. 

Future Matters: Advanced nanotechnology would provide enormous benefits, and you point to ways in which it could plausibly reduce existential risk, alongside ways it could pose a risk itself. How should we think in a nuanced way about the balance of good and bad effects?

Ben Snodin: It’s pretty hard. If I had to guess, I’d prefer it not to happen sooner. So I think we want to avoid speeding it up; because it’s easier to speed up than slow down, and because we have the option to wait. As for whether it would have good or bad effects on balance, if it arrived in 2025: I’d guess overall it would be bad but it’s very unclear. I guess people would say like — maybe AI is sort of ‘do or die’ and if we survive AI then we’re good. So why add this extra risk by having this crazy thing happen before AI; better to have it after AI and then the AI can ensure it goes well or something.

Future Matters: Thanks Ben!


For general assistance, we thank Leonardo Picón. 

We’re offering a cash bounty of between $5 and $50 (depending on how significant we judge it to be) if you inform us of a substantial error in Future Matters.

  1. ^
  2. ^

    Readers interested in the opposing view might enjoy Nick Bostrom’s Fable of the Dragon-Tyrant.

  3. ^

    Also in biosecurity: US scientists call for stricter rules on risky gain-of-function research in Nature.
