Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

In previous posts, I have:

  • Laid out the view that in general, further economic development and general human empowerment are likely to be substantially net positive, and are likely to lead to improvement on many dimensions in unexpected ways.
  • Listed possible global catastrophic risks that provide a potential counterpoint to this view, while also noting “global upside possibilities” in which progress could lead to a future that is far brighter than the present.

This post attempts to lay out my reasons for thinking that speeding the pace of global development and empowerment should be thought of as increasing humanity’s odds of an extremely bright future, relative to its odds of a future that is worse than the present. Note that:

  • I focus here on slightly to moderately speeding or slowing the pace of global development and empowerment relative to what it is today; this takes for granted that we can expect to see substantial development and empowerment in our future, and simply asks whether it is desirable that this development/empowerment happen more quickly or more slowly.
  • I focus on the odds of an extremely bright future relative to the odds of a future that is worse than the present. This means that I’m not only considering the contribution of empowerment and development to catastrophic risk; I’m also considering their contribution to “global upside possibilities.”

1. Some catastrophic risks seem clearly reduced, and not exacerbated, by technological/economic progress. These include “non-anthropogenic” risks, such as asteroids, supervolcanoes, and non-engineered pandemics. Development may give us better tools for anticipating and responding to these risks, and is unlikely to make them worse. In addition, risks #4 and #5 from the previous post on this topic - slowing growth due to shortage of a particular resource, or a slowdown in innovation - seem clearly mitigated by a faster pace of development.

2. Even for the catastrophic risks that seem exacerbated by development, I believe that faster development is likely safer than slower development (or, at worst, the net effect is highly ambiguous). This belief is based on the previously articulated concept of “global upside possibilities” - the belief that sufficient development may make the world not only better, but less at risk for major disruption by global catastrophe. If one accepts this view, it follows that faster overall development would mean less time between (a) the emergence of a given danger and (b) other developments that dramatically reduce risks. For example, faster development may bring the day closer when a highly dangerous synthetic pandemic can be designed, but it will also bring the day closer when we have the technologies and resources to manage such a risk (as well as potentially speeding the improvement of decision-making abilities and mental health worldwide, improving the capabilities of those who would mitigate such a risk and reducing the number of people who would contribute to it). Likewise, faster development may lead to higher carbon emissions, but is also likely to lead to better progress on alternative energy sources, more resources for adaptation mechanisms (much of the impact of climate change depends on these resources), and generally an environment more favorable to investing in climate change prevention.

There are certainly limitations to this reasoning. For one thing, it addresses “general” economic/technological development; the point remains that empowering people and developing technologies that are particularly likely to exacerbate risks can increase net risk, and that for any given risk there are particular kinds of growth that are more and less problematic in terms of that risk. (For example, the ideal scenario for dealing with climate change is one in which we see strong growth but also reduce carbon emissions.)

In addition, if a particular risk has been clearly identified before it becomes technologically possible, and there is a promising plan for averting it, it could be safer to experience slower development while that plan is executed. However, I know of no compelling examples of such dynamics today. (And in general, it is likely to be much easier to design a plan for responding to a risk when the risk is real and concrete rather than hypothetical.)

3. I believe that a large proportion of the risk of global catastrophe comes from the category of “risks that remain unarticulated and unimagined.” I don’t believe the list we made previously - or any list that can be constructed with today’s available information - is close to comprehensive: I expect that many of the most threatening risks are simply outside what we are able to anticipate today.

I would guess that some such risks become nearer as economic/technological development progresses, while some do not. But in all cases, I believe that economic/technological development is likely to improve our resources for anticipating, preventing and adapting to global catastrophes, and that for the reasons articulated above, faster development is more likely to reduce the lag between the emergence of risks and responses to them (including “global upside possibilities” that dramatically reduce risks).

4. A key part of my view is the belief that there are few outstanding cases in which it is clear that very particular actions need to be taken to avert particular risks. If there were a more compelling set of cases in which the right course of action were known, I would be more likely to believe that “slowing development until the right course of action can play out reduces risks, and generically speeding development increases them.” But as it is, I don’t see such clear-cut cases. The cases in which the necessary actions are clearest to me are asteroid risk (which I think is a clear-cut case in which development reduces risk) and climate change (which, as discussed above, I see as highly ambiguous with regard to whether faster development is desirable). Thus, I don’t see a strong case for safety benefits to slower development.

I remain highly open to the possibility that particular risks represent excellent giving opportunities, and that focusing on them may do more good than simply focusing on increasing development and empowerment. But I am not aware of what I consider a strong case for believing that development in general increases the odds of a badly disrupted future relative to an extremely bright one, and I believe there are strong reasons to believe that development improves our prospects on net.
