
Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, “The Returns to Science in the Presence of Technological Risks.” The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income.

The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a “time of perils” commences, wherein new scientific capabilities can be abused and lead to an increase in human mortality (possibly even human extinction). In this modeling framework, we can ask if we would like to have an extra year of science, with all the benefits it brings, or an extra year’s delay to the onset of this time of perils. Delay is good in this model, because there is some chance we won’t end up having to go through the time of perils at all.

I rely on historical trends to estimate the plausible benefits to science. To calibrate the risks, I use various forecasts made in the Existential Risk Persuasion tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks.

What’s the upshot? From the report’s executive summary:

A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science. For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report’s model, the benefits of science will outweigh the costs. I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them).

On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as whether we think the forecasts of superforecasters or domain experts in the existential risk persuasion tournament are preferred. These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates. The domain expert forecasts are high enough, for example, that if we think the future is “worth” more than 400 years of current social welfare, in one version of my model we would not want to accelerate science, because the health and income benefits would be outweighed by the increases in the remote but extremely bad possibility that new technology leads to the end of human civilization. However, if we accept the much lower forecasts of extinction risks from the superforecasters, then we would need to put very very high values on the long-run future of humanity to be averse to risking it.

Throughout the report I try to neutrally cover different sets of assumptions, but the report’s closing section details my personal views on how we should think about all this, and I thought I would end the post with those views (the following are my views, not necessarily Open Philanthropy’s).

My Take

I end up thinking that better/faster science is very unlikely to be bad on net. As explained in the final section of the report, this is mostly on the back of three rationales. First, for a few reasons I think lower estimates of existential risk from new biotechnology are probably closer to the mark than more pessimistic ones. Second, I think it’s plausible that dangerous biotech capabilities will be unlocked at some point in the future regardless of what happens to our scientific institutions (for example because they have already been discovered or because advances in AI from outside mainstream scientific institutions will enable them). Third, I think there are reasonable chances that better/faster science will reduce risks from new biotechnology in the long run, by discovering effective countermeasures faster. 

In my preferred model, investing in science has a social impact of 220x, as measured in Open Philanthropy’s framework. In other terms, investing a dollar in science has the same impact on aggregate utility as giving a dollar each to 220 different people earning $50,000/yr. With science, this benefit is realized by increasing a much larger set of people’s incomes by a very small but persistent amount, potentially for generations to come.

That said, while I think it is very unlikely that science is bad on net, I do not think it is so unlikely that these concerns can be dismissed. Moreover, even if the link between better/faster science and increased peril is weak and uncertain, the risks from increased peril are large enough to warrant their own independent concern. My preferred policy stance, in light of this, is to separately and in parallel pursue reforms that accelerate science and reforms that reduce risks from new technologies, without worrying too much about their interaction (with some likely rare exceptions).

It’s a big report (74 pages in the main report, 119 pages with appendices) and there’s a lot more in it that might be of interest to some people. For a more detailed synopsis, check out the executive summary, the table of contents, and the summary at the beginning of section 11. For some intuition about the quantitative magnitudes the model arrives at, section 3.0 has a useful parable. You can read the whole thing on arxiv.






This report seems to assume exponential discount rates for the future when modeling extinction risk. This seems to lead to extreme and seemingly immoral conclusions when applied to decisions that previous generations of humans faced. 

I think exponential discount rates can make sense in short-term economic modeling, and can be a proxy for various forms of hard-to-model uncertainty and the death of individual participants in an economic system, but applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermine the results of any analysis about the long-run future). 

The report says: 

However, for this equation to equal 432W, we would require merely that ρ = 0.99526. In other words, we would need to discount utility flows like our own at 0.47% per year, to value such a future at 432 population years. This is higher than Davidson (2022), though still lower than the lowest rate recommended in Circular A-4. It suggests conservative, but not unheard of, valuations of the distant future would be necessary to prefer pausing science, if extinction imperiled our existence at rates implied by domain expert estimates.

At this discount rate, you would value a civilization that lives 10,000 years in the future, which is something that past humans' decisions did influence, at less than a billion-billionth of the value of their civilization at the time. By this logic, ancestral humans should have taken a trade where they got a slightly better meal, or a single person lived a single additional second (or anything else that improved the lives of a single person by more than a billionth of a percent), even at the cost of present civilization completely failing to come into existence.
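To make the magnitude concrete, here is a minimal sketch (my own arithmetic, not from the report) of the weight a 0.47% annual discount rate puts on a civilization 10,000 years out:

```python
# Weight placed on utility t years in the future under a constant
# annual discount factor rho (exponential discounting).
rho = 0.99526          # i.e. roughly a 0.47% annual discount rate
t = 10_000             # years into the future

weight = rho ** t
print(f"{weight:.3e}")  # roughly 2.3e-21, i.e. less than a billion-billionth
```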

This seems like a pretty strong reductio ad absurdum, so I have trouble taking the recommendations of the report seriously. From an extinction risk perspective, it seems that if you buy exponential discount rates as aggressive as 1%, you are basically committed to not caring about future humans in any substantial way. It also seems to me that various thought experiments (like the above ancestral human deciding between the minor annoyance of stepping over a stone and the destruction of our entire present civilization) demonstrate that such discount rates almost inevitably recommend actions that conflict strongly with common-sense notions of treating future generations with respect.

I think many economists justify discount rates for more pragmatic reasons, including uncertainty over the future. Your hypothetical in which a civilization 10,000 years from now is given extremely little weight isn't necessarily a reductio in my opinion, since we know very little about what the world will be like in 10,000 years, or how our actions now could predictably change anything about the world 10,000 years from now. It is difficult to forecast even 10 years into the future. Forecasting 10,000 years into the future is in some sense "1000 times harder" than the 10 year forecast.

An exponential discount rate is simply one way of modeling "epistemic fog", such that things further from us in time are continuously more opaque and harder to see from our perspective.

Do economists actually use discount rates to account for uncertainty? My understanding was that we are discounting expected utilities, so uncertainty should be accounted for in those expected utilities themselves.

Maybe it’s easier to account for uncertainty via an increasing discount rate, but an exponential discount rate seems inappropriate. For starters I would think our degree of uncertainty would moderate over time (e.g. we may be a lot more uncertain about effects ten years from now than today, but I doubt we are much more uncertain about effects 1,000,010 years from now compared to 1,000,000 or even 500,000 years from now).

If you think that the risk of extinction in any year is a constant r, then the probability of surviving to year t is (1 − r)^t, so that makes exponential discounting the only principled discount rate. If you think the risk of extinction is time-varying, then you should do something else. I imagine that a hyperbolic discount rate or something else would be fine, but I don't think it would change the results very much (you would just have another small number as the break-even discount rate).
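A tiny illustration of that identity (my example, with a made-up risk level): the survival probability under a constant hazard is formally the same thing as an exponential discount factor.

```python
# With a constant annual extinction probability r, the chance of
# surviving to year t is (1 - r)**t -- formally identical to an
# exponential discount factor rho**t with rho = 1 - r.
r = 0.00474                 # illustrative annual extinction risk
rho = 1 - r                 # the implied discount factor (~0.99526)

for t in (10, 100, 1000):
    survival = (1 - r) ** t
    discount = rho ** t
    assert abs(survival - discount) < 1e-15   # same number, two readings
    print(t, survival)
```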

I think there’s a non-negligible chance we survive until the heat death of the sun or whatever, maybe even after, which is not well-modelled by any of this.

The reason it seems reasonable to view the future 1,000,010 years out as almost exactly as uncertain as 1,000,000 years out is mostly myopia. To analogize: is the ground 1,000 miles west of me more or less uneven than the ground 10 miles west of me? Maybe, maybe not - but I have a better idea of what the near surroundings are like, so they seem more known. For the long-term future, we don't have much confidence in our projections of either a million or a million and ten years, but it seems hard to understand why all the relevant uncertainties would simply go away, other than our simply not being able to resolve them at that distance. (Unless we're extinct, in which case, yeah.)

I agree that in short-term contexts a discount rate can be a reasonable pragmatic choice to model things like epistemic uncertainty, but this seems to somewhat obviously fall apart on the scale of tens of thousands of years. If you introduce space travel and uploaded minds and a world where even traveling between different parts of your civilization might take hundreds of years, you of course have much better bounds on how your actions might influence the future.

I think something like a decaying exponential wouldn't seem crazy to me, where you do something like 1% for the next few years, and then 0.1% for the next few hundred years, and then 0.01% for the next few thousand years, etc. But anything that is assumed to stay exponential when modeling the distant future seems like it doesn't survive sanity-checks.

Edit: To clarify more: This bites particularly much when dealing with extinction risks. The whole point of talking about extinction is that we have an event which we are very confident will have very long lasting effects on the degree to which our values are fulfilled. If humanity goes extinct, it seems like we can be reasonably confident (though not totally confident) that this will imply a large reduction in human welfare billions of years into the future (since there are no humans around anymore). So especially in the context of extinction risk, an exponential discount rate seems inappropriate to model the relevant epistemic uncertainty.

Hyperbolic discounting, despite its reputation for being super-short-term and irrational, is actually better in this context, and doesn't run into the same absurd "value an extra meal in 10,000 years more than a thriving civilization in 20,000 years" problems of exponential discounting.

Here is a nice blog post arguing that hyperbolic discounting is actually more rational than exponential: hyperbolic discounting is what you get when you have uncertainty over what the correct discount rate should be.
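That argument can be checked numerically (my sketch, assuming an exponential prior over the unknown rate): averaging exponential discount factors exp(-r·t) over an uncertain rate r yields exactly a hyperbolic curve 1/(1 + m·t).

```python
import math
import random

random.seed(0)
m = 0.01                      # mean of the uncertain annual discount rate
rates = [random.expovariate(1 / m) for _ in range(200_000)]

def avg_discount(t):
    """Expected discount factor E[exp(-r*t)] under rate uncertainty."""
    return sum(math.exp(-r * t) for r in rates) / len(rates)

# With an exponential prior over r, the average is exactly hyperbolic:
# E[exp(-r*t)] = 1 / (1 + m*t), which decays far slower than exp(-m*t).
for t in (10, 100, 1000):
    print(t, round(avg_discount(t), 4), round(1 / (1 + m * t), 4))
```

Intuitively, at long horizons the average is dominated by the scenarios in which the true rate happened to be low, which is why the far future is not discounted into oblivion.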

Perhaps worth noting that very long term discounting is even more obviously wrong because of light-speed limits and the mass available to us that limits long term available wealth - at which point discounting should be based on polynomial growth (cubic) rather than exponential growth. And around 100,000-200,000 years, it gets far worse, once we've saturated the Milky Way.

Commenting more, this report also says: 

Using the Superforecaster estimates, we need the value of all future utility outside the current epistemic regime to be equivalent to many tens of thousands of years at current consumption and population levels, specifically 66,500-178,000 population years. With domain experts we obtain much lower estimates. Given implied extinction risks, we would prefer to pause science if future utility is roughly equivalent to 400-1000 years of current population-years utility.

I don't really know why the author thinks that 100,000x is a difficult threshold to hit for the value of future civilization. My guess is this must be a result of the exponential discount rate, but assuming any kind of space colonization (which, my guess is, expert estimates of the kind the author puts a lot of weight on would put at least in the tens of percent likely within the next few thousand years), it seems almost inevitable that the human population will grow to at least 100x-10,000x its present size. You only need to believe in 10-100 years of that kind of future to reach the higher thresholds of valuing the future at ~100,000x current population levels.

And of course in expectation, averaging across many futures and taking into account the heavy right tail, as many thinkers have written about, there might very well be more than 10^30 humans alive, dominating many of the expected value estimates here and easily crushing the threshold of 100,000x present population value.
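The arithmetic here is simple enough to check directly (my sketch, using the report's thresholds and the population multipliers above):

```python
# Break-even thresholds from the report, in "population-years" of
# current utility.
superforecaster_threshold = 178_000   # upper end of 66,500-178,000
domain_expert_threshold = 1_000       # upper end of 400-1,000

# Space-colonization scenario: population (and so annual
# population-years of utility) grows to 10,000x current levels.
population_multiplier = 10_000
years_needed = superforecaster_threshold / population_multiplier
print(years_needed)   # under 20 years of that future meets the threshold
```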

To be clear, I am not particularly in favor of halting science, but I find the reasoning in this report not very compelling for that conclusion.

To embrace this as a conclusion, you also need to fairly strongly buy total utilitarianism across the future light cone, as opposed to any understanding of the future, and the present, on which humanity as a species doesn't change much in value just because there are more people. (Not that I think either view is obviously wrong, but total utilitarianism is so generally assumed in EA that it often goes unnoticed, even though it's very much not a widely shared view among philosophers or the public.)

Matthew is right that uncertainty over the future is the main justification for discount rates, but another principled reason to discount the future is that future humans will be significantly richer and better off than we are, so if marginal utility is diminishing, then resources are better allocated to us than to them. This classically gives you a discount rate of r = δ + ηg, where r is the applied discount rate, δ is a rate of pure time preference that you argue should be zero, g is the growth rate of income, and η determines how steeply marginal utility declines with income. So even if you have no ethical discount rate (δ = 0), you would still end up with r = ηg. Most discount rates are loaded on the growth adjustment (ηg) and not the ethical discount rate (δ) so I don't think longtermism really bites against having a discount rate. [EDIT: this is wrong, see Jack’s comment]
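The Ramsey formula itself is just arithmetic (a sketch with illustrative parameter values, not anyone's endorsed calibration):

```python
# Ramsey discount rate: r = delta + eta * g
delta = 0.0    # pure time preference (zero, per the ethical argument)
eta = 1.5      # elasticity of marginal utility of consumption
g = 0.02       # annual growth rate of per-capita income

r = delta + eta * g
print(f"{r:.1%}")   # a 3.0% rate even with zero pure time preference
```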

Also, am I missing something, or would a zero discount rate make this analysis impossible? The future utility with and without science is "infinite" (the sum of utilities diverges unless you have a discount rate) so how can you work without a discount rate?

Matthew is right that uncertainty over the future is the main justification for discount rates

I don't think this is true if we're talking about Ramsey discounting. Discounting for public policy: A survey and Ramsey and Intergenerational Welfare Economics don't seem to indicate this. 

Also, am I missing something, or would a zero discount rate make this analysis impossible?

I don't think anyone is suggesting a zero discount rate? Worth noting, though, that the former paper I linked discusses a generally accepted argument that the discount rate should fall over time to its lowest possible value (Weitzman’s argument).

Most discount rates are loaded on the growth adjustment (ηg) and not the ethical discount rate (δ) so I don't think longtermism really bites against having a discount rate.

The growth adjustment term is only relevant if we're talking about increasing the wealth of future people, not when we're talking about saving them from extinction. To quote Toby Ord in the Precipice:

"The entire justification of the growth adjustment term is to adjust for marginal benefits that are worth less to you when you are richer (such as money or things money can easily buy), but that is inapplicable here—if anything, the richer people might be, the more they would benefit from avoiding ruin or oblivion. Put another way, the ηg term is applicable only when discounting monetary benefits, but here we are considering discounting wellbeing (or utility) itself. So the ηg term should be treated as zero, leaving us with a social discount rate equal to δ."

Yes, Ramsey discounting focuses on the higher incomes of people in the future, which is the part I focused on. I probably shouldn't have said "main", but I meant that uncertainty over the future seems like the first-order concern to me (and Ramsey ignores it).

Habryka's comment:

applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermine the results of any analysis about the long-run future).

seems to be arguing for a zero discount rate.

Good point that growth-adjusted discounting doesn’t apply here, my main claim was incorrect.

Long run growth rates cannot be exponential. This is easy to prove. Even mild steady exponential growth rates would quickly exhaust all available matter and energy in the universe within a few million years (see Holden's post "This can't go on" for more details).

So a model that tries to adjust for the marginal utility of resources should also quickly switch to something other than assumed exponential growth within a few thousand years.
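A quick sanity check of this claim (my own numbers, with deliberately generous physical bounds):

```python
import math

growth = 0.02                  # mild annual growth rate of the economy
current_scale = 1e14           # rough world output today, in dollars
atoms_in_universe = 1e80       # generous bound on usable resources

# Years until output, growing exponentially, exceeds one unit of value
# per atom in the observable universe.
years = math.log(atoms_in_universe / current_scale) / math.log(1 + growth)
print(round(years))            # well under ten thousand years
```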

Separately, the expected lifetime of the universe is finite, as is the space we can affect, so I don't see why you need discount rates (see a bunch of Bostrom's work for how much life the energy in the reachable universe can support).

But even if things were infinite, then the right response isn't to discount the future completely within a few thousand years just because we don't know how to deal with infinite ethics. The choice of exponential discount rates in time does not strike me as very principled in the face of the ethical problems we would be facing in that case.

At this discount rate, you would value a civilization that lives 10,000 years in the future, which is a real choice that past humans faced, at less than a billion-billionth of the value of their civilization at the time.

What choice are you thinking of?

I meant it in the sense that humans were alive 10,000 years ago, and could have caused the extinction of humanity then (and in that decision, by the logic of the OP, they would have assigned essentially zero weight to us existing).

I'm not sure that choice is a real one humanity actually faced though. It seems unlikely that humans alive 10,000 years ago actually had the capability to commit omnicide, still less the ability to avert future omnicide for the cost of lunch. It's not a strong reductio ad absurdum because it implies a level of epistemic certainty that didn't and doesn't exist.

The closest ancient-world analogue is humans presented with entirely false choices to sacrifice their lunch to long-forgotten deities to preserve the future of humanity. Factoring in the possible existence of billions of humans 10,000 years into the future wouldn't have allowed them to make decisions that better ensured our survival, so I have absolutely no qualms with those who discounted the value of our survival low enough to decline to proffer their lunch.

Even if humanity 10,000 years ago had been acting on good information (perhaps a time traveller from this century warned them that cultivating grasses would set them on a path towards a civilization capable of omnicide) rather than avoiding a Pascal's mugging, it's far from clear that a humanity deciding to go hungry to prevent the evils of civilization from harming billions of future humans would [i] not have ended up discovering the scientific method and founding civilizations capable of splitting atoms and engineering pathogens a bit later anyway, or [ii] have ended up with as many happy humans if their cultural taboos against civilization had somehow persisted. So I'm unconvinced of a moral imperative to change course even with that foreknowledge. We don't have comparable foreknowledge of any course the next 10,000 years could take, and our knowledge of actual and potential existential threats gives us more reason to discount the potential big, expansive future even if we act now, especially if the proposed risk mitigation is as untenable and unsustainable as "end science".

If humanity ever reached the stage where we could meaningfully trade inconsequential things for cataclysms that only affect people in the far future [with high certainty], that might be time to revisit the discount rate, but it's supposed to reflect our current epistemic uncertainty.

Thanks a bunch for this report! I haven't had the time to read it very carefully, but I've already really enjoyed it and am curating the post. 

I'm also sharing some questions I have, my highlights, and my rough understanding of the basic model setup (pulled from my notes as I was skimming the report). 

A couple of questions / follow-up discussions

  1. I'm curious about why you chose to focus specifically on biological risks. 
    1. I expect that it's usually good to narrow the scope of reports like this and you do outline the scope at the beginning,[1] but I'd be interested in hearing more about why you didn't, for instance, focus on risks from AI. (I guess  
    2. (For context, in the XPT, risks from AI are believed to be higher than risks from engineered pathogens.)
  2. I'd be interested in a follow-up discussion[2] on this: "My preferred policy stance ... is to separately and in parallel pursue reforms that accelerate science and reforms that reduce risks from new technologies, without worrying too much about their interaction (with some likely rare exceptions)." 
    1. In particular, I mostly like the proposal, but have some worries. It makes sense to me that it's generally good to pursue different goals separately.[3] But sometimes[4] it turns out that at-first-seemingly-unrelated side-considerations (predictably) swamp the original consideration. Your report is an update against this being the case for scientific progress and biological risk, but I haven't tried to estimate what your models (and XPT forecasts, I suppose) would predict for AI risk. 
    2. I also have the intuition that there's some kind of ~collaborativeness consideration like: if you have goals A and B (and, unlike in this report, you don't have an agreed-on exchange rate between them), then you should decide to pursue A and B separately only if the difference in outcomes from A's perspective between B-optimized actions and A-and-B-compromise actions is comparable to or smaller than the outcomes of A-optimized actions. 
      1. To use an example: if I want to have a career that involves traveling while also minimizing my contribution to CO2 levels or something, then I should probably just fly and donate to clean tech or verified carbon offsets or something, because even from the POV of CO2 levels that's better. But if it turns out that flying does more damage than the change I can make by actively improving CO2 levels, then maybe I should find a way to travel less or err on the side of trains or the like. (I think you can take this too far, but maybe there's a reasonable middle ground here.)
      2. More specifically I'm wondering if we can estimate the impact of AI/biosafety interventions, compared to possible harms. 
  3. I'm somewhat uncertain about why you model the (biological) time of perils (ToP) the way you do (same with the impact of "pausing science" on ToP). 
    1. I was initially most confused about why only the start date of the ToP moves back because of a science-pause, and assumed it was either because you assumed that ToP was indefinite or because it would end for a reason not very affected by the rate of scientific progress. Based on the discussion in one of the sections, I think that's not quite right? (You also explore the possibility of ToP contracting due to safety-boosting scientific advances, which also seems to contradict my earlier interpretation.) 
    2. This also led to me wondering what would happen if the risk grew over the course of ToP (i.e. a risk d that grows over time, as opposed to jumping from 0 to d; e.g. there's some chance of a new dangerous biotechnology being unlocked at any point once ToP starts), and how that would affect the results? (Maybe you do something like this somewhere!) 

Some things that were highlights to me

  1. In the comparison of superforecaster and expert answers in XPT, the "Correlated pessimism" consideration was particularly interesting and more compelling than I expected it to be before I read it! 
    1. "...general pessimism and optimism among groups in ways that imply biases. [....] We also see a high degree of correlation in beliefs about catastrophic risk even for categories that seem likely to be uncorrelated. For example, [for the probability that non-anthropogenic causes (e.g. asteroids) cause catastrophes] the third of respondents most concerned about AI risk [...] foresaw a 0.14% chance of such a catastrophe by 2100. The third of respondents least concerned foresaw a 0.01% chance, more than an order of magnitude less. [...]" Then there's discussion of selection effects — pessimists about catastrophic risks might become experts in the field — and an argument that overly optimistic superforecasters would get corrective feedback that too-pessimistic risk experts might lack.
  2. Also in the XPT discussions (S 4.1), I liked the extrapolation for the current/future ~engineered pandemic peril rates and for a date for the onset of the time of perils.[5]
  3. I thought the "leveling the playing field" consideration was useful and something I probably haven't considered enough, particularly in bio: "... the faster is scientific progress, the greater is the set of defensive capabilities, relative to offensive ones. Conversely, a slowdown in the rate of scientific progress (which is arguably underway!) reduces safety by “leveling the playing field” between large and small organizations." (Related: multipolar AI scenarios)
  4. Factoids/estimates I thought were interesting independent of the rest of the report: 
    1. 56% of life expectancy increases are attributed to science effects in this paper (see S4.8), which, together with average increases in life expectancy, means that a year of science increases our lifespans by around 0.261% (i.e. multiply by ~1.0026 every year). (For the impacts of science on utility via increased incomes, the paper estimates a per-year increase of 0.125%.)
    2. "In 2020, for example, roughly 1 in 20-25 papers published was related to Covid-19 in some way (from essentially none in 2019)."
    3. You conclude that: "Roughly half the value comes from technologies that make these people just a tiny bit richer, in every year, for a long time to come. The other half comes from technologies that make people just a tiny bit healthier, again in every year, for a long time to come." I don't know if I would have expected the model to predict such an even split.
    4. "Omberg and Tabarrok (2022) [examine] the efficacy of different methods of preparing for a biocatastrophe; specifically, the covid-19 pandemic. The study takes as its starting point the Global Health Security Index, which was completed in 2019, shortly before the onset of the pandemic. This index was designed by a large panel of experts to rate countries on their capacity to prevent and mitigate epidemics and pandemics. Omberg and Tabarrok examine how the index, and various sub-indices of it, are or are not correlated with various metrics of success in the covid-19 pandemic, mostly excess deaths per capita. The main conclusion is that almost none of the indices were correlated with covid-19 responses, whether those metrics related to disease prevention, detection, response, or the capacity of the health system."
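For intuition on how those small per-year rates compound (my arithmetic, not the report's):

```python
# Per-year effects of science, from the report's estimates.
health_rate = 0.00261   # annual lifespan increase attributable to science
income_rate = 0.00125   # annual income-utility increase

# Compounded over 50 years of science, these small rates add up to
# roughly 14% longer lives and 6% higher income-utility.
years = 50
health_gain = (1 + health_rate) ** years
income_gain = (1 + income_rate) ** years
print(round(health_gain, 3), round(income_gain, 3))
```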

My rough understanding of the basic setup of the main(?) model

(Please let me know if you see an error! I didn't check this carefully.)

  1. Broad notes:
    1. "Time of (biological) perils" is a distinct period, which starts when the annual probability of a biocatastrophe jumps.
    2. The primary benefits of science that are considered are income and health boosts (more income means more utility per person per year, better health means lower mortality rates). 
    3. I think the quality of science that happens or doesn't happen at different times is assumed to be ~always average.
  2. The "more sophisticated model" described in S3.1 compares total utility (across everyone on earth, from now until eternity) under two scenarios: "status quo" and a year-long science "pause."
  3. Future utility is discounted relative to present/near-future utility, mostly for epistemic reasons (the further out we're looking, the more uncertainty we have about the outcomes). This is modeled by assuming a constant annual probability that the world totally stops being predictable (it enters a "new epistemic regime"); utility past that point does not depend on whether we accelerate science or not, and can be set aside for the purpose of this comparison (see S3.2 and 4.2).[6]
  4. Here's how outcomes differ in the two scenarios, given that context:
    1. In the status quo scenario, current trends continue (this is the baseline).
    2. In the "pause science" scenario, science is "turned off" for a year, which has a delayed impact by:
      1. Delaying the start of the time of perils by a year (without affecting the end-date of the time of perils, if there is one)
      2. Slowing growth for the duration of a year, starting at time T
        1. I.e. after the pause, things continue normally for a while, slow down for a year, and then go back up once the year has passed. In the model, pausing science slows the decline in mortality for a year but doesn't affect birth rates. Given some assumptions, this means the mortality decline ends up permanently a year behind where it would otherwise be, so population growth rates are lower (forever, modulo the discount rate p) once the effects kick in.
        2. Note that T is also taken to be the time at which the time of perils would start by default. There's a section discussing what happens if the time of perils starts earlier or later than T; it argues that we should expect pausing science to look worse in either case.
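The setup above can be sketched as a toy calculation. To be clear, this is my own reconstruction of the comparison in S3.1, and every number in it (P, T, the growth and mortality rates) is made up for illustration, not taken from the report's calibration:

```python
# Toy version of the S3.1 comparison: total discounted utility under the
# status quo vs. a one-year science "pause". My own reconstruction with
# illustrative numbers, not the report's actual model or calibration.

P = 0.98   # annual discount factor (probability the epistemic regime persists)
T = 15     # year the time of perils starts; also when pause effects kick in
N = 1000   # truncation horizon; discounted terms are negligible long before this

def total_utility(pause):
    total, pop = 0.0, 1.0
    for t in range(N):
        # A pause delays the start of the time of perils by one year.
        perils_on = t >= (T + 1 if pause else T)
        extra_mortality = 0.0001 if perils_on else 0.0
        # A pause also costs one year of growth starting at time T,
        # leaving the population path permanently lower afterwards.
        growth = 0.0 if (pause and t == T) else 0.01
        pop *= (1 + growth - extra_mortality)
        # Per-capita utility is held fixed here, so utility tracks population.
        total += P**t * pop
    return total

status_quo, paused = total_utility(False), total_utility(True)
```

With these made-up numbers the one-year growth loss dwarfs the benefit of delaying the perils, so the status quo comes out ahead; cranking up the perils-era mortality (or adding extinction risk) can flip the sign, which is exactly the trade-off the report's calibration is meant to settle.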
  1. ^

    S 2.2. "Synthetic biology is not the only technology with the potential to destroy humanity - a short list could also include nuclear weapons, nanotechnology, and geoengineering. But synthetic biology appears to be the most salient at the moment. [...]"

  2. ^

    Although I might not be able to respond much/fast in the near future

  3. ^

    Most actions targeted at some goal A affect a separate goal B far less than an action chosen because it targets goal B would have affected B. (I think this effect is probably stronger if you filter for the "top 10% most effective actions targeted at these goals," assuming we believe there are huge differences in impact.) If you want to spend 100 resource units on goals A and B, you should probably just split the resources and target the two things separately instead of trying to find things that look fine for both A and B.

    (I think the "barbell strategy" is a related concept, although I haven't read much about it.)

  4. ^

    (for some reason the thing that comes to mind is this SSC post about marijuana legalization from 2014 — I haven't read it in forever but remember it striking a chord)

  5. ^

    Seems like around the year 2038, the superforecasters expect a doubling of annual mortality rates from engineered pandemics from 0.0021% to 0.0041% — around 1 COVID every 48 years — and a shift from ~0% to ~0.0002%/year extinction risk. The increases are assumed to persist (although there were only forecasts until 2100?).
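The "~1 COVID every 48 years" gloss checks out under one reading. My assumptions here, not the report's: a COVID-scale event means cumulative mortality of roughly 0.1% of world population (on the order of the ~7-8M officially recorded deaths out of ~8B people), measured against the 0.0021%/year baseline rate:

```python
# Back-of-envelope check of "~1 COVID every 48 years". Assumptions are mine:
# a COVID-scale event ~ 0.1% cumulative mortality (~8M of ~8B people).
covid_scale_mortality = 0.001      # deaths as a fraction of world population
annual_rate = 0.0021 / 100         # the 0.0021%/year baseline forecast
years_per_covid = covid_scale_mortality / annual_rate
print(round(years_per_covid, 1))   # 47.6, i.e. roughly one per 48 years
```

Using excess-death estimates instead of official counts would make a "COVID" two to three times larger, and hence rarer at the same annual rate, so the gloss is sensitive to that choice.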

  6. ^

    4.2 is a cool collection of different approaches to identifying a discount rate. Ultimately the author assumes p=0.98, which is on the slightly lower end and which he flags will put more weight on near-term events.

    I think p can also be understood to incorporate a kind of potential "washout" aspect of scientific progress today (if we don't discover what we would have in 2024, maybe we still mostly catch up in the next few years), although I haven't thought carefully about it. 
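To make the p = 0.98 choice concrete (my arithmetic, not a calculation from the report): the total discounted weight on the future is a geometric sum equal to 50 "effective years", and half of that weight accrues within the first ~34 years:

```python
import math

# What p = 0.98 implies for how much the far future counts (my arithmetic).
p = 0.98
total_weight = 1 / (1 - p)                    # sum of p^t over t = 0, 1, 2, ...
years_to_half = math.log(0.5) / math.log(p)   # t at which p^t = 0.5
print(round(total_weight, 2))                 # 50.0 "effective years"
print(round(years_to_half, 1))                # 34.3
```

Conveniently, ~34 years is both when the per-year weight p^t halves and when the cumulative weight reaches half its total, since the partial geometric sum hits total/2 exactly when p^t = 0.5.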

Nice research!

In my preferred model, investing in science has a social impact of 220x, as measured in Open Philanthropy’s framework.

How does this compare to other top causes that Open Phil funds? Is there a summary somewhere?

My impression is that it's worse. OP's GHW (non-GCR) resources historically used the 1,000x bar as the bar to clear, and the linked blog post implies that it's gone up over time.

Executive summary: The report develops a framework to weigh the historical benefits of science against potential future harms from misuse of advanced biotechnology. It concludes that faster science seems very unlikely to be bad overall based on lower risk estimates, the inevitability of such technologies emerging, and science's potential to reduce long-term risks.

Key points:

  1. The report tallies benefits of science against future costs like civilization-ending events enabled by technological advances. It calibrates risks using forecasts from the Existential Risk Persuasion tournament.
  2. For non-catastrophic harms, historic benefits outweigh forecasted risks. However, for civilization-ending events, the desirability of accelerating science depends on which risk estimates are used and the value placed on the long-term future.
  3. The author's view is that faster science is very unlikely to be net negative due to lower existential risk estimates being more credible, advanced biotech likely emerging regardless, and science's potential to reduce risks by enabling countermeasures.
  4. Though unlikely, risks warrant concern and justify pursuing both scientific acceleration and technology risk reduction in parallel rather than worrying about their interaction.
  5. Under the author's preferred assumptions, the model estimates investing in science has a social impact 220x that of direct cash transfers, realized by raising incomes slightly for many generations.
  6. While science acceleration seems desirable, the report argues existential risks themselves warrant separate concern and scrutiny.



This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Interesting! Did you try looking at animal welfare when calculating the overall utility of science? I expect this could substantially change your results, especially when including the many ways technological progress has been used to boost factory farming (e.g. genetic selection).

See this post on the topic: "Net global welfare may be negative and declining" https://forum.effectivealtruism.org/posts/HDFxQwMwPp275J87r/net-global-welfare-may-be-negative-and-declining-1
