I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.
The road trip metaphor
Let me set up a metaphor to frame the issue:
Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:
- XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
- PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary.
(See also @Max_Daniel's recent post)
Here are some things I don't really understand about the XR position (granted that I haven't read the literature on it extensively yet, but I have read a number of the foundational papers).
(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux)
How does XR weigh costs and benefits?
Is there any cost that is too high to pay, for any level of XR reduction? Are they willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. They seem to talk about any catastrophe less than full human extinction as, well, not that big a deal.
For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to do so, in itself, should be considered a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.
Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments I've seen conclude “so AI safety (or other specific x-risk) is still a worthy cause”—which I'm fine with. I don't see how you get to “so we shouldn't try to speed up technological progress.”
Does XR consider tech progress default-good or default-bad?
My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety.
When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).
What would moral/social progress actually look like?
This idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not sure at all that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere, and I haven't found it yet?
Without understanding this, it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR—although it's unclear how we could ever reduce it enough, because of (1).
What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?
Returning to the road trip metaphor: while PS and EA/XR debate the ideal balance of resources towards steering vs. acceleration, and which is more neglected, there are other passengers in the car. Many are yelling to just slow down, and some are even saying to turn around and go backwards. A few, full of revolutionary zeal, are trying to jump up and seize the steering wheel in order to accomplish this, while others are trying to sabotage the car itself. Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.
This seems like a problem to me, especially in the context of (3): I don't know how we make social progress, when this is what we have to work with. So a big part of progress studies is trying to just educate more people that the car is valuable and that forward is actually where we want to go. (But I don't think anyone in EA/XR sees it this way or is sympathetic to this line of reasoning, if only because I've never heard them discuss this faction of humanity at all or recognize it as a problem.)
Thank you all for your input here! I hope that understanding these issues better will help me finally answer @Benjamin_Todd's question, which I am long overdue on addressing.
** To be clear, you don't have to hold this view to be a longtermist or an EA, but I do think it is much more common among longtermist EAs than among the modal Progress Studies fan.
I'd guess the story might be a) 'XR primacy' (~~ that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements are likely good and others likely bad, the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) is uncertain.
Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction - you can cover much more ground in expectation if you make sure you're not headed into a crash first.
This typically (but not necessarily, cf.) implies longtermism. 'Global catastrophic risk', as a longtermist term of art, plausibly excludes the vast majority of things common sense would call 'global catastrophes'. E.g.:
My impression is that a 'century more poverty' probably isn't a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn't globally destabilising to humanity or human civilisation. Even more so if the matter is one of a somewhat-greater versus somewhat-lower rate in its elimination.
This makes its continued existence no less an outrage to the human condition. But, weighed against threats to humankind's entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation, given both compete for resources, whether or not there's any direct cross-purposes in activity), the currency of XR reduction has much greater value.
Per discussion, there are a variety of ways the story sketched above could be wrong:
I don't see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and envelope of mitigation to be substantial/non-pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, 1/thousands + (e.g.) is commonplace (and commonsensical) when stakes are high enough.
It's not clear how much of a strike it is against a view that Pascalian counter-examples are constructible from its own resources, even when the view wouldn't endorse them and lacks a crisp story of decision-theoretic arcana for why not. On its face, PS seems susceptible to the same (e.g. a PS-er's work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
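A rough sketch of the compounding arithmetic behind that parenthetical; the baseline growth rate, world GDP figure, and horizon are illustrative assumptions, not claims from the thread:

```python
# Back-of-envelope: how much does a tiny permanent boost to the growth
# rate compound to over centuries? All inputs are illustrative.
WORLD_GDP = 100e12       # ~$100 trillion/year (assumed)
BASE_GROWTH = 0.02       # 2%/year baseline (assumed)
BOOST = 1e-9             # 0.0000001% expressed as a fraction
YEARS = 300              # horizon (assumed)

def gdp_after(years: int, growth: float, gdp0: float = WORLD_GDP) -> float:
    """GDP after compounding a constant annual growth rate."""
    return gdp0 * (1 + growth) ** years

baseline = gdp_after(YEARS, BASE_GROWTH)
boosted = gdp_after(YEARS, BASE_GROWTH + BOOST)
extra = boosted - baseline  # extra annual GDP in the final year
print(f"extra annual GDP after {YEARS} years: ${extra:,.0f}")
```

Under these assumptions the difference in the final year comes out in the billions of dollars per year, which is the order of magnitude the parenthetical gestures at.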
Buying the technological progress index?
Granting the story sketched above, there's not a straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems to be a lot to temper the fairly unalloyed enthusiasm around technological progress that I take to be the typical attitude in PS-land.
Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I'd guess more of it is more substantive around the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less 'generalized techno-optimism'.
But I'd guess the majority of the action is around the 'modal XR account' of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. "Technocircumspection" seems a fairly sound corollary from this set of controversial conjuncts.
I wouldn't agree that this is a Pascal's Mugging. In fact, in a comment on the post you quote, Eliezer says:
I usually think of Pascal's Mugging as centrally about cases where you have a tiny probability of affecting the world in a huge way. In contrast, your example seems to be about trading off between uncertain large-sized effects and certain medium-sized effects. ("Medium" is only meant to be relative to "large", obviously both effects are huge on some absolute scale.)
Perhaps your point is that XR can only make a tiny, tiny dent in the probability of extinction; I think most XR folks would have one of two responses:
The other three questions you mention don't feel cruxy.
The second one (default-good vs. default-bad) doesn't really make sense to me -- I'd say something like "progress tends to increase our scope of action, which can lead to major improvements in quality of life, and also increases the size of possible risks (especially from misuse)".
As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!
I kinda sorta answered Q2 above (I don't really have anything to add to it).
Q3: I'm not too clear on this myself. I'm just an object-level AI alignment researcher :P
Q4: I broadly agree this is a problem, though I think this:
seems pretty unlikely to me, where I'm interpreting it as "civilization stops making any progress and regresses to the lower quality of life from the past, and this is a permanent effect".
I haven't thought about it much, but my immediate reaction is that it seems a lot harder to influence the world in a good way through the public, and so other actions seem better. That being said, you could search for "raising the sanity waterline" (probably more so on LessWrong than here) for some discussion of approaches to this sort of social progress (though it isn't about educating people about the value of progress in particular).
I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
And so then I just want to know, OK, what's the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.
But maybe that's not actually the proposal from any serious EA/XR folks? I am still unclear on this.
I'd suggest that this is a failure of imagination (sorry, I'm really not trying to criticise you, but I can't find another phrase that captures my meaning!)
Like let's just take it for granted that we aren't going to be able to make any real research progress until we're much closer to AGI. It still seems like there are several useful things we could be doing:
• We could be helping potential researchers to understand why AI safety might be an issue so that when the time comes they aren't like "That's stupid, why would you care about that!". Note that views tend to change generationally, so you need to start here early.
• We could be supporting the careers of policy people (such as by providing scholarships), so that they are more likely to be in positions of influence when the time comes.
• We could iterate on the AGI safety fundamentals course so that it is the best introduction to the issue possible at any particular time, even if we need to update it.
• We could be organising conferences, fellowships and events so that we have experienced organisers available when we need them.
• We could run research groups so that our leaders have experience in the day-to-day of these organisations and that they already have a pre-vetted team in place for when they are needed.
We could try some kinds of drills or practice instead, but I suspect that the best way to learn how to run a research group is to actually run a research group.
(I want to further suggest that if someone had offered you $1 million and asked you to figure out ways of making progress at this stage then you would have had no trouble in finding things that people could do).
Sure. I think most longtermists wouldn't endorse this (though a small minority probably would).
I don't think this is negative, I think there are better opportunities to affect the future (along the lines of Ben's comment).
I think this is mostly true of other EA / XR folks as well (or at least, if they think it is negative, they aren't confident enough in it to actually say "please stop progress in general"). As I mentioned above, people (including me) might say it is negative in specific areas, such as AGI development, but not more broadly.
I agree with that (and I think most others would too).
OK, so maybe there are a few potential attitudes towards progress studies:
I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)… I think there are valuable things to do here, and I'm glad people are doing them, but I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.
(But I don't think that's all of it.)
Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.
I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion will go to the most harmful activities (and damage there comes from expediting dangerous technologies in AI and bioweapons that we are improving in our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage focusing narrowly on the most important areas.
With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world's population with bioweapons is not available in known technologies (although huge secret bioweapons programs like the old Soviet one may have developed dangerous things already), and if that capacity is delayed there is a chance it will be averted or much easier to defend against via AI, universal sequencing, and improvements in defenses and law enforcement. This is even moreso for those sub-areas that most expand bioweapon risk. That said, any attempt to discourage dangerous bioweapon-enabling research must compete against other interventions (improved lab safety, treaty support, law enforcement, countermeasure platforms, etc), and so would have to itself be narrowly targeted and leveraged.
With respect to artificial intelligence, views on sign vary depending on whether one thinks the risk of an AI transition is getting better or worse over time (better because of developments in areas like AI alignment and transparency research, field-building, etc; or worse because of societal or geopolitical changes). Generally though people concerned with AI risk think it much more effective to fund efforts to find alignment solutions and improved policy responses (growing them from a very small base, so cost-effectiveness is relatively high) than a diffuse and ineffective effort to slow the technology (especially in a competitive world where the technology would be developed elsewhere, perhaps with higher transition risk).
For most other areas of technology and economic activity (e.g. energy, agriculture, most areas of medicine) x-risk/longtermist implications are comparatively small, suggesting a more neartermist evaluative lens (e.g. comparing more against things like GiveWell).
Long-lasting (centuries) stagnation is a risk worth taking seriously (and the slowdown of population growth that sustained superexponential growth through history until recently points to stagnation absent something like AI to ease the labor bottleneck), but seems a lot less likely than other x-risk. If you think AGI is likely this century then we will return to the superexponential track (but more explosively) and approach technological limits to exponential growth followed by polynomial expansion in space. Absent AGI or catastrophic risk (although stagnation with advanced WMD would increase such risk), permanent stagnation also looks unlikely based on the capacities of current technology given time for population to grow and reach frontier productivity.
I think the best case for progress studies being top priority would be a strong focus on the current generation compared to all future generations combined, on rich country citizens vs the global poor, and on technological progress over the next few decades rather than in 2121. But given my estimates of catastrophic risk and sense of the interventions, at the current margin I'd still think that reducing AI and biorisk does better for current people than the progress studies agenda per unit of effort.
I wouldn't support arbitrary huge sacrifices of the current generation to reduce tiny increments of x-risk, but at the current level of neglectedness and impact (for both current and future generations) averting AI and bio catastrophe looks more impactful without extreme valuations. As such risk reduction efforts scale up marginal returns would fall and growth boosting interventions would become more competitive (with a big penalty for those couple of areas that disproportionately pose x-risk).
That said, understanding tech progress, returns to R&D, and similar issues also comes up in trying to model and influence the world in assorted ways (e.g. it's important in understanding AI risk, or building technological countermeasures to risks to long term development). I have done a fair amount of investigation that would fit into progress studies as an intellectual enterprise for such purposes.
I also lend my assistance to some neartermist EA research focused on growth, in areas that don't very disproportionately increase x-risk, and to development of technologies that make it more likely things will go better.
That's what I would say.
If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you want to invest in A as much as possible, until the opportunity dries up. At a civilizational scale, opportunities dry up quickly (i.e. with millions, maybe billions of dollars), so you see lots of diversity. At EA scales, this is less true.
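The allocation logic in that comment can be sketched as a greedy marginal allocation; the dollar figures and capacities below are made-up assumptions for illustration:

```python
# Greedy allocation: fund the highest benefit-per-dollar opportunity
# until its capacity is exhausted, then move to the next.
# All figures are illustrative, not estimates from the thread.

def allocate(budget: float, opportunities: list[tuple[str, float, float]]):
    """opportunities: (name, benefit per $, capacity in $).
    Returns ({name: dollars allocated}, total benefit)."""
    plan, total_benefit = {}, 0.0
    for name, rate, capacity in sorted(opportunities, key=lambda o: -o[1]):
        spend = min(budget, capacity)
        if spend > 0:
            plan[name] = spend
            total_benefit += spend * rate
            budget -= spend
    return plan, total_benefit

# Opportunity A: 200 benefit/$ but dries up at $5M; B: 50 benefit/$, deep.
opps = [("A", 200.0, 5e6), ("B", 50.0, 1e12)]
print(allocate(3e6, opps))   # small budget: everything goes to A
print(allocate(8e6, opps))   # larger budget: A fills up, remainder to B
```

This is why scale matters in the comment's argument: a small pot goes entirely to the best opportunity, while a civilizational-scale pot spills over into the next-best ones, producing the observed diversity.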
So I do agree that some XR folks (myself included) would, if given a pot of millions of dollars to distribute, allocate it all to XR; I don't think the same people would do it for e.g. trillions of dollars. (I don't know where in the middle it changes.)
I think Open Phil, at the billions of dollars range, does in fact invest in lots of opportunities, some of which are arguably about improving progress. (Though note that they are not "fully" XR-focused, see e.g. Worldview Diversification.)
There's a variant of attitude (1) which I think is worth pointing out:
Some arguments for (1b):
Cool to see this thread!
Just a very quick comment on this:
I don't think anyone is proposing this. The debate I'm interested in is about which priorities are most pressing at the margin (i.e. creates the most value per unit of resources).
The main claim isn't that speeding up tech progress is bad,* just that it's not the top priority at the margin vs. reducing x-risk or speeding up moral progress.**
One big reason for this is that lots of institutions are already very focused on increasing economic productivity / discovering new tech (e.g. ~2% of GDP is spent on R&D), whereas almost no-one is focused on reducing x-risk.
If the amount of resources devoted to reducing x-risk grows, then it will drop in effectiveness, relatively speaking.
In Toby's book, he roughly suggests that spending 0.1% of GDP on reducing x-risk is a reasonable target to aim for (about what is spent on ice cream). But that would be ~1000x more resources than today.
*Though I also think speeding up tech progress is more likely to be bad than reducing x-risk, my best guess is that it's net good.
**This assumes resources can be equally well spent on each. If someone has amazing fit with progress studies, that could make them 10-100x more effective in that area, which could outweigh the average difference in pressingness.
[Likely not a crux]
EA often uses an Importance - Neglectedness - Tractability framework for cause prioritization. I would expect things producing progress to be somewhat less neglected than working on XR; it is still somewhat possible to capture some of the benefits.
We do indeed see vast amounts of time and money being spent on research and development, in comparison to the amount being spent on XR concerns. Possibly you'd prefer to compare with PS itself, rather than with all R&D? (a) I'm not sure how justified that is; (b) it still feels to me like it ought to be possible to capture some of the benefits from many of PS's proposed changes; (c) my weak impression is that PS (or things similar to PS, i.e. meta-improvements to progress) is still less neglected, and in particular that lots of people who don't explicitly identify as being part of PS are still working on related concerns.
"EA/XR" is a rather confusing term. Which do you want to talk about, EA or x-risk studies?
It is a mistake to consider EA and progress studies as equivalent or mutually exclusive. Progress studies is strictly an academic discipline. EA involves building a movement and making sacrifices for the sake of others. And progress studies can be a part of that, like x-risk.
Some people in EA who focus on x-risk may have differences of opinion with those in the field of progress studies.
First, PS is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.
I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.
OK, sorry for misunderstanding.
I make an argument here that marginal long run growth is dramatically less important than marginal x-risk. I'm not fully confident in it. But the crux could be what I highlight - whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future.
I would still ceteris paribus pick more growth rather than less, and from what I've seen of Progress Studies researchers, I trust them to know how to do that well.
It's important to compare with long-term political and social change too. Arguably a higher priority than either effort, but also something that can be indirectly served by economic progress. One thing the progress studies discourse has persuaded me of is that there is some social and political malaise that arises when society stops growing. Healthy politics may require fast nonstop growth (though that is a worrying thing if true).
To be honest this parsing of these two communities that have a ton in common reminds me of this great scene from Monty Python's Life of Brian:
I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what I think the median EA/XR person believes:
As Bostrom wrote in 2003: "In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development."
"However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years." https://www.nickbostrom.com/astronomical/waste.html
With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html
Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?
I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.
By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal's Mugging I was talking about.
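The scaling implicit in that question is just linear extrapolation of Bostrom's figure (one percentage point of x-risk reduction valued at over 10 million years of delay); a trivial check:

```python
# Linear extrapolation of Bostrom's trade-off from "Astronomical Waste":
# 1 percentage point of existential-risk reduction is valued at
# ~10 million years of delay.
YEARS_PER_PERCENTAGE_POINT = 10_000_000

def equivalent_delay_years(risk_reduction_pp: float) -> float:
    """Delay (in years) that Bostrom's figure deems an even trade for a
    given x-risk reduction, measured in percentage points."""
    return risk_reduction_pp * YEARS_PER_PERCENTAGE_POINT

print(equivalent_delay_years(1.0))    # Bostrom's original figure
print(equivalent_delay_years(0.001))  # the 0.001% case raised above
```

So under strictly linear scaling, a 0.001% reduction would indeed be worth a 10,000-year delay, which is exactly the trade the question is probing.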
(Also for what it's worth, I think I'm more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper—which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)
Good to hear!
In the abstract, yes, I would trade 10,000 years for 0.001% reduction in XR.
In practice, I think the problem with this kind of Pascal's Mugging argument is that it's really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi estimate math. If someone were to say "please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X", they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.
Side note: Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).
I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.
Thanks for writing this post! I'm a fan of your work and am excited for this discussion.
Here's how I think about costs vs benefits:
I think an existential catastrophe is at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but looks currently very vulnerable to me.
I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that my impact on human progress would equal my chance of saving it. Obviously that needs some fleshing out — what is my impact on x-risk, what is my impact on progress, how likely am I to have those impacts, etc. But that's the structure of how I think about it.
After initially worrying about Pascal's Mugging, I've come to believe that x-risk is in fact substantially more likely than 1 in several million, and whatever objections I might have to Pascal's Mugging don't really apply.
How I think about tech progress:
From an x-risk perspective, I'm pretty ambivalent about tech progress. I've heard arguments that it's good, and that it's bad, but mostly I think it's not a very predictably-large effect on the margin.
But while I care a lot about x-risk reduction, I have different world-views that I put substantial credence in as well. And basically all of those other world-views care a whole lot about human progress. So while I don't view human progress as the cause of my life the way I do x-risk reduction, I'm strongly in favor of more of it.
Finally, as you can imagine from my last answer, I definitely have a lot of conversations where I try to convey my optimism about technology's ability to make lives better. And I think that's pretty common — your blog is well-read in my circles.
Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
By that token, most individual scientific experiments or contributions to political efforts may be such. For example, if there is a referendum to pass a pro-innovation regulatory reform and science-funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, even though the expected value and average returns could be high and the collective effort has a large chance of success.
Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change? (vs. the cost of that choice)" and the answer to that seems to be "yes."
Is your issue more along the following lines?
If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question that I don't have a great answer to. It would be something along the lines of: "There's a grey area where I don't know how to make those tradeoffs, but most things don't fall into that grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either because I think it might increase x-risk, or because I think there are better options available for me to fund." Do you believe that many more choices fall into that grey area?
Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.
Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not in terms of expected value for the world, but in terms of the intrinsic properties of the GP that will produce that value.
It might be reasonable to assume that Global Project quality is normally distributed. One point of possible difference is the center of that distribution. Are most Global Projects of bad quality, neutral, or good quality?
We might make a further assumption that the expected value of a Global Project follows a power law, such that projects of extremely low or high quality produce exponentially more value (or more harm). Perhaps, if Q is project quality and V is value, V = Q^N. But we might disagree on the details of this power law.
One possibility is that in fact, it's easier to destroy the world than to improve the world. We might model this with two power laws, one for Q > 0 and one for Q < 0, like so:
In this case, whether or not progress is good will depend on the details of our assumptions about both the project quality distribution and the power law for expected value:
Whether the average expected value across many simulations of such a model is positive or negative can hinge on small changes to the variables. For example, if we set N = 7 for bad projects and N = 3 for good projects, but assume that average project quality is +0.6 standard deviations above zero, then average expected value is mildly negative. At project quality +0.7 standard deviations above zero, the average expected value is mildly positive.
Here's what an X-risk "we should slow down" perspective might look like. Each plotted point is a simulated "world." In this case, the simulation produces negative average EV across simulated worlds.
And here is what a Progress Studies "we should speed up" perspective might look like, with positive average EV.
The joke is that it's really hard to tell these two simulations apart. In fact, I generated the second graph by shifting the center of the project quality distribution 0.01 standard deviations to the right relative to the first graph. In both cases, a lot of the expected value is lost to a few worlds in which things go cataclysmically wrong.
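To make the model concrete, here is a minimal Python sketch of the simulation described above. The number of projects per world, the unit standard deviation of the quality distribution, and all function names are my assumptions; the text doesn't pin them down.

```python
import random

def world_value(mu, n_projects=1000, n_bad=7, n_good=3, rng=None):
    """Total value of one simulated 'world' of Global Projects.

    Each project's quality Q is drawn from Normal(mu, 1). Value follows
    the asymmetric power law above: Q**n_good for good projects (Q > 0),
    -|Q|**n_bad for bad ones (Q < 0), so a few very bad projects can
    dominate the world's total.
    """
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_projects):
        q = rng.gauss(mu, 1.0)
        total += q ** n_good if q >= 0 else -abs(q) ** n_bad
    return total

def average_ev(mu, n_worlds=5000, seed=0, **kwargs):
    """Average value across many simulated worlds for a given mean quality mu."""
    rng = random.Random(seed)
    return sum(world_value(mu, rng=rng, **kwargs) for _ in range(n_worlds)) / n_worlds
```

Comparing `average_ev(0.6)` against `average_ev(0.61)` illustrates the sensitivity point: nudging the quality distribution by a hundredth of a standard deviation can move the average across zero, though the exact crossover depends on the assumed standard deviation and projects-per-world.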
One way to approach a double crux would be for adherents of the two sides to specify, in the spirit of "if it's worth doing, it's worth doing with made up statistics," their assumptions about the power law and project quality distribution, then argue about that. Realistically, though, I think both sides understand that we don't have any realistic way of saying what those numbers ought to be. Since the details matter on this question, it seems to me that it would be valuable to find common ground.
For example, I'm sure that PS advocates would agree that there are some targeted risk-reduction efforts that might be good investments, along with a larger class of progress-stimulating interventions. Likewise, I'm sure that XR advocates would agree that there are some targeted tech-stimulus projects that might be X-risk "security factors." Maybe the conversation doesn't need to be about whether "more progress" or "less progress" is desirable, but about the technical details of how we can manage risk while stimulating growth.
So here's a list of claims, with a cartoon response from someone who represents my impression of a typical EA/PS view on things (insert caveats here):
On Max Daniel's thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.
I think it's more like:
Regarding your question:
Leopold Aschenbrenner's paper Existential risk and growth provides one interesting perspective on this question (note that while I find the paper informative, I don't think it settles the question).
A key question the paper seeks to address is this:
The paper's (preliminary) conclusion is:
Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.
The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments to be worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.
Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there's a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).
On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it's a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you're supposed to be thinking about both and not trying to oversimplify things.
This seems like a good place to mention Dath Ilan, Eliezer's fictional* universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.
I don't think I know enough to speak about the XR community broadly here, but as for me personally: mostly frustrated that their thinking isn't granular enough. There's a huge gulf between saying "social media is toxic" and saying "it is toxic for the closest thing to a downvote button to be reply/share", and I try to tune out/unfollow the people whose writings say things closer to the former.
You mention that some EAs oppose progress / think that it is bad. I might be wrong, but I think these people only "oppose" progress insofar as they think x-risk reduction from safety-based investment is even better value on the margin. So it's not that they think progress is bad in itself, it's just that they think that speeding up progress incurs a very large opportunity cost. Bostrom's 2003 paper outlines the general reasoning why many EAs think x-risk reduction is more important than quick technological development.
Also, I think most EAs interested in x-risk reduction would say that they're not really in a Pascal's mugging as the reduction in probability of an existential catastrophe occurring that can be achieved isn't astronomically small. This is partly because x-risk reduction is so neglected that there's still a lot of low-hanging fruit.
I'm not super certain on either of the points above but it's the sense I've gotten from the community.
(Context note: I read this post, all the comments, then Ben Todd's question on your AMA, then your Progress Studies as Moral Imperative post. I don't really know anything about Progress Studies besides this context, but will offer my thoughts now below in the hope it will help with identifying the crux.)
None of the comments so far have engaged with your road trip metaphor, so I'll bite:
In your Progress Studies as Moral Imperative post it sounds like you're concerned that humanity might just slow the car down, stop, and just stay there indefinitely or something due to a lack of appreciation or respect for progress. Is that right?
Personally I think that sounds very unlikely and I don't feel concerned at all about that. I think nearly all other longtermists would probably agree.
The first thing your Moral Imperative post made me think of is Factfulness by Rosling et al. Before reading the book in 2019, I had often heard the idea that, roughly, "people don't know how much progress we've made lately." For a few years I felt like I kept hearing people say this without ever actually encountering the people who were ignorant of the progress.
In the beginning of Factfulness Rosling talks about how a bunch of educated people on a UN council (or something) were ignorant of basic facts of humanity's progress in recent decades. I defer to his claim and yours that these people who are ignorant of the progress we've made exist.
That said, when I took the pre-test quiz at the beginning of the book about the progress we've made I got all of his questions right, and I was quite confident in essentially all of the answers. I recall thinking that other people I know (in the EA community, for example) would probably also get all the questions correct, despite the poor performance on the same quiz by world leaders and other audiences that Rosling spoke to over the years.
I say all this to suggest that maybe Progress Studies people are reactionary to some degree and longtermists (what you're calling "EA/XR" people) aren't? Maybe PS people are used to seeing many people in society (including some educated and tech people) who are ignorant of progress or opposed to it, while EA people have experienced less of this, or just don't care to react to such people. Could this be a crux? Longtermists just aren't very concerned that we're going to stop progressing (besides potentially crashing the car, i.e. existential risk or global catastrophic risk), whereas Progress Studies people are more likely to think that progress is slowing and coming to a stop.
For those of us who are unfamiliar with Progress Studies, I think it would help if you clarified what exactly that community thinks or advocates.
Is the idea simply to prioritize economic growth? Is it about increasing the welfare of people alive today / in the near future? Would distributing malaria bed nets count as something that a Progress Studies person might advocate qua Progress Studies advocate? Or is it about technological development specifically? (If bed nets don't count as Progress Studies, would development of CRISPR technologies to eradicate malaria count? If yes, why? (Assume the CRISPR technology has the same estimated cost-effectiveness for preventing malaria deaths as bed nets over the next century.))
Regarding your question:
This is a big and difficult question, but here are some pointers to relevant concepts and resources: