Meta:

  • I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
  • This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
  • I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.
  • I think some important omissions from my summary might include:
    • Potential differences in underlying ethical views
    • More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
  • Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug. 

[See also this reply from Tony from the 'progress studies' community.]
 

Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement. 

EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:

https://lukemuehlhauser.com/industrial-revolution/
 

Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations, since our current academic and economic systems are what led us into the current slowdown.

If we were making a list of the most globally consequential developments from the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress might be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks. Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential of a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating the kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. So concretely they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.

The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.

Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. Still, EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because it might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or that we can't do much to prevent them except by encouraging innovation in general.

Comments

Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:

a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?

b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?

c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think.)

d. To what extent is existential risk something that should be quietly managed by technocrats vs a popular issue that politicians should be talking about?

e. The relative priority of catastrophic and existential risk reduction, and the level of convergence between these goals.

f. The tractability of reducing existential risk.

g. What is most needed: more innovation, or more theory/plans/coordination?

h. What do ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.

i. How to act when faced with small probabilities of extremely good or extremely bad outcomes.

j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can't quickly find the strongest "you can't put probabilities" argument, but here's Anders Sandberg sub-Youtubing Deutsch)

k. Credence in moral realism.

Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.

Some general impressions:

  1. Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But, Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against The Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes philosophy and the humanities very seriously (see here and here). And David Deutsch has written a philosophical book, drawing on Karl Popper.

  2. On average, key figures in EA are more likely to have a background in academic philosophy, while PS figures are more likely to have been involved in entrepreneurship or scientific research.

  3. There seem to be some differences in disposition / sensibility / normative views around questions of risk and value. E.g. I would guess that more PS figures have ridden a motorbike, and that they are more likely to say things like "full steam ahead".

  4. To caricature: when faced with a high stakes uncertainty, EA says "more research is needed", while PS says "quick, let's try something and see what happens". Alternatively: "more planning/co-ordination is needed" vs "more innovation is needed".

  5. PS figures seem to put less of a premium on co-ordination and consensus-building, and more of a premium on decentralisation and speed.

  6. PS figures seem (even) more troubled by the tendency of large institutions with poor feedback loops to drift towards dysfunction.

As Peter notes, I've written about the issue of x-risk within Progress Studies at length here: https://applieddivinitystudies.com/moral-progress/

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

For what it's worth, I do think there are compelling arguments, I just haven't seen them made elsewhere. For example:

  • If the US/UK research community doesn't progress rapidly in AI development, we may be overtaken by less careful actors

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

Have you pressed Tyler Cowen on this?

I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.

In a recent note, I sketched a couple of possibilities.

(1) Stagnation is riskier than growth

Stubborn Attachments puts less emphasis on sustainability than other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord do. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees with the claim that we should put more resources into reducing existential risk at current margins. However, it seems as though he, like Peter Thiel, sees the political risk of economic stagnation as a more immediate and existential concern than these other long-term thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it’s a one-way path to apocalypse. If we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.

(2) Tyler is being Straussian

Tyler may have a different view about what messages are helpful to blast into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees, who sits in the UK House of Lords, claims that democratic politicians are hard to influence unless you first create a popular concern. My guess is Tyler may think both that politicians aren’t the centre of leverage for this issue, and that there are safer, more direct ways to influence them on this topic. In any case, it’s clear Tyler thinks that most people should focus on maximising the growth rate, and only a minority should focus on sustainability issues, including existential safety. It is not inconsistent to think that growth is too slow and that sustainability is underrated. Some listeners will hear the "sustainable" in "maximise the (sustainable) growth rate" and consider making that their focus. Most will not, and that's fine.

Many more people can participate in the project of "maximise the (sustainable) rate of economic growth" than "minimise existential risk".

(3) Something else?

I have a few other ideas, but I don't want to share the half-baked thoughts just yet.

One I'll gesture at: the phrase "cone of value", his catchphrase "all thinkers are regional thinkers", Bernard Williams, and anti-realism.

A couple relevant quotes from Tyler's interview with Dwarkesh Patel:

[If you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars.] You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much, you get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and like fuck everyone else, right? And like if that is right it is right. But my intuition is that Pascal's Wager type arguments, they both don't apply and shouldn't apply here, that we need to use something that works for humans here on earth.

On the 800 years claim:

In the Stanford Talk, I estimated in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us.

Thanks! I think that's a good summary of possible views.

FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven't been quite ready to express them publicly, and I don't think they're endorsed by other members of the Progress community.

Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I'm heavily paraphrasing there.

He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.

Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and that if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:

  • Ramp up high skilled immigration (especially from China, especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists

@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you're at?

Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:

(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high stakes issues.

(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).

Hey sorry for the late reply, I missed this.

Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them overly compelling.

In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it's hard to argue that it'll end up bringing about superintelligence in the next couple of decades. But it depends on your time scale. If you think AI is more on a 100-year time horizon, there might be more reason to be worried about growth.

Re: DTD, I think it depends on global coordination way more than EA/XR people tend to think.

As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.

From the PS side, I wrote up some thoughts about the difference and some things I don't quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

Everything you have written matches my understanding. For me, the key commonality between long-termist EA and Progress Studies is valuing the far future. In economists' terms, a zero discount rate. The difference is time frame: Progress Studies is implicitly assuming a shorter civilization. If civilization is going to last for millions of years, what does it matter if we accelerate progress by a few hundred or even a few thousand years? Much better to minimize existential risk. Tyler Cowen outlines this well in a talk he gave at Stanford. In his view, "probably we’ll have advanced civilization for something like another 6, 700 years... [It] means if we got 600 years of a higher growth rate, that’s much, much better for the world, but it’s not so much value out there that we should just play it safe across all margins [to avoid existential risk.]" He is fundamentally pessimistic about our ability to mitigate existential risks. Now I don't think most people in Progress Studies think this way, but it's the only way I see to square a zero discount rate with any priority other than minimizing existential risk.

As someone who is more on the PS side than the EA side, this does not quite resonate with me.

I am still thinking this issue through and don't have  a settled view. But here are a few, scattered reactions I have to this framing.

On time horizon and discount rate:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand years”? I don't understand that framing. It's not about a constant number of years of acceleration, it's about the growth rate.
  • I am more interested in actual lives than potential / not-yet-existing ones. I don't place zero value or meaning on the potential for many happy lives in the future, but I also don't like the idea that people today should suffer for the sake of theoretical people who don't actually exist (yet). This is an unresolved philosophical paradox in my mind.
    • Note, if we could cure aging, and I and everyone else had indefinite lifespans, I might change my discount rate? Not sure, but I think I would, significantly.
    • This actually points to perhaps the biggest difference between my personal philosophy (I won't speak for all of progress studies) and Effective Altruism: I am not an altruist! (My view is more of an enlightened egoism, including a sort of selfish value placed on cooperation, relationships, and even on posterity in some sense.)

On risk:

  • I'm always wary of multiplying very small numbers by very large numbers and then trying to reason about the product. So, “this thing has a 1e-6 chance of affecting 1e15 future people and therefore should be valued at 1e9” is very suspect to me. I'm not sure if that's a fair characterization of EA/XR arguments, but some of them land on me this way.
  • Related, even if there are huge catastrophic and even existential risks ahead of us, I'm not convinced that we reduce them by slowing down. It may be that the best way to reduce them is to speed up—to get more knowledge, more technology, more infrastructure, and more general wealth.
    • David Deutsch has said this better than I can, see quotes I posted here and here.

On DTD and moral/social progress:

  • I very much agree with the general observation that material progress has raced ahead of moral/social progress, and that this is a bad and disturbing and dangerous thing. I agree that we need to accelerate moral/social progress, and that in a sense this is more urgent than accelerating material progress.
    • I also am sympathetic in principle to the idea of differential technology development.
    • BUT—I honestly don't know very clearly what either of these would consist of, in practice. I have not engaged deeply with the EA/XR literature, but I'm at least somewhat familiar with the community and its thinking now, and I still don't really know what a practical program of action would mean or what next steps would be.
  • More broadly, I think it makes sense to get smarter about how we approach safety, and I think it's a good thing that in recent decades we are seeing researchers think about safety issues before disasters happen (e.g., in genetic engineering and AI), rather than after as has been the case for most fields in the past.
    • “Let's find safe ways to continue making progress” is maybe a message and a goal that both communities can get behind.

Sorry for the unstructured dump of thoughts, hope that is interesting at least.

Hi Jason, thank you for sharing your thoughts! I also much appreciated you saying that the OP sounds accurate to you since I hadn't been sure how good a job I did with describing the Progress Studies perspective.

I hope to engage more with your other post when I find the time - for now just one point:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand years”? I don't understand that framing. It's not about a constant number of years of acceleration, it's about the growth rate.

'The growth rate' is a key parameter when assuming unbounded exponential growth, but due to physical limits exponential growth (assuming familiar growth rates) must be over after thousands if not hundreds of years. 

This also means that the significance of increasing the growth rate depends dramatically on whether we assume civilization will last for hundreds or billions of years. 

In the first case, annual growth at 3% rather than 2% could go on until we perish - and could make the difference between, e.g., 21 and 14 doublings over the next 500 years. That's a difference by a factor of roughly 100 - the same factor that turned the world of 1900 into what we have today, so a really big deal! (Imagine the ancient Greeks making a choice that determines whether civilization is going to end at year-1900 or year-2020 levels of global well-being.)
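To spell out the arithmetic behind those numbers (a quick check added for illustration, using only the growth rates above):

$$\text{doublings over 500 years at growth rate } g = 500 \cdot \log_2(1+g)$$
$$500 \cdot \log_2(1.03) \approx 21.3, \qquad 500 \cdot \log_2(1.02) \approx 14.3$$
$$2^{\,21.3 - 14.3} = 2^{7} = 128,$$

i.e. roughly the two orders of magnitude mentioned above.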

But in the latter case, almost all of the future - millions, billions, trillions, or orders of magnitude longer aeons - will be characterized by subexponential growth. Compared to this, the current 'exponential era' will be extremely brief and transient - and differences in its growth rate at best determine whether it will last for another tens, hundreds, thousands, or perhaps tens of thousands of years. These differences are a rounding error on cosmic timescales, and their importance is swamped by even tiny differences in the probability of reaching that long, cosmic future (as observed, e.g., by Bostrom in Astronomical Waste).

Why? Simply because (i) there are limits to how much value (whether in an economic or moral sense) we can produce per unit of available energy, and (ii) we will eventually only be able to expand the total amount of available energy subexponentially (there can only be so much stuff in a given volume of space, and the volume of space we can reach grows at most with the cube of time, since we cannot expand faster than the speed of light - polynomial rather than exponential growth).

And once we plug the relevant numbers from physics and do the maths we find that, e.g.:

If [the current] growth rate continued for ten thousand years the total growth factor would be 10^200.

There are roughly 10^57 atoms in our solar system, and about 10^70 atoms in our galaxy, which holds most of the mass within a million light years. So even if we had access to all the matter within a million light years, to grow by a factor of 10^200, each atom would on average have to support an economy equivalent to 10^140 people at today’s standard of living, or one person with a standard of living 10^140 times higher, or some mix of these.

(Robin Hanson)

And:

In 275, 345, and 400 years, [assuming current growth rates of global power demand] we demand all the sunlight hitting land and then the earth as a whole, assuming 20%, 100%, and 100% conversion efficiencies, respectively. In 1350 years, we use as much power as the sun generates. In 2450 years, we use as much as all hundred-billion stars in the Milky Way galaxy.

(Tom Murphy)
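For anyone who wants to check the arithmetic, a small script can reproduce timescales of this order. The baseline of roughly 18 TW of current global power demand and the 2.3% annual growth rate are assumptions I'm supplying for illustration (they aren't stated in the quote), but with them the quoted figures come out to within a few percent:

```python
import math

# Back-of-the-envelope reproduction of the quoted timescales.
# Assumed inputs (not taken from the quote): ~18 TW of current global
# power demand, growing at a constant 2.3% per year.
CURRENT_DEMAND_W = 18e12        # ~18 TW
GROWTH_RATE = 0.023             # 2.3% per year

SOLAR_CONSTANT = 1361.0         # W/m^2 at the top of the atmosphere
EARTH_RADIUS_M = 6.371e6
LAND_FRACTION = 0.29
SUN_OUTPUT_W = 3.85e26
STARS_IN_MILKY_WAY = 1e11

sunlight_on_earth = SOLAR_CONSTANT * math.pi * EARTH_RADIUS_M ** 2  # ~1.7e17 W
sunlight_on_land = LAND_FRACTION * sunlight_on_earth

def years_until(target_w, current_w=CURRENT_DEMAND_W, growth=GROWTH_RATE):
    """Years of exponential growth for demand to reach target_w watts."""
    return math.log(target_w / current_w) / math.log(1.0 + growth)

thresholds = {
    "all sunlight on land, 20% conversion": 0.20 * sunlight_on_land,
    "all sunlight on land, 100% conversion": sunlight_on_land,
    "all sunlight hitting Earth, 100% conversion": sunlight_on_earth,
    "total power output of the Sun": SUN_OUTPUT_W,
    "output of ~100 billion stars (Milky Way)": STARS_IN_MILKY_WAY * SUN_OUTPUT_W,
}

for label, power_w in thresholds.items():
    print(f"{label}: ~{years_until(power_w):.0f} years")
# Prints roughly 278, 349, 403, 1350, and 2464 years - close to the quoted
# 275, 345, 400, 1350, and 2450.
```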

Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.

I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite”). But once we're even 10^12 away from where we are now, let alone 10^200, who knows what we'll find? Maybe we'll discover FTL travel (ok, unlikely). Maybe we'll at least be expanding out to other galaxies. Maybe we'll have seriously decoupled economic growth from physical matter: maybe value to humans is in the combinations and arrangements of things, rather than things themselves—bits, not atoms—and so we have many more orders of magnitude to play with.

If you're not willing to apply a moral discount factor against the far future, shouldn't we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we're willing to postpone the end of it by literally the length of human civilization so far, or longer?

Are we so certain about progress/growth being a brief, transient phase that we're willing to postpone the end of it by literally the length of human civilization so far, or longer?

I think this actually does point to a legitimate and somewhat open question on how to deal with uncertainty between different 'worldviews'. Similar to Open Phil, I'm using worldview to refer to a set of fundamental beliefs that are an entangled mix of philosophical and empirical claims and values.

E.g., suppose I'm uncertain between:

  • Worldview A, according to which I should prioritize based on time scales of trillions of years.
  • Worldview B, according to which I should prioritize based on time scales of hundreds of years.
    • This could be for a number of reasons: an empirical prediction that civilization is going to end after a few hundred years; ethical commitments such as pure time preference, person-affecting views, egoism, etc.; or epistemic commitments such as high-level heuristics for how to think about long time scales or situations with significant radical uncertainty.

One way to deal with this uncertainty is to put the value at stake on both worldviews on a "common scale", and then apply expected value: perhaps on worldview A, I can avert quintillions of expected deaths while on worldview B "only" a trillion lives are at stake in my decision. Even if I only have a low credence in A, after applying expected value I will then end up making decisions based just on A.
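To make that concrete with purely illustrative numbers: suppose worldview A puts $10^{18}$ expected deaths at stake, worldview B puts $10^{12}$ at stake, and my credence in A is $p$. Then A dominates the expected-value calculation whenever

$$p \cdot 10^{18} > (1-p) \cdot 10^{12} \iff p > \frac{10^{12}}{10^{18} + 10^{12}} \approx 10^{-6},$$

i.e. any credence in A above roughly one in a million means I end up acting on A alone.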

But this is not the only game in town. We might instead think of A and B as two groups of people with different interests trying to negotiate an agreement. In that case, we may have the intuition that A should make some concessions to B even if A was a much larger group, or was more powerful, or similar. This can motivate ideas such as variance normalization or the 'parliamentary approach'.

(See more generally: normative uncertainty.)

Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about the next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible - these kinds of worldviews from my perspective sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)

But I do think that the most likely way that someone could convince me to, say, donate a significant fraction of my income to 'progress studies' or AMF or The Good Food Institute (etc.) would be by convincing me that actually I want to aggregate different 'worldviews' I find plausible in a different way. This certainly seems more likely to change my mind than an argument aiming to show that, when we take longtermism for granted, we should prioritize one of these other things.

[ETA: I forgot to add that another major consideration is that, at least on some plausible estimates and my own best guess, existential risk this century is so high - and our ability to reduce it sufficiently good - that even if I thought I should prioritize primarily based on short time scales, I might well end up prioritizing reducing x-risk anyway. See also, e.g., here.]

E.g., suppose I'm uncertain between:

  • Worldview A, according to which I should prioritize based on time scales of trillions of years.
  • Worldview B, according to which I should prioritize based on time scales of hundreds of years.

[...]

Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about the next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible - these kinds of worldviews from my perspective sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)

I think I have a candidate for a "worldview B" that some EAs may find compelling. (Edit: Actually, the thing I'm proposing also allocates some weight to trillions of years, but it differs from your "worldview A" in that nearer-term considerations don't get swamped!) It requires a fair bit of explaining, but IMO that's because it's generally hard to explain how a framework differs from another framework when people are used to only thinking within a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.

Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like, "What's good from a universal point of view," axiology/theory of value, irreducibly normative facts, etc.

The above notions fail at reference – they don't pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.

You seem to be unexcited about approaches to moral reasoning that are more "more 'egoistic', agent-relative, or otherwise nonconsequentialist" than the way you think moral reasoning should be done. Probably, "the way you think moral reasoning should be done" is dependent on some placeholder concepts like "axiology" or "what's impartially good" that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you'd realize that there's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative. Would this weaken your intuitions that person-affecting views are unattractive?

I'll try to elaborate now why I believe "There's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative."

Basically, I see a tension between "there's an objective axiology" and "people have the freedom to choose life goals that represent their idiosyncrasies and personal experiences." If someone claims there's an objective axiology, they're implicitly saying that anyone who doesn't adopt an optimizing mindset around successfully scoring "utility points" according to that axiology is making some kind of mistake / isn't being optimally rational. They're implicitly saying it wouldn't make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than "pursuing points according to the one true axiology." Note that this is a strange position to adopt! Especially when we look at the diversity between people and what sorts of lives they find the most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open up vegan bakeries, people for whom family+children means everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn't really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.

Once you give up on the view that there's an objectively correct axiology (as well as the view that you ought to follow a wager for the possibility of it), all of the above considerations ("people differ according to how they'd ideally want to score their own lives") will jump out at you, no longer suppressed by this really narrow and fairly weird framework of "How can we subsume all of human existence into utility points and have debates on whether we should adopt 'totalism' toward the utility points, or come up with a way to justify taking a person-affecting stance."

There's a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there's no elegant way to incorporate them into the moral realist "utility points" framework. But one person's modus ponens is another's modus tollens: Maybe if your framework can't incorporate person-affecting intuitions, that means there's something wrong with the framework.

I suspect that what's counterintuitive about totalism in population ethics is less about the "total"/"everything" part of it, and more related to what's counterintuitive about "utility points" (i.e., the postulate that there's an objective, all-encompassing axiology). I'm pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we'd no longer be assuming moral realism), intuitively makes a lot of sense.

Here's how that would work (now I'll describe the new proposal for how to do ethical reasoning):

Utility is subjective. What's good for someone is what they deem good for themselves by their lights, the life goals for which they get up in the morning and try doing their best.

A beneficial outcome for all of humanity could be defined by giving individual humans the opportunity to reflect on their goals in life under ideal conditions, and to then implement some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) to make everyone really happy with the outcome.

Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks' population-ethical implications are indirectly specified, in the sense that they depend on what the people on earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.

In my worldview, I conceptualize the role of ethics as two-fold: 

(1) Inform people about the options for wisely chosen subjective life goals

--> This can include life goals inspired by a desire to do what's "most moral" / "impartial" / "altruistic," but it can also include more self-oriented life goals

(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals

Population ethics, then, is a subcategory of (1). Assuming you're looking for an altruistic life goal rather than a self-oriented one, you're faced with the question of whether your notion of "altruism" includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, 'egoistic' or agent-relative, simply because you're not answering "What's the right population ethics for everyone." You're just answering, "What's my vote for how to allocate future resources." (And you'd be trying to make your vote count in an altruistic/impartial way – but you don't have full/single authority on that.)

If moral realism is false, notions like "optimal altruism" or "What's impartially best" are under-defined. Note that under-definedness doesn't mean "anything goes" – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. "Altruism is under-defined" just means that there are multiple 'good' answers.

Finally, here's the "worldview B" I promised to introduce:

 Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: "Because I have person-affecting intuitions, I don't care about creating new people; instead, I want to focus my 'altruistic' caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don't form world-models sophisticated enough to qualify for 'having life goals'."

Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she'd care about this not because she thinks it's impartially good for the future to contain lots of happy people. Instead, she thinks it's good from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.

Is that really such a weird view? I really don't think so, myself. Isn't it rather standard population-ethical discourse that's a bit weird?

Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that 'pleasure is good'. My impression is that some people think there's an objectively correct axiology because they find experiential hedonism compelling in a sort of 'conceptual' way, which I find very dubious.) 

I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:

I am actually sympathetic to an "'egoistic', agent-relative, or otherwise nonconsequentialist perspective". I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.

The point is just that, from within these other perspectives, I happen to not be that interested in "impartially maximize value over the next few hundred years". I endorse helping my friends, maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these 'causes' I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But this is all more guided by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it's somewhat hard to move me with arguments in that domain (it's not impossible, but it would require something that's more similar to psychotherapy or raising a child or "things the humanities do" than to doing analytic philosophy).

So this roughly means that if you wanted to convince me to do X, then you either need to be "lucky" that X is among the things I happen to like for idiosyncratic reasons - or X needs to look like a priority from an impartially consequentialist outlook.

It sounds like we both agree that when it comes to reflecting about what's important to us, there should maybe be a place for stuff like "(idiosyncratic) reactive attitudes," "psychotherapy or raising a child or 'things the humanities do'" etc. 

Your view seems to be that you have two modes of moral reasoning: The impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).  

My point with my long comment earlier is basically the following: 
The separation between these two modes is not clear!  

I'd argue that what you think of as the "impartial mode" has some clear-cut applications, but it's under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on appeals that you'd normally place in the subjectivist/particularist/existentialist mode.

Specifically, population ethics is under-defined. (It's also under-defined how to extract "idealized human preferences" from people like my parents, who aren't particularly interested in moral philosophy or rationality.) 

I'm trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you then have more than one option for how to think about it. You no longer have to think of impartiality criteria and "never violating any transitivity axioms" as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the 'cosmic commons') that is at risk of being burnt, and they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn't be done with that garden. You can look for the "impartially best way to make use of the garden" – or you could look at how other people want to use the garden and compromise with them, or look for "meta-principles" that guide who gets to use which parts of the garden (and stuff that people definitely shouldn't do, e.g., no one should shit in their part of the garden), without already having a fixed vision of what the garden has to look like at the end, once it's all made use of. Basically, I'm saying that "knowing from the very beginning exactly what the 'best garden' has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there's no universally correct solution anyway!). You're very much allowed to think of gardening in a different, more procedural and 'particularist' way."
 

Thanks! I think I basically agree with everything you say in this comment. I'll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly 'metaethical' level (it does seem clear we land on different object-level views/preferences).

In particular, while I happen to like a particular way of cashing out the "impartial consequentialist" outlook, I (at least on my best-guess view on metaethics) don't claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.

Thanks for sharing your reaction! I actually agree with some of it: 

  • I do think it's good to retain some skepticism about our ability to understand the relevant constraints and opportunities that civilization would face in millions or billions of years. I'm not 100% confident in the claims from my previous comment.
  • In particular, I have non-zero credence in views that decouple moral value from physical matter. And on such views it would be very unclear what limits to growth we're facing (if any).
    • But if 'moral value' is even roughly what I think it is (in particular, requires information processing), then this seems about as unlikely as FTL travel being possible: I'm not a physicist, but my rough understanding is that there is only so much computation you can do with a given amount of energy or negentropy or whatever the relevant quantity is.
    • It could still turn out that we're wrong about how information processing relates to physics (relatedly, look what some current longtermists were interested in during their early days ;)), or about how value relates to information processing. But this also seems very unlikely to me.

However, for practical purposes my reaction to these points is interestingly somewhat symmetrical to yours. :)

  • I think these are considerations that actually raise worries about Pascal's Mugging. The probability that we're so wrong about fundamental physics, or that I'm so wrong about what I'd value if only I knew more, seems so small that I'm not sure what to do with it.
    • There is also the issue that if we were so wrong, I would expect that we're very wrong about a number of different things as well. I think the modal scenarios on which the above "limits to growth" picture is wrong are not "how we expect the future to look, but with FTL travel" but very weird things like "we're in a simulation". Unknown unknowns rather than known unknowns. So my reaction to the possibility of being in such a world is not "let's prioritize economic growth [or any other specific thing] instead", but more like "??? I don't know how to think about this, so I should to a first approximation ignore it".
  • Taking a step back, the place where I was coming from is: In this century, everyone might well die (or something similarly bad might happen). And it seems like there are things we can do that significantly help us survive. There are all these reasons why this might not be as significant as it seems - aliens, intelligent life re-evolving on Earth, us being in a simulation, us being super confused about what we'd value if we understood the world better, infinite ethics, etc. - but ultimately I'm going to ask myself: Am I sufficiently troubled by these possibilities to risk irrecoverable ruin? And currently I feel fairly comfortable answering this question with "no".

Overall, this makes me think that disagreements about the limits to growth, and how confident we can be in them or their significance, are probably not the crux here. Based on the whole discussion so far, I suspect it's more likely to be "Can sufficiently many people do sufficiently impactful things to reduce the risk of human extinction or similarly bad outcomes?". [And at least for you specifically, perhaps "impartial altruism vs. 'enlightened egoism'" might also play a role.]

Hey Jason, I share the same thoughts on pascal-mugging type arguments.

Having said that, The Precipice convincingly argues that the x-risk this century is around 1/6, which is really not very low. Even if you don't totally believe Toby, it seems reasonable to put the odds at that order of magnitude, and it shouldn't fall into the 1e-6 type of argument.

I don't think the Deutsch quotes apply either. He writes "Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology".

That might be true when it comes to warring human civilizations, but not when it comes to global catastrophes. In the past, there was no way to say "let's not move on to the bronze age quite yet", so any individual actor who attempted to stagnate would be dominated by more aggressive competitors.

But for the first time in history, we really do have the potential for species-wide cooperation. It's difficult, but feasible. If the US and China manage to agree to a joint AI resolution, there's no third party that will suddenly sweep in and dominate with their less cautious approach.

Good points.

I haven't read Ord's book (although I read the SSC review, so I have the high-level summary). Let's assume Ord is right and we have a 1/6 chance of extinction this century.

My “1e-6” was not an extinction risk. It's a delta between two choices that are actually open to us. There are no zero-risk paths open to us, only one set of risks vs. a different set.

So:

  • What path, or set of choices, would reduce that 1/6 risk?
  • What would be the cost of that path, vs. the path that progress studies is charting?
  • How certain are we about those two estimates? (Or even the sign of those estimates?)

My view on these questions is very far from settled, but I'm generally on board with all of the points of the form “X seems very dangerous!” Where I get lost is when the conclusion becomes, “therefore let's not accelerate progress.” (Or is that even the conclusion? I'm still not clear. Ord's “long reflection” certainly seems like that.)

I am all for specific safety measures. Better biosecurity in labs—great. AI safety? I'm a little unclear how we can create safety mechanisms for a thing that we haven't exactly invented yet, but hey, if anyone has good ideas for how to do it, let's go for it. Maybe there is some theoretical framework around “value alignment” that we can create up front—wonderful.

I'm also in favor of generally educating scientists and engineers about the grave moral responsibility they have to watch out for these things and to take appropriate responsibility. (I tend to think that existential risk lies most in the actions, good or bad, of those who are actually on the frontier.)

But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.

I wrote up some more detailed questions on the crux here and would appreciate your input: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.

I think there's a fear of progress in specific areas (e.g. AGI and certain kinds of bio) but not a general one? At least I'm in favor of progress generally and against progress in some specific areas where we have good object-level arguments for why progress in those areas in particular could be very risky.

(I also think EA/XR folks are primarily advocating for the development of specific safety measures, and not for us to stop progress, but I agree there is at least some amount of "stop progress" in the mix.)

Re: (2), I'm somewhat sympathetic to this, but all the ways I'm sympathetic to it seem to also apply to progress studies (i.e. I'd be sympathetic to "our ability to influence the pace of progress is too low"), so I'm not sure how this becomes a crux.

That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.

The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or at least it can be if pursued intelligently. I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)

I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.

If you mean like 10x greater chance, I think that's plausible (though larger than I would say). If you mean 1000x greater chance, that doesn't seem defensible.

In both fields you basically ~can't experiment with the actual thing you care about (you can't just build a superintelligent AI and check whether it is aligned; you mostly can't run an intervention on the entire world  and check whether world GDP went up). You instead have to rely on proxies.

In some ways it is a lot easier to run proxy experiments for AI alignment -- you can train AI systems right now, and run actual proposals in code on those systems, and see what they do; this usually takes somewhere between hours and weeks. It seems a lot harder to do this for "improving GDP growth" (though perhaps there are techniques I don't know about).

I agree that PS has an advantage with historical data (though I don't see why economic theory is particularly better than AI theory), and this is a pretty major difference. Still, I don't think it goes from "good chance of making a difference" to "basically zero chance of making a difference".

The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

Fwiw, I think AI alignment is relevant to current AI systems with which we have experience even if the catastrophic versions are in the future, and we do get chances to get it wrong and course-correct, but we can set that aside for now, since I'd probably still disagree even if I changed my mind on that. (Like, it is hard to do armchair theory without experimental data, but it's not so hard that you should conclude that you're completely doomed and there's no point in trying.)

Thanks for clarifying, the delta thing is a good point. I'm not aware of anyone really trying to estimate "what are the odds that MIRI prevents XR", though there is one SSC post sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

I absolutely agree with all the other points. This isn't an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: "People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later... the philosophical side of this seems like ineffective posturing.

Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people's motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

That's a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands on and practical.

Re: safety for something that hasn't been invented yet: I'm not an expert here, but my understanding is that some of it might be path-dependent. I.e., research agendas hope to result in particular kinds of AI, and safety is not necessarily a feature you can just add on later. But it doesn't sound like there's a deep disagreement here, and in any case I'm not the best person to try to argue this case.

Intuitively, one analogy might be: we're building a rocket, humanity is already on it, and the AI Safety people are saying "let's add life support before the rocket takes off". The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.

To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker:

Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like here’s a rule on average it’s a good rule we’re all gonna follow it. Bravo move on to the next thing. Be a builder.

Walker: So… Get on with it?

Cowen: Yes ultimately the nervous Nellie’s, they’re not philosophically sophisticated, they’re over indulging their own neuroticism, when you get right down to it. So it’s not like there’s some brute let’s be a builder view and then there’s some deeper wisdom that the real philosophers pursue. It’s you be a builder or a nervous Nelly, you take your pick, I say be a builder.
