Comments

Podcast: Sharon Hewitt Rawlette on metaethics and utilitarianism

Thank you for this, Gus and Sharon.

This interview presented one of the most compelling cases for a hedonistic theory of value that I've heard, shifting my credence from “quite low” to “hmm, ok, maaaaybe”.

Some bits that stood out:

  1. Pluralistic conception of positive and negative experiences, i.e. experiences differ not only in intensity but also in character (so we can recognise fundamental differences between bodily pleasure, love, laughter, understanding, etc.).

  2. Hedonism can solve the epistemic problem that haunts moral realism, by saying that we directly experience value and disvalue as a phenomenal quality.

  3. We attribute intrinsic value to non-experiential states of affairs because we recognise them as direct or indirect causes of experiential value. This is a cognitive shortcut, and it works pretty well.

  4. The experience of pleasure from e.g. torture is pro tanto good, but it is not all-things-considered good, because of its instrumental effects (i.e. lots of disvalue).

  5. The best argument against hedonistic utilitarianism is that it is too abstract: it's not actually helpful for people to think in these terms. We need nearly-absolute respect for rights, and projecting intrinsic value into the world works well for us.

  6. Strong realism vs anti-realism (as in: total mind-independence vs mind-dependence) matters: only the strong realist can deeply care about self-interested perspectival bias, e.g. they can think of their deepest values as perhaps radically wrong, or worry that an AGI with idealised human values might still be an existential catastrophe.

For some reason, it hadn't occurred to me that a hedonist could do (1). It might be that I think of hedonists as aiming for a very tidy theory, and adding pluralism back in messes that up a bit (e.g. comparability and aggregation remain hard).

Anyway... "pluralistic hedonism" seems quite promising to me!

For readers: her PhD was supervised by Thomas Nagel and she thanks Parfit for input. I'm looking forward to reading it: https://www.stafforini.com/docs/Hewitt - Normative qualia and a robust moral realism.pdf

Help me find the crux between EA/XR and Progress Studies
  1. How do you give advice?

PS (Tyler Cowen): I think about what I believe, then I think about what it's useful for people to hear, and then I say that.

EA: I think about what I believe, and then I say that. I generally trust people to respond appropriately to what I say.

Help me find the crux between EA/XR and Progress Studies

So here's a list of claims, each with cartoon responses representing my impression of typical EA and PS views (insert caveats here):

  1. Some important parts of "developed world" culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. "The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try..."

PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.

EA: Agree. But, as an altruist, I tend to focus on preventing bad stuff rather than making good stuff happen (not sure why...).

  2. Broadly, "progress" comes about when we develop and use our capabilities to improve the human condition, and the condition of other moral patients (~sentient beings).

PS: Agree, this gloss seems basically fine for now.

EA: Agree, but we really need to improve on this gloss.

  3. Progress comes in different kinds: technological, scientific, ethical, global coordination. At different times in history, different kinds will be more valuable. Balancing these capabilities matters: during some periods, increasing capabilities in one area (or a subfield of one area) may be disvaluable (cf. the Vulnerable World Hypothesis).

EA & PS: Seems right. Maybe we disagree on where the current margins are?

  4. Let's try not to destroy ourselves! The future could be wonderful!

EA & PS: Yeah, duh. But also eek—we recognise the dangers ahead.

  5. Markets and governments are quite functional, so there's much more low-hanging fruit in pursuing the interests of those whom these systems aren't at all built to serve (e.g. future generations, animals).

PS: Hmm, take a closer look. There are a lot of trillion-dollar bills lying around, even in areas where an optimistic EMH would say that markets and governments ought to do well.

EA: So I used to be really into the EMH. These days, I'm not so sure...

  6. Broadly promoting industrial literacy is really important.

PS: Yes!

EA: I haven't thought about this much. Quick thought is that I'm happy to see some people working on this. I doubt it's the best option for many of the people we speak to, but it could be a good option for some.

  7. We can make useful predictions about the effects of new technologies.

PS (David Deutsch): I might grudgingly accept an extremely weak formulation of this claim. At least on Fridays. And only if you don't try to explicitly assign probabilities.

EA: Yes.

  8. You might be missing a crucial consideration!

PS: What's that? Oh, I see. Yeah. Well... I'm all for thinking hard about things, and acting on the assumption that I'm probably wrong about mostly everything. In the end, I guess I'm crossing my fingers, and hoping we can learn by trial and error, without getting ourselves killed. Is there another option?

EA: I know. This gives me nightmares.

On Max Daniel's thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.

Progress studies vs. longtermist EA: some differences

@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you're at?

Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:

(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high stakes issues.

(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).

Progress studies vs. longtermist EA: some differences

To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker:

Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like, here's a rule; on average it's a good rule; we're all gonna follow it. Bravo, move on to the next thing. Be a builder.

Walker: So… Get on with it?

Cowen: Yes, ultimately the nervous Nellies, they're not philosophically sophisticated, they're over-indulging their own neuroticism, when you get right down to it. So it's not like there's some brute "let's be a builder" view and then there's some deeper wisdom that the real philosophers pursue. It's: you be a builder or a nervous Nellie; you take your pick. I say be a builder.

Progress studies vs. longtermist EA: some differences

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

Have you pressed Tyler Cowen on this?

I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.

In a recent note, I sketched a couple of possibilities.

(1) Stagnation is riskier than growth

Stubborn Attachments places less emphasis on sustainability than the writings of other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but that he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees that we should put more resources into reducing existential risk at current margins. However, it seems that he, like Peter Thiel, sees the political risk of economic stagnation as a more immediate and existential concern than these other thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it's a one-way path to apocalypse. If we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.
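To make that disagreement concrete, here's a toy model (entirely my own sketch, with invented numbers; not anything Tyler or Thiel has endorsed): value compounds at some growth rate, each year carries a probability of permanent catastrophe, and, per the Thiel worry, the stagnation scenario is assumed to carry the *higher* per-year risk.

```python
# Toy model: expected long-run value under "grow" vs. "stagnate".
# All numbers are invented for illustration; only the structure matters.

def expected_value(growth_rate: float, annual_risk: float, years: int = 500) -> float:
    """Sum of yearly value, where value compounds at growth_rate and
    each year survives with probability (1 - annual_risk)."""
    value, survival, total = 1.0, 1.0, 0.0
    for _ in range(years):
        survival *= 1.0 - annual_risk  # chance no catastrophe has hit yet
        value *= 1.0 + growth_rate     # value conditional on surviving
        total += survival * value
    return total

# Scenario A: growth, with some (partly technological) per-year risk.
# Scenario B: stagnation, where political instability is assumed to
# push the per-year risk of permanent catastrophe higher.
print(expected_value(growth_rate=0.02, annual_risk=0.002))  # grow
print(expected_value(growth_rate=0.00, annual_risk=0.004))  # stagnate
```

On these made-up numbers the growth scenario dominates; if faster growth instead raised the per-year risk (the standard longtermist worry), the comparison flips. Which way the risk term moves is exactly the crux.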

(2) Tyler is being Straussian

Tyler may have a different view about what messages are helpful to blast into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees, who sits in the UK House of Lords, claims that democratic politicians are hard to influence unless you first create a popular concern. My guess is that Tyler thinks both that politicians aren't the centre of leverage on this issue, and that there are safer, more direct ways to influence them. In any case, it's clear Tyler thinks that most people should focus on maximising the growth rate, and only a minority should focus on sustainability issues, including existential safety. It is not inconsistent to think both that growth is too slow and that sustainability is underrated. Some listeners will hear the "sustainable" in "maximise the (sustainable) growth rate" and consider making that their focus. Most will not, and that's fine.

Many more people can participate in the project of "maximise the (sustainable) rate of economic growth" than "minimise existential risk".

(3) Something else?

I have a few other ideas, but I don't want to share the half-baked thoughts just yet.

One I'll gesture at: the phrase "cone of value", his catchphrase "all thinkers are regional thinkers", Bernard Williams, and anti-realism.

A couple relevant quotes from Tyler's interview with Dwarkesh Patel:

[If you are a space optimist, you may think that we can relax more about safety once we begin spreading to the stars.] You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much. You get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and, like, fuck everyone else, right? And, like, if that is right, it is right. But my intuition is that Pascal's wager type arguments both don't apply and shouldn't apply here; we need to use something that works for humans here on earth.

On the 800 years claim:

In the Stanford Talk, I estimated, in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us.

Progress studies vs. longtermist EA: some differences

Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:

a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?

b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?

c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think.)

d. To what extent is existential risk something that should be quietly managed by technocrats vs a popular issue that politicians should be talking about?

e. The relative priority of catastrophic and existential risk reduction, and the level of convergence between these goals.

f. The tractability of reducing existential risk.

g. What is most needed: more innovation, or more theory/plans/coordination?

h. What does ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.

i. How to act when faced with small probabilities of extremely good or extremely bad outcomes. (See the toy arithmetic after this list.)

j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can't quickly find the strongest "you can't put probabilities" argument, but here's Anders Sandberg sub-Youtubing Deutsch)

k. Credence in moral realism.
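On (i), a toy calculation (my own illustration, with made-up numbers) shows the shape of the disagreement:

```python
# Pascal's-wager structure, with made-up numbers for illustration.
p_longshot = 1e-9        # tiny probability of an astronomically good outcome
payoff_longshot = 1e15   # value if the longshot comes off
payoff_mundane = 1.0     # value of the safe, ordinary action

ev_longshot = p_longshot * payoff_longshot  # 1e6
ev_mundane = payoff_mundane                 # 1.0

# Naive expected-value maximisation favours the longshot by six orders
# of magnitude. A Deutsch/PS-flavoured reply: p_longshot isn't a number
# we can actually know, so the calculation shouldn't move us. A common
# EA reply: use the number anyway, with appropriate adjustments.
print(ev_longshot, ev_mundane)
```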

Progress studies vs. longtermist EA: some differences

Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.

Some general impressions:

  1. Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against the Social Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes philosophy and the humanities very seriously (see here and here). And David Deutsch has written a philosophical book, drawing on Karl Popper.

  2. On average, key figures in EA are more likely to have a background in academic philosophy, while PS figures are more likely to have been involved in entrepreneurship or scientific research.

  3. There seem to be some differences in disposition / sensibility / normative views around questions of risk and value. E.g. I would guess that PS figures are more likely to have ridden a motorbike, and more likely to say things like "full steam ahead".

  4. To caricature: when faced with a high-stakes uncertainty, EA says "more research is needed", while PS says "quick, let's try something and see what happens". Alternatively: "more planning/co-ordination is needed" vs "more innovation is needed".

  5. PS figures seem to put less of a premium on co-ordination and consensus-building, and more of a premium on decentralisation and speed.

  6. PS figures seem (even) more troubled by the tendency of large institutions with poor feedback loops to drift towards dysfunction.

Some quick notes on "effective altruism"

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were put off by the name "effective altruism".
  6. While I don't like the name, the thought that it might be driving large and net positive selection effects does not seem crazy to me.
  7. I would be glad if someone gave this topic further thought, perhaps to the extent of conducting surveys and speaking to relevant experts.