All of elliottthornley's Comments + Replies

Future-proof ethics

Nice post! I share your meta-ethical stance, but I don't think you should call it 'moral quasi-realism'. 'Quasi-realism' already names a position in meta-ethics, and it's different to the position you describe.

Very roughly, quasi-realism agrees with anti-realism in stating:

(1) Nothing is objectively right or wrong.

(2) Moral judgments don't express beliefs.

But, in contrast to anti-realism, quasi-realism also states:

(3) It's nevertheless legitimate to describe certain moral judgments as true.

The conjunction of (1)-(3) defines quasi-realism.

What you call 'qua... (read more)

4 · Holden Karnofsky · 2mo
Thanks, this is helpful! I wasn't aware of that usage of "moral quasi-realism." Personally, I find the question of whether principles can be described as "true" unimportant, and don't have much of a take on it. My default take is that it's convenient to sometimes use "true" in this way, so I sometimes do, while being happy to taboo [https://www.lesswrong.com/tag/rationalist-taboo#:~:text=Rationalist%20Taboo%20is%20a%20technique,present%20in%20a%20single%20word.] it anytime someone wants me to or I otherwise think it would be helpful to.
3 · Aaron Gertler · 8mo
Sounds like you should cross-post it, then! I'd recommend an excerpt + link to the full post, or sharing full text if you get Dylan's participation (I imagine he'd be happy to have his work entered in the contest for free).
Towards a Weaker Longtermism

I remember Toby Ord gave a talk at GPI where he pointed out the following:

Let L be long-term value per unit of resources and N be near-term value per unit of resources, and suppose intervention A is best on long-term value, intervention C is best on near-term value, and intervention B scores highest on the combined measure 0.5*N + 0.5*L. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C. But the best thing to do on a 0.5*(near-term value) + 0.5*(long-term value) value function is to devote 100% of resources to B.

[Diagram]
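A minimal numerical sketch of the point, using made-up per-unit values for three hypothetical interventions A, B, and C (not figures from Ord's talk):

```python
# Sketch with hypothetical numbers: under a linear value function
# 0.5*N + 0.5*L, the optimum puts all resources into the single intervention
# that maximizes the weighted sum, rather than splitting 50/50 between the
# best near-term and best long-term interventions.

interventions = {
    "A": {"near": 1.0, "long": 10.0},  # best long-term value per unit
    "B": {"near": 6.0, "long": 7.0},   # strong on both
    "C": {"near": 9.0, "long": 1.0},   # best near-term value per unit
}

def combined_value(allocation, w_near=0.5, w_long=0.5):
    """Value of an allocation {intervention: fraction of resources}."""
    return sum(
        frac * (w_near * interventions[name]["near"]
                + w_long * interventions[name]["long"])
        for name, frac in allocation.items()
    )

print(combined_value({"A": 0.5, "C": 0.5}))  # 5.25: 50/50 split between A and C
print(combined_value({"B": 1.0}))            # 6.50: everything into B
```

Because the value function is linear in the allocation, its maximum always sits at a corner: all resources go to whichever single intervention scores highest on the combined measure.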

9 · Davidmanheim · 9mo
That's exactly why it's important to clarify this. The position is that the entire value of the future has no more than a 50% weight in your utility function, not that each unit of future value is worth 50% as much.
The Impossibility of a Satisfactory Population Prospect Axiology

Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.
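Schematically, the difference in quantifier order is the following (with the condition itself abbreviated to φ; this is not the full statement of RGNEP or RNE):

```latex
% Stronger, `uniform' version: a single p works for every k.
\exists p \,\forall k :\ \varphi(p, k)
% Weaker version: the p may depend on k.
\forall k \,\exists p :\ \varphi(p, k)
% The first entails the second, but not conversely.
```

The first entails the second but not conversely, which is why a view can satisfy the weaker versions while violating RGNEP and RNE.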

4 · MichaelStJules · 1y
Hmm, and the population X also comes after, rather than having them, p, k, possibly depend on X. It does look like your conditions are more "uniform" than my proposal might satisfy, i.e. you get existential quantifiers before universal quantifiers, rather than existential quantifiers all last (compare continuity vs uniform continuity [https://en.wikipedia.org/wiki/Uniform_continuity], and convergence of a sequence of functions vs uniform convergence [https://en.wikipedia.org/wiki/Uniform_convergence]). The original GNEP and NE axioms have some uniformity, too.

I think informal explanations of the axioms often don't get this uniformity across, but that suggests to me that the uniformity itself is not so intuitive and compelling in the first place, and it's doing a lot of the work in these theorems. Especially when the conditions are uniform in the unaffected background population X, i.e. you require the existence of an object that works for all X, that seems to strongly favour separability/additivity/the independence of unconcerned agents, which of course favours totalism.

Uniformity also came up here [https://forum.effectivealtruism.org/posts/vStbBsE7xux5uDZTa/expected-value-theory-is-fanatical-but-that-s-a-good-thing?commentId=5as2hYWZCfRPShkut], with respect to Minimal Tradeoffs.
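For reference, the continuity analogy in standard quantifier form (the uniform notion pulls the existential quantifier forward, just as above):

```latex
% Pointwise continuity of f: the delta may depend on the point x.
\forall x \,\forall \varepsilon > 0 \,\exists \delta > 0 \,\forall y :\
  |x - y| < \delta \implies |f(x) - f(y)| < \varepsilon
% Uniform continuity of f: one delta works for every x.
\forall \varepsilon > 0 \,\exists \delta > 0 \,\forall x, y :\
  |x - y| < \delta \implies |f(x) - f(y)| < \varepsilon
```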
The Impossibility of a Satisfactory Population Prospect Axiology

Ah no, that's as it should be!  is saying that  is one of the very positive welfare levels mentioned on page 4.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! Your points about independence sound right to me.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks for your comment! I think the following is a closer analogy to what I say in the paper:

Suppose apples are better than oranges, which are in turn better than bananas. And suppose your choices are:

  1. An apple and k bananas for sure.
  2. An apple with probability 1-p and an orange with probability p, along with k oranges for sure.

Then even if you believe:

  • One apple is better than any amount of oranges

It still seems as if, for some large k and small p, 2 is better than 1. 2 slightly increases the risk you miss ou

... (read more)
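Writing the two options as prospects makes the trade-off explicit; a schematic rendering, using k for the number of extra fruits and p for the small probability as above:

```latex
% Option 1: a sure thing.
O_1:\ \text{1 apple} + k\ \text{bananas} \quad\text{with probability } 1
% Option 2: a small p-sized gamble on the apple, plus a sure swap of bananas for oranges.
O_2:\ \begin{cases}
  \text{1 apple} + k\ \text{oranges} & \text{with probability } 1-p\\
  \text{1 orange} + k\ \text{oranges} & \text{with probability } p
\end{cases}
```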
The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! 

And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.

A case against strong longtermism

Thanks!

Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-... (read more)

3 · brekels · 1y
The Dutch-Book argument relies on your willingness to take both sides of a bet at given odds or probability (see Sec. 1.2 of your link). It doesn't tell you that you must assign probabilities, but if you do and are willing to bet on them, they must be consistent with the probability axioms.

It may be an interesting shift in focus to consider where you would be ambivalent between betting for or against the proposition that ">= 10^24 people exist in the future", since, above, you reason only about taking and not laying a billion to one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.

I don't believe this relies on any probabilistic argument, or assignment of probabilities, since the superiority of bet (2) follows from logic. Similarly, regardless of your beliefs about the future population, you can win now by arbitrage (e.g. betting against (1) and for (2)) if I'm willing to take both sides of both bets at the same odds.

Correct me if I'm wrong, but I understand a Dutch-book to be taking advantage of my own inconsistent credences (which don't obey laws of probability, as above). So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me subject to my willingness to take both sides. If you set your own set of self-consistent assumptions for reasoning about future worlds, I'm not sure how to bridge the gap.

We might debate the reasonableness of assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on ">= 10^24 people exist in the future", with our far-future progeny transferring $ based on the outcome, but I see no way of objectively resolving who is making a "better bet" at the moment.
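As a toy illustration of the Dutch-book point (numbers and bets are hypothetical, not taken from the thread): an agent whose credences violate the probability axioms, and who will bet at prices equal to those credences, can be sold a pair of bets that loses money however the world turns out.

```python
# Toy Dutch book with hypothetical numbers: credences P(A) = 0.6 and
# P(not A) = 0.6 violate the axiom P(A) + P(not A) = 1. An agent willing to
# buy a $1 bet on each proposition at a price equal to their credence is
# guaranteed to lose, whichever way A turns out.

credence_A = 0.6
credence_not_A = 0.6            # incoherent: the two credences sum to 1.2

stake = 1.0                     # each bet pays `stake` if its proposition is true
price_paid = (credence_A + credence_not_A) * stake   # 1.20 paid for both bets

for a_is_true in (True, False):
    payout = stake              # exactly one of the two bets pays out
    net = payout - price_paid
    print(f"A is {a_is_true}: agent's net = {net:+.2f}")   # -0.20 either way
# The guaranteed 0.20 loss is the bookie's arbitrage profit.
```

The same structure is what the arbitrage point above exploits: incoherent betting prices plus a willingness to take both sides let a bookie lock in a profit without taking any view on the future.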
8 · Owen Cotton-Barratt · 1y
Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people holding this position.)

I think there are enough variables in the world that have some nonzero expected impact on the long term future that for very many actions we can usually hazard guesses about their impact on at least some such variables, and hence about the expected impact of the individual actions (of course in fact one will be wrong in a good fraction of cases, but we're talking about in expectation).

Note I feel fine about people saying of lots of activities "gee I haven't thought about that one enough, I really don't know which way it will come out", but I think it's a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.
A case against strong longtermism

Hi Vaden,

Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

Suppose, for example, that I am offered a choice: either bet on a six-... (read more)
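A quick sanity-check of the die point (illustrative only; the probability attaches to the six coarse-grained outcomes, however many ways each roll can be realised):

```python
import random

# The hypothesis "the die lands on 1" gets probability 1/6, no matter how many
# distinct ways of realising a roll we can describe (who shouts what while the
# die is in the air, etc.). A quick frequency check on a simulated fair die:
random.seed(0)
trials = 100_000
ones = sum(1 for _ in range(trials) if random.randint(1, 6) == 1)
print(ones / trials)   # roughly 0.167, i.e. close to 1/6
```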

5 · ben_chugg · 1y
Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique:

I largely agree with your take on time preference. One thing I'd like to emphasize is that thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, so let's focus on the future!". But the conclusion ignores that in practice we never have such unconditional knowledge of the future.

Re: the dice example: True - there are infinitely many things that can happen while the die is in the air, but that's not the outcome space about which we're concerned. We're concerned about the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6 vs 20 sided die case). Moreover, they're defined by us, because we've chosen that a particular mathematical technique applies relatively well to the situation at hand. When reasoning about all possible futures however, we're trying to shoehorn in some mathematics that is not appropriate to the problem (math is a tool - sometimes it's useful, sometimes it's not). We can't even write out the outcome space in this scenario, let alone define a probability measure over it.

Once you buy into the idea that you must quantify all your beliefs with numbers, then yes - you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary - again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly
4 · MichaelStJules · 1y
I think the probability of these events regardless of our influence is not what matters; it's our causal effect that does. Longtermism rests on the claim that we can predictably affect the longterm future positively.

You say that it would be overconfident to assign probabilities too low in certain cases, but that argument also applies to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than we align it, an intervention leading to a false sense of security and complacency, or the possibility that the future could be worse if we don't go extinct. Any intervention can backfire. Most will accomplish little. With longtermist interventions, we may never know, since the feedback is not good enough.

I also disagree that we should have sharp probabilities, since this means making fairly arbitrary but potentially hugely influential commitments. That's what sensitivity analysis and robust decision-making under deep uncertainty are for. The requirement that we should have sharp probabilities doesn't rule out the possibility that we could come to vastly different conclusions based on exactly the same evidence, just because we have different priors or weight the evidence differently.