Research analyst at Open Philanthropy. All opinions are my own.
And here’s the full list of the 57 speakers we featured on our website
That's not right: you listed these people as special guests, and many of them didn't give a talk. Importantly, Hanania didn't (according to the schedule).
I just noticed this, and it makes me think that "if someone rudely seeks out controversy, don't list them as a special guest" would be a big improvement over the status quo.
Here's one line of argument:
Edit: Oops, I accidentally switched to talking about "my on-reflection values" rather than "total utilitarian values". The former is ultimately what I care more about, though, so it is what I'm more interested in. But sorry for the switch.
What's the argument for why an AI future will create lots of value by total utilitarian lights?
At least for hedonistic total utilitarianism, I expect that a large majority of expected-hedonistic-value (from our current epistemic state) will be created by people who are at least partially sympathetic to hedonistic utilitarianism or other value systems that value a similar type of happiness in a scope-sensitive fashion. And I'd guess that humans are more likely to have such values than AI systems. (At least conditional on my thinking that such values are a good idea, on reflection.)
Objective-list theories of welfare seem even less likely to be endorsed by AIs, since they seem pretty specific to human values.
There are certainly some values you could have that would mainly be concerned with getting any old world with a large civilization. Or values that would think it morally appropriate to be happy that someone got to use the universe for what they wanted, and morally inappropriate to be too opinionated about who that should be. But I don't think that looks like utilitarianism.
I find it plausible that future humans will choose to create far fewer minds than they could. But I don't think that "selfishly desiring high material welfare" will require this. The Milky Way alone has enough stars for each currently alive human to get an entire solar system. Meanwhile, intergalactic colonization is probably possible (see here), and I think the stars in our own galaxy are less than 1-in-a-billion of all reachable stars. (Most of which are also very far away, which further contributes to their not being very interesting to use for selfish purposes.)
When we're talking about levels of consumption that are greater than a solar system, and that will only take place millions of years in the future, it seems like the relevant kind of human preference to look at is something like "aesthetic" preference. And so I think the relevant analogy is less that of present humans optimizing for their material welfare, and perhaps more something like "people preferring the aesthetics of a clean and untouched universe (or something else, like the aesthetics of a universe used for mostly non-sentient art) over the aesthetics of a universe which is packed with joy".
I think your point "We may seek to rationalise the former [I personally don’t want to live in a large mediocre world, for self-interested reasons] as the more noble-seeming latter [desire for high average welfare]" is the kind of thing that might influence this aesthetic choice. Where "I personally don’t want to live in a large mediocre world, for self-interested reasons" would split into (i) "it feels bad to create a very unequal world where I have lots more resources than everyone else", and (ii) "it feels bad to massively reduce the amount of resources that I personally have, to that of the average resident in a universe packed full with life".
compared to MIRI people, or even someone like Christiano, you, or Joe Carlsmith probably have "low" estimates
Christiano says ~22% ("but you should treat these numbers as having 0.5 significant figures") without a time-bound; and Carlsmith says ">10%" (see bottom of abstract) by 2070. So no big difference there.
I'll hopefully soon make a follow-up post with somewhat more concrete projects that I think could be good. That might be helpful.
Are you more concerned that research won't have any important implications for anyone's actions, or that the people whose decisions ought to change as a result won't care about the research?
Similarly, 'Politics is the Mind-Killer' might be the rationalist idea that has aged worst - especially for its influence on EA.
What influence are you thinking about? The position argued in the essay seems pretty measured.
Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational. [...]
I’m not saying that I think we should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party.
I think the strongest argument against EV-maximization in these cases is the Two-Envelopes Problem for Uncertainty about Brain-Size Valuation.
Note that, according to Wikipedia:
Elon bought Twitter in October 2022, after the program had already been online for a while. I don't know whether any important details changed after Elon joined, nor whether Twitter already had plans to expand the program. So I don't know how much credit Elon should get here vs. the previous owners of Twitter.