Tobias_Baumann

Comments

Common ground for longtermists

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - that is, to what extent are there actions that benefit only specific values? For instance, if we think that there will not be a lock-in or transformative technology soon, the best lever over the long-term future may be to nudge society in broadly positive directions, because the long-term future is simply too "chaotic" for more targeted attempts to reliably succeed. (However, overall I think it's unclear whether, or to what extent, that is true.)

Common ground for longtermists

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. at levels at least comparable to those driven by electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean you're fairly confident that, conditional on higher productivity growth happening, the cause would be AI? (Rather than, say, synthetic biology, genetic engineering, some other technological advance, or a social change resulting in more optimisation for productivity.)

While AI is perhaps the most plausible single candidate, it's still quite unclear, so I'd maybe say it's 25-30% likely that AI in particular will cause significantly higher levels of productivity growth this century.
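Roughly, these two estimates fit together as follows (just an illustrative back-of-the-envelope decomposition, assuming the conditional probability that AI is the cause, given significantly higher growth, is around 50-60%):

```latex
% Illustrative decomposition of my rough numbers above (not a precise model):
%   P(significantly higher productivity growth, any cause) ~ 0.5
%   P(AI is the cause | significantly higher growth)       ~ 0.5--0.6
P(\text{AI-driven growth})
  = P(\text{higher growth}) \times P(\text{AI is the cause} \mid \text{higher growth})
  \approx 0.5 \times (0.5\text{--}0.6)
  \approx 0.25\text{--}0.30
```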

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution of AI to American productivity growth between 2010 and 2100 was at least as large as the counterfactual contribution of electricity to American productivity growth between 1900 and 1940?" I think that the economist would probably agree -- let's say, 50% < p < 75% -- but I don't have a very principled reason for thinking this and might change my mind if I thought a bit more.

Interesting. So you generally expect (well, with 50-75% probability) AI to become a significantly bigger deal, in terms of productivity growth, than it is now? I have not looked into this in detail, but my understanding is that the contribution of AI to productivity growth right now is very small (and smaller than that of electricity).

If so, what do you think causes this acceleration? It could simply be that AI is early-stage right now, akin to electricity in 1900 or earlier, and the large productivity gains will arrive when key innovations diffuse through society at scale. (However, many forms of AI are already widespread.) Or it could be that progress in AI itself accelerates, or that linear progress in something like "general intelligence" translates into super-linear impact on productivity.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

Space governance is important, tractable and neglected

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

Max_Daniel's Shortform

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Representing future generations in the political process

Hi Michael,

Thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and a significant improvement from our perspective.

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)

By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

Representing future generations in the political process

Hi Tyler,

Thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism.

I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

I'm glad to see CLR take something of an interest in this topic

Might just be a typo, but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix the two up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.

Looking forward to reading it!
