I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.
This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.
Great stuff, thanks!
Thanks for the comment!
Could you expand on what you mean by the first part of that sentence, and what makes you say that?
I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.
It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing policy X or Y, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. Then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.
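The conditional-market mechanism described above can be sketched in a few lines. This is a minimal illustration, not a real market implementation: the function name, policies, and numbers are all hypothetical, and the key assumption shown is the standard one for conditional markets, namely that trades in markets conditioned on a policy that is never enacted are voided and refunded.

```python
# Minimal sketch of conditional prediction markets on a 2045 welfare proxy.
# All names and numbers are illustrative, not from the original comment.

def settle_conditional_market(enacted_policy, market_policy, forecast, outcome):
    """Return the payout for a probability forecast, or None if voided.

    A market on "proxy improves by 2045, conditional on policy P" only
    resolves if P is the policy actually enacted; otherwise all trades
    are refunded. That voiding is what makes the conditioning work.
    """
    if enacted_policy != market_policy:
        return None  # void: traders are refunded
    # Score the forecast against the realized binary outcome (0 or 1)
    # with a simple Brier-style payout (higher is better).
    return 1.0 - (forecast - outcome) ** 2

# Before any policy is enacted, the two markets' prices can be compared
# directly as conditional probability estimates:
price_if_x = 0.70  # market-implied P(proxy improves | policy X passes)
price_if_y = 0.55  # market-implied P(proxy improves | policy Y passes)
better_policy = "X" if price_if_x > price_if_y else "Y"
```

The comparison at the end is the decision-relevant part: the markets are useful even though at most one of them ever resolves, because their prices before enactment estimate the conditional outcomes.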
I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)
By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.
Thanks for the detailed and thoughtful comment!
I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?
Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.
I don't know if you meant to focus only on those reforms I mention which attempt to create literal representation of future generations, or if you meant to bring into focus all attempts to ameliorate political short-termism.
I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.
I'm glad to see CLR take something of an interest in this topic
Might just be a typo, but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)
As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.
Looking forward to reading it!
Hey Jamie, thanks for the pointer! I wasn't aware of this.
Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.
I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.
Great work, thanks for sharing!
Great post - I think it's extremely important to explore many different problem areas!
Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.
Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.
Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.
It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.
If there’s more I’m missing, feel free to provide links in the comment section.
Olaf van der Veen has written a thesis on this, analysing four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable, but the evidence of possible impacts is often not very strong.
I myself have also written about electoral reform as a possible way to reduce polarisation, and malevolent individuals in power also seem closely related to increased polarisation.
Amazing work, thanks for writing this up!