Also this: https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/
This seems relevant: https://www.overcomingbias.com/2009/09/limits-to-growth.html
I haven't read it, but the title of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"
Is The YouTube Algorithm Radicalizing You? It’s Complicated.
Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.
I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.
But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
Yep that is what I'm saying. I think I don't agree but thanks for explaining :)
Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').
There are also more applied AI/tech-focused economics questions that seem important for longtermists (e.g. if GPI's work seems too abstract for you).
Agree with Marisa that you'd be well suited to do an AMA