The Non-humans and the long-term future tag is for posts relevant to questions such as:
- To what extent (if at all) is longtermism focused only on humans?
- To what extent should longtermists focus on improving wellbeing (or other outcomes) for humans? For other animals? For artificial sentiences? For something else?
- Are existential risks just about humans?
- Will most moral patients in the long-term future be humans, other animals, or something else? By how large a margin?