Juliana v. United States is an ongoing lawsuit. Notably, it names "future generations" among the plaintiffs in the case.
I don't know much law, but I hear precedents are important, so maybe EAs concerned about the long-term future should be especially interested in ensuring that this case sets a good one.
https://www.ourchildrenstrust.org/us/federal-lawsuit/
I heard about this from someone I met yesterday who studies the case. I'm going to meet with him again soon and ask more questions. What questions should I ask?
So far I intend to follow the importance/neglectedness/tractability framework and ask questions like: What is the organization's budget? Is there really no other precedent; are they truly the first case of this kind? Is it too late to change anything about their approach, or are there still decisions to be made? But I think people with more legal background than me (I have zero) could suggest better questions to ask.
Also, I'm interested in hearing whether I've completely misjudged the expected value of looking into this. Maybe this sort of thing actually isn't that important or tractable?
Thanks in advance.
There is also the Future Claimants' Representative: apparently a concept from US bankruptcy/tort law that Ian Baucom, a US academic, has applied in environmental and museology contexts. This is probably tangential to your question, but I'm interested in fleshing out what an FCR would look like if it represented the interests of future generations of AIs that are likely to enter the moral circle. (That is: when we turn off a GPT-n, or make big changes to an advanced, human-level AI, are we doing something we wouldn't be happy doing to a human or to another entity that is possibly, roughly, morally equivalent?) I think Bostrom might have mentioned something like this in one of his digital-minds papers.
If anyone has any thoughts or wants to work on this with me, please get in touch (I'm thinking of it as a video and/or a paper/essay).