A (possibly wrong) sense I have about being an elected politician is that because you are beholden to your constituents, it may be difficult to act independently and support the policies with the best consequences for society (as these may conflict with your constituents' perceptions or immediate interests). Did you find this to be true, or were there examples of it?
Another related question regards representing future generations. I feel like a democratic process encourages short-term policies for various reasons: constituents' impatience, interest groups, the reversibility of policies, etc. Did you find this to be true? Were longer-timeline policies, those whose effects arrive further in the future, generally neglected?
Re 1. That makes a lot of sense now. My intuition still leans towards trajectory change interacting with XRR, because the best way to reduce x-risks that appear 500+ years from now may be to change the trajectory of humanity (e.g. stronger institutions, cultural shifts, etc.). But I do think your model is valuable for illustrating the intuition you mentioned: that it seems easier to create a positive future via XRR than via trajectory change aimed at increasing quality.
Re 2, 3. I think that is reasonable, and maybe when I mentioned the meta-work before, it was due to my conflating GPR with trajectory change.
Hey Alex. Really interesting post! To have a go at your last question: my intuition is that the spillover effects of GPR on increasing the probability of a good future cannot be neglected. I suppose my view differs in that where you define "patient longtermist work" as GPR and distinct from XRR, I don't see that it has to be. For example, I may believe that XRR is the more impactful cause in the long run, but also believe that I should wait a couple hundred years before putting my resources towards it. Or that we should first figure out whether we are living at the hinge of history (which I'd classify as GPR). Does that make sense?
I suppose one other observation is that working on s-risks typically falls within the scope of XRR and clearly also improves the quality of the future, but maybe this runs against your assumption of safely reaching technological maturity.