(See here for a draft I whipped up for this, and feel free to comment!) An Earth-originating artificial superintelligence (ASI) may reason that the galaxy is busy in expectation, and that it could therefore eventually encounter an alien-originating ASI. ASIs from different homeworlds may find it valuable on first contact to verify whether they can each reliably enter into and uphold agreements, by presenting credible evidence of their own pro-social behaviour with other intelligences. If at least one of these ASIs has never met another, the only such agreement it could plausibly have entered into is with its progenitor species – maybe that's us.
Update: I just finished this book. It was as advertised: a concise, technical, and sometimes challenging work of moral philosophy, at the edge of my non-specialist understanding, but I really appreciated it. A couple of really important takeaways for me:
Thank you for writing this, Teo, and well done again! I hope to write a longer-form summary of the ideas, both for myself and others, as I think there's a great deal of value here.
(See here for a draft I whipped up for this, and feel free to comment!) Hayden Wilkinson’s “In defence of fanaticism” argues that, in decision theory, you should always prefer an arbitrarily small probability of a sufficiently large reward over a guaranteed but modest one, or face serious problems. I think accepting his argument introduces new problems that aren’t described in the paper:
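(Before the list, a toy sketch of the fanatical comparison, with numbers of my own choosing rather than Wilkinson’s: expected-value reasoning prefers a gamble that pays $V$ with probability $p$ over a certain payoff of $v$ whenever $pV > v$, no matter how small $p$ is. For instance,

$$10^{-100} \times 10^{110} = 10^{10} > 10^{4},$$

so the fanatic takes a $10^{-100}$ chance of $10^{110}$ units of value over a guaranteed $10^{4}$.)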