Jérémy is an AI research scientist & engineer in Toulouse, France. He holds a PhD in AI, is building career capital, and has no idea when he'll make the jump to AI policy. He's also a member of Altruisme Efficace France.
Nice initiative, thanks!
Plugging my own list of resources (last updated April 2020, next update before the end of the year).
That's much more specific, thanks. I'll answer with my usual pointers!
I'd like to answer this. I'd need some extra clarification first, because the introductions I use depend heavily on the context:
(if the answer is "all of the above", I can work with that too, but it will be edited for brevity)
In practice the Pareto frontier isn't necessarily static, because background variables may change over time. As long as the process of moving toward the frontier is much faster than the speed at which the frontier itself shifts, though, we'd still expect the same qualitative motion: move toward the frontier, then skate along it.
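To make the timescale-separation point concrete, here's a toy numerical sketch (the rates are made up; the only thing that matters is that the climb rate dwarfs the drift rate):

```python
# Toy model: an agent climbs toward a one-dimensional "frontier" B(t)
# that drifts slowly outward. With climb_rate >> drift_rate, the agent
# first closes the gap, then skates along the moving frontier.
climb_rate, drift_rate = 1.0, 0.05   # assumed values; only their ratio matters
x, B = 0.0, 10.0                     # agent position and frontier position
for t in range(31):
    B += drift_rate                  # frontier slowly moves away
    x = min(x + climb_rate, B)       # agent moves toward the frontier
    if t % 10 == 0:
        print(f"t={t:2d}  x={x:5.2f}  B={B:5.2f}  gap={B - x:4.2f}")
```

After a transient of about ten steps the gap hits zero and stays there: from then on, the agent's position simply tracks the frontier.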
For a model of how conflicts of optimization (especially between agents pursuing distinct criteria) may evolve when the resource pie grows (i.e. when the Pareto frontier moves away from the origin), see Paretotopian Goal Alignment (Eric Drexler, 2019).
All parties can share an interest in expanding the Pareto frontier, since doing so creates opportunities for everyone to score better simultaneously on their respective criteria.
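A quick Python sketch of that point (the payoff values are entirely made up for illustration): compute which joint outcomes are Pareto-optimal, then "grow the pie" and note that a point dominating the old compromise becomes feasible.

```python
import numpy as np

def pareto_frontier(points):
    """Return the subset of points not Pareto-dominated by any other
    point (higher is better on every axis)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

# Hypothetical joint outcomes: (agent A's score, agent B's score).
outcomes = [(3, 1), (2, 2), (1, 3), (1, 1)]
print(pareto_frontier(outcomes))   # (3,1), (2,2), (1,3) survive

# "Growing the pie": scaling every outcome by 1.5 moves the frontier
# outward, so (3, 3) -- strictly better for *both* agents than the old
# compromise (2, 2) -- becomes feasible.
grown = [(1.5 * a, 1.5 * b) for a, b in outcomes]
print(pareto_frontier(grown))
```

On the original set, (2, 2) is the symmetric compromise; after scaling, (3, 3) is feasible and strictly better for both parties. That's why everyone can prefer expanding the frontier over fighting for position on a fixed one.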
Added my bio. It needs more work. Thanks as well for the nudge!
Furthering your "worse than Terminator" reframing in your Superintelligence section, I'll quote Yudkowsky (it's said in jest, but the message is straightforward):
Here, "AI risk is not like Terminator" attempts to dismiss the eventuality of a fair fight... and rhetorically that could be reframed as "yes, think Terminator except much more lopsided in favor of Skynet. Granted, the movies would have been shorter that way".