Jérémy is an AI research scientist & engineer in Toulouse, France. He holds a PhD in AI, is building career capital, and has no idea when he'll make the jump to AI policy. He's also a member of Altruisme Efficace France.


AI Risk is like Terminator; Stop Saying it's Not

Furthering your "worse than Terminator" reframing in your Superintelligence section, I'll quote Yudkowsky (it's said in jest, but the message is straightforward):

Dear journalists: Please stop using Terminator pictures in your articles about AI. The kind of AIs that smart people worry about are much scarier than that! Consider using an illustration of the Milky Way with a 20,000-light-year spherical hole eaten away.

Here, "AI risk is not like Terminator" attempts to dismiss the possibility of a fair fight... and rhetorically that could be reframed as "yes, think Terminator, except much more lopsided in favor of Skynet. Granted, the movies would have been shorter that way".

List of AI safety courses and resources

Nice initiative, thanks!

Plugging my own list of resources (last updated April 2020, next update before the end of the year).

Best resources for introducing longtermism and AI risk?

That's much more specific, thanks. I'll answer with my usual pointers!

Best resources for introducing longtermism and AI risk?

I'd like to answer this. I'd need some extra clarification first, because the introductions I use highly depend on the context:

  • 30-second pitch to spark interest, or 15-minute intro to a captive (and already curious) meetup audience?
  • In-person, by mail, by chat, by voice?
  • 1-to-1, or 1-to-many?

(If the answer is "all of the above", I can work with that too, but it will be edited for brevity.)

Moloch and the Pareto optimal frontier

In practice the Pareto frontier isn't necessarily static, because background variables may change over time. As long as the movement towards the frontier is much faster than the frontier's own shift, though, we'd still expect the same dynamic: motion towards the frontier, then skating along it.

For a model of how conflicts of optimization (especially between agents pursuing distinct criteria) may evolve when the resource pie grows (i.e. when the Pareto frontier moves away from the origin), see Paretotopian Goal Alignment (Eric Drexler, 2019).

There can be a common interest for all parties in expanding the Pareto frontier, since a larger frontier creates opportunities for everyone to score better simultaneously on their respective criteria.
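To make the frontier idea concrete, here is a minimal Python sketch (my own illustration, not from the post or Drexler's paper): given a set of joint outcomes scored on each party's criterion, it keeps only the non-dominated ones, and shows how "expanding the pie" moves every outcome away from the origin so all parties can do strictly better at once.

```python
def pareto_front(points):
    """Return the subset of points not dominated by any other point.

    A point q dominates p when q is at least as good as p on every
    criterion and strictly better on at least one (maximization).
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] >= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two criteria, e.g. (agent A's score, agent B's score) for each joint outcome.
outcomes = [(1, 4), (2, 3), (3, 3), (4, 1), (2, 2)]
print(pareto_front(outcomes))  # → [(1, 4), (3, 3), (4, 1)]

# "Expanding the pie": doubling the resource base pushes the whole frontier
# away from the origin, so every party can score strictly better than before.
expanded = [(2 * a, 2 * b) for (a, b) in outcomes]
print(pareto_front(expanded))  # → [(2, 8), (6, 6), (8, 2)]
```

This is the lopsided-pie intuition in miniature: competition over *which* frontier point to occupy remains zero-sum along the frontier, but growing the frontier itself is positive-sum.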

You Should Write a Forum Bio

Added my bio. It needs more work. Thanks as well for the nudge!