Jérémy Perret

Board member @ Altruisme Efficace France
71 karma · Joined May 2022 · Working (6-15 years) · Seeking work · Lyon, France
www.jrmyp.net

Bio

I write and talk about AI alignment, longtermism and EA more generally, in France. I'm active in French groups, trying to help grow the local EA community. I want to spend more time creating AI alignment content in French, lowering the language barrier.

How others can help me

If you've run AI alignment reading groups, please tell me the most intriguing questions you've encountered!

Posts (1)

Comments (14)

Writing this from the "Writing on the EA Forum" workshop at EAGxBerlin, wanted to post some... post ideas:

(a) A quick report on how the conference went, and the particular vibes I encountered, expanding on my "like an academic conference with the annoying prestige layers removed" idea.

(b) Content creation as lowering barriers to entry for a given field, especially language barriers. It seems like an obvious point, but one I've never seen written up anywhere.

(c) Something about the impact of sharing useful trivia and life advice. I have a recent example about hearing damage/loss prevention; what's the marginal value of these conversations?

(Thanks to Lizka for the nudge and excellent presentation)

Slightly humorous answer: it should be the most pessimistic organization out there (I had MIRI in mind, but if we're picking the winner in advance, surely we can craft an organization that goes even further on that scale).

My point is the same as jimrandomh's: if there's an arms race that actually goes all the way up to AGI, safety measures will get in the way of speed, corners will be cut, and disaster will follow.

This assumes, of course, that any unaligned AGI system will cause a non-recoverable catastrophe, independently of the good intentions of its designers.

If this assumption proves wrong, then the winner of that race still holds the most powerful and versatile technological artifact ever designed; the kind of organization to wield that kind of influence should be... careful.

I'm not sure which governance design best achieves the carefulness that is needed in that case.

I cannot speak for all EA folks; here's a line of reasoning I'm patching together from the "AGI-never is unrealistic" crowd.

Most AI research isn't explicitly geared towards AGI; while there are a few groups with that stated goal (for instance, DeepMind), most of the AI community wants to solve the next least difficult problem in a thousand subdomains, not the more general AGI problem.

So while peak-performance progress may be driven by the few groups pushing for general capability, for the bulk of the field "AGI development" is just not what they do. Which means, if all the current AGI groups stop working on it tomorrow, "regular" AI research still pushes forward.

One scenario for "everyone avoids generality very hard while still solving as many problems as possible" is the Comprehensive AI Services framework. That is one pathway, not without safety concerns.

However, as Richard Ngo argues, "Open-ended agentlike AI seems like the most likely candidate for the first strongly superhuman AGI system."

To sum up:

  • regular, not-aiming-for-AGI AI research will very likely attempt to cover as many tasks as possible, as most of the field already does, and will eventually, in aggregate, cover a wide enough range of capabilities that alignment issues kick in;
  • more general agents are still likely to appear before we get there, with nothing impeding progress (for instance, while DeepMind has a safety team aware of AGI concerns, this doesn't prevent them from advancing general capability further).

A separate line of reasoning argues that no one will ever admit (in time) we're close enough to AGI that we should stop for safety reasons; so that everyone can claim "we're not working on AGI, just regular capabilities" until it's too late.

In that scenario, stopping AGI research amounts to stopping/slowing down AI research at large, which is also a thing being discussed!

Hi! Thanks for this post. What you are describing matches my understanding of Prosaic AGI, where no significant technical breakthrough is needed to get to safety-relevant capabilities.

Discussion of the implications of scaling large language models is a thing, and your input would be very welcome!

On the title of your post: the term "hard left turn" is left undefined; I assume it's a reference to Soares's "sharp left turn".

please send a short paragraph [...] to clara@asteriskmag.com

Apparently, yes!

Furthering your "worse than Terminator" reframing in your Superintelligence section, I will quote Yudkowsky (it's said in jest, but the message is straightforward):

Dear journalists: Please stop using Terminator pictures in your articles about AI. The kind of AIs that smart people worry about are much scarier than that! Consider using an illustration of the Milky Way with a 20,000-light-year spherical hole eaten away.

Here, "AI risk is not like Terminator" attempts to dismiss the eventuality of a fair fight... and rhetorically that could be reframed as "yes, think Terminator except much more lopsided in favor of Skynet. Granted, the movies would have been shorter that way".

Nice initiative, thanks!

Plugging my own list of resources (last updated April 2020, next update before the end of the year).
