All of Jérémy Perret's Comments + Replies

6
Jorgen_Ljones
1y
A Norwegian translation of this post is available here.
8
JoanMM
1y
A Spanish translation is now also available.

Writing this from the "Writing on the EA Forum" workshop at EAGxBerlin; I wanted to post some... post ideas:

(a) A quick report on how the conference went, and the particular vibes I encountered, expanding on my "like an academic conference with the annoying prestige layers removed" idea.

(b) Content creation as lowering barriers to entry for a given field, especially language barriers. It seems like an obvious point, but one I've never seen written down anywhere?

(c) Something about the impact of sharing useful trivia and life advice. I have a recent example about hearing damage/loss prevention; what's the marginal value of these conversations?

(Thanks to Lizka for the nudge and excellent presentation)

Slightly humorous answer: it should be the most pessimistic organization out there (I had MIRI in mind, but surely, if we're picking the winner in advance, we can craft an organization that goes even further on that scale).

My point is the same as jimrandomh's: if there's an arms race that actually goes all the way up to AGI, safety measures are going to get in the way of speed, corners will be cut, and disaster will follow.

This assumes, of course, that any unaligned AGI system will be the cause of non-recoverable catastrophe, independently from the good i... (read more)

I cannot speak for all EA folks; here's a line of reasoning I'm patching together from the "AGI-never is unrealistic" crowd.

Most AI research isn't explicitly geared towards AGI; while a few groups have that stated goal (for instance, DeepMind), most of the AI community wants to solve the next-easiest problem in each of a thousand subdomains, not the more general AGI problem.

So while peak-performance progress may be driven by the few groups pushing for general capability, for the bulk of the field "AGI development" is just not what they do. Which ... (read more)

3
Noah Scales
2y
That's very interesting, I will follow up on those links, and the other links I have received in comments from other helpful people. Huh, Eric Drexler is one of the authors, the same one who popularized nanotechnology back when I was a teen, I think... Thanks.

Hi! Thanks for this post. What you are describing matches my understanding of Prosaic AGI, where no significant technical breakthrough is needed to get to safety-relevant capabilities.

Discussion of the implications of scaling large language models is a thing, and your input would be very welcome!

On the title of your post: the term "hard left turn" is left undefined; I assume it's a reference to Soares's "sharp left turn".

please send a short paragraph [...] to clara@asteriskmag.com

Apparently, yes!

Furthering your "worse than Terminator" reframing in your Superintelligence section, I will quote Yudkowsky here (it's said in jest, but the message is straightforward):

Dear journalists: Please stop using Terminator pictures in your articles about AI. The kind of AIs that smart people worry about are much scarier than that! Consider using an illustration of the Milky Way with a 20,000-light-year spherical hole eaten away.

Here, "AI risk is not like Terminator" attempts to dismiss the eventuality of a fair fight... and rhetorically that could be refra... (read more)

Nice initiative, thanks!

Plugging my own list of resources (last updated April 2020, next update before the end of the year).

That's much more specific, thanks. I'll answer with my usual pointers!

I'd like to answer this. I'd need some extra clarification first, because the introductions I use depend heavily on the context:

  • 30-second pitch to spark interest, or 15-minute intro to a captive (and already curious) meetup audience?
  • In-person, by mail, by chat, by voice?
  • 1-to-1, or 1-to-many?

(if the answer is "all of the above", I can work with that too, but it will be edited for brevity)

4
aogara
4y
Cool! Thanks for asking for clarification, I didn't quite realize how much ambiguity I left in the question. I'm mainly interested in persuading people I know personally who are already curious about EA ideas. Most of my successful intros in these situations consist of (a) an open-ended, free-flowing conversation, followed by (b) sending links to important reading material. Conversations are probably too personal and highly varied for advice that's universally applicable, so I'm most interested in the links and reading materials you send to people. So, my question, better specified: What links do you send to introduce AI and longtermism?

In practice, the Pareto frontier isn't necessarily static, because background variables may be changing over time. As long as the process of moving towards the frontier is much faster than the speed at which the frontier itself changes, though, we'd still expect the same motion: approaching the frontier, then skating along it.
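
To make the dynamic concrete, here's a minimal toy sketch (my own illustration, not taken from the talk linked below; all parameter values are made-up assumptions): a point ascends a fixed linear utility inside a feasible set {x + y ≤ r(t)} whose frontier drifts slowly outward. The only property that matters is that the optimization step dominates the frontier's drift.

```python
import numpy as np

# Toy model: the feasible set is {x + y <= r(t)} in the positive quadrant,
# so its Pareto frontier is the line x + y = r(t), drifting slowly outward.
# Hypothetical numbers: optimization moves 0.01 per step, the frontier 0.001.

def frontier_radius(t, r0=1.0, drift=0.001):
    """Position of the frontier x + y = r at time t (slow outward drift)."""
    return r0 + drift * t

def project(p, r):
    """Project p back onto the feasible half-plane x + y <= r."""
    excess = max(0.0, p.sum() - r)
    return p - excess / 2  # step perpendicular to the frontier line

def simulate(steps=600, step_size=0.01, weights=np.array([0.6, 0.4])):
    p = np.zeros(2)  # start well inside the feasible set
    on_frontier = []
    for t in range(steps):
        r = frontier_radius(t)
        p = project(p + step_size * weights, r)  # ascend utility w.p, stay feasible
        on_frontier.append(bool(abs(p.sum() - r) < 1e-9))
    return on_frontier

if __name__ == "__main__":
    hits = simulate()
    first = hits.index(True)
    print(f"reached the frontier at step {first}; "
          f"stayed on it for the remaining {len(hits) - first} steps: {all(hits[first:])}")
```

With these numbers the point should hit the frontier after roughly a hundred steps and never leave it afterwards, sliding along the line towards the axis its utility weights favor: exactly the "move to the frontier, then skate along it" picture.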

For a model of how conflicts of optimization (especially between agents pursuing distinct criteria) may evolve when the resource pie grows (i.e. when the Pareto frontier moves away from the origin), see Paretotopian Goa... (read more)

Added my bio. It needs more work. Thanks as well for the nudge!