TL;DR:

UCLA EA ran an AI timelines retreat for community members interested in pursuing AI safety as a career. Attendees sought to form inside views on the future of AI based on an object-level analysis of current AI capabilities.

We highly recommend that other university groups hold similar small (<15-person), object-level retreats. We more tentatively recommend AI timelines as the topic, with caveats discussed below.

Why did we run the UCLA EA AI Timelines Retreat?

Most people in the world do not take AI risk seriously. On the other hand, some prominent members of our community believe we have virtually no chance of surviving this century due to misaligned AI. These are wild-seeming takes with massive implications. We think that assessing AI risk should be a serious and thoughtful endeavor. We sought to create space for our participants to form an inside view on the future development of AI systems based on a technical analysis of current AI systems. 

We aimed for participants to form evidence-based views on questions such as:

  • What are the concrete ways AI could harm us in the near future and beyond?
  • What are the most probable ways AGI could be developed?
  • How do recent developments in AI (such as DeepMind’s GATO) update your model of the urgency of AI risk?
  • Where can we focus our interventions to prevent the greatest risks from future AI systems?

What did the weekend look like?

The retreat began Friday evening and ended Sunday afternoon. We had 12 participants from UCLA, Harvard, UCI, and UC Berkeley, with a 1:3 ratio of grad students to undergrads. Participants already had an interest in AI safety, and most were planning to pursue it as a career.

We began with structured debates and presentations, then spent the bulk of Saturday writing forecasting reports on the development of AI systems; participants could work on these reports in teams or individually. Sunday was devoted to positive peer pressure: applying to AI safety opportunities and working on AI safety-related projects.

Govind Pimpale, Mario Peng Lee, Aiden Ament, and I (Lenny McCline) organized the content of the weekend. Leilani Bellamy from Canopy Retreats ran operations for the event, with help from Vishnupriya Bohra and Migle Railaite.

You can check out this Google Drive folder for resources and selected reports from the retreat.

What went well?

  • We were able to keep the conversation on the object level throughout the weekend, which is what we were aiming for.
  • Attendees seemed to gain what we wanted them to. Here is some well-articulated feedback from the event:
    • “I now have a proper inside view of timelines. I wasn't just very uncertain about timelines, I had no personal model, so I didn't have the ability to really analyze new evidence into updates or make uncertain predictions; I just relied on the weighted average of the views of some senior researchers. I now have a much clearer sense of what I actually predict about the future, including both a distribution for when I expect AGI, but also many predictions about what innovations I expect in what order, what I think is necessary for AGI, and the effects of pre-AGI AI.” 

Having the clear goal of writing a timelines report kept the conversations focused and productive. Attendees could have written timelines reports without attending the retreat, but having a wide range of passionate AI safety people to bounce ideas off of was reportedly quite helpful during the writing process. All of our attendees found the weekend valuable (9.1 average net promoter score) and came away with a clearer picture of AI timelines grounded in the capabilities of current AI systems.

The small size of the retreat encouraged deeper conversations. With only 12 attendees, people found one another more approachable. We hypothesize that having more than 15 attendees would have strongly diminished this effect.

What could be improved?

  • A more intellectually diverse pool of attendees
    • Most attendees endorsed shorter timelines
    • We would have benefited from having ML experts with longer timelines
  • Conversations weren’t directed toward alignment research agendas
    • Understanding research agendas seems more important for determining attendees’ next steps
    • Timelines are simply the more approachable topic

Having attendees with more varied opinions would have produced more thoroughly tested beliefs. While people presented a wide range of ideas, long timelines were underrepresented: most attendees forecasted AGI in under 15 years (heavily influenced by the new GATO paper from DeepMind), and we would have liked ML experts with longer timelines to share a more skeptical viewpoint.

For improvements related to the programming details of the event, check out this commented schedule, or message Lenny on the Forum for the debrief document.

What’s next for UCLA EA?

We’re happy with the outcome of this retreat and have a strong team poised to run monthly weekend retreats similar to this one starting in the fall (mostly AI safety, with occasional biosecurity topics).

Feel free to fill out this form if you’d like to pre-apply for future retreats. All retreats will be hosted in Westwood, CA, and anyone in the community may apply. Note that this is an experiment.

If you have any suggestions for future retreat topics (e.g., an ELK weekend), please write them in the comments!
 

Comments

"The retreat lasted from Friday evening to Sunday afternoon and had 12 participants from UCLA, Harvard, UCI, and UC Berkeley.  There was a 1:3 ratio of grad students to undergrads" 

So it was 9 undergrads and 3 grads interested in AI safety? This sounds like a biased sample. Not one postdoc, industry researcher, or PI? 

To properly evaluate timelines, I think you should have some older more experienced folks, and not just select AI safety enthusiasts, which biases your sample of people towards those with shorter timelines.  

How many participants have actually developed AI systems for a real-world application? How many have developed an AI system for a non-trivial application? In my experience, many people working in AI safety have very little experience with real-world AI development, and many I have seen have none whatsoever. That isn't good when it comes to gauging timelines, I think. When you get into the weeds and learn "how the sausage is made" to create AI systems (i.e., true object-level understanding), I think it makes you more pessimistic on timelines for valid reasons. For one thing, you are exposed to weird, unexplainable failure modes which are not published or publicized.

Really cool! I was hoping to attend but had to be home for a family event. Would be super interested to see any participants summarize their thoughts on AI timelines, or a poll of the group's opinions. 

One quick question about your post -- you mention that some in the community think there is virtually no chance of humanity surviving AGI and cite an April Fool's Day post. (https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) I'm not sure if I'm missing some social context behind this post, but have others claimed that AGI is basically certain to cause an extinction event in a non-joking manner?

Good question. Short answer: despite being an April Fools post, it seems to encapsulate much of what Yudkowsky actually believes. The social context is that the post is joking in its tone and content, but not so much in the attitude of the author; sorry I can't link to anything to further substantiate this. I believe Yudkowsky's general policy is to not put numbers on his estimates.

Better answer: Here is a somewhat up-to-date database of existential risk estimates from some folks in the community. You'll notice these are far below near-certainty.

One of the studies listed in the database is this one, in which a few researchers put the chance of doom quite high.

Thanks for the reply. I had no idea the spread was so wide (<2% to >98% in the last link you mentioned)!

I guess the nice thing about most of these estimates is they are still well above the ridiculously low orders of magnitude that might prompt a sense of 'wait, I should actually upper-bound my estimate of humanity's future QALYs in order to avoid getting mugged by Pascal.' It's a pretty firm foundation for longtermism imo.

We aimed for participants to form evidence-based views on questions such as:

[...]

  • What are the most probable ways AGI could be developed?

A smart & novel answer to this question can be an information hazard, so I'd recommend consulting with relevant people before raising it in a retreat.
