The AI Safety Unconference brings together people interested in all aspects of AI safety, from technical AI safety problems to issues of AI governance. As an unconference, it aims to foster valuable social interactions between participants through moderated discussion groups, one-on-ones, lightning talks, and free-form interactions.
Date: Monday November 28 from 9:00 to 16:00, in New Orleans, alongside NeurIPS 2022.
Location: Near the Convention Center - the exact location will be communicated directly to registered participants.
Fill in the application form.
The event is private and free, with a maximum of 100 participants.
Join the chat room on Matrix to discuss online before or during the event.
Contribute a facilitated discussion (30 min) or a lightning talk (10 min). We also welcome other activity ideas - feel free to reach out to the organizers about that.
- 09:00-09:30 - Event opening, breakfast is served
- 09:30-12:00 - Facilitated discussions and 1:1s
- 12:00-13:30 - Lightning talks, lunch is served
- 13:30-15:30 - Facilitated discussions and 1:1s
- 15:30-16:00 - Event closing
Vegan breakfast and lunch are provided, along with all-day drinks and snacks.
The Swapcard app is used for scheduling 1:1 meetings with other participants and registering to facilitated discussions.
We have confirmed participants from the following organizations: Mila, Stanford University, Anthropic, OpenAI, UC Berkeley, University of Toronto, ETH & Max Planck Institute, University of Cambridge, Vector Institute, NYU, ETH Zurich, DeepMind, Oxford, MIT, and more.
Testimonials from past events (2018, 2019):
- A great way to meet the best people in the area and propel daring ideas forward. — Stuart Armstrong
- The event was a great place to meet others with shared research interests. I particularly enjoyed the small discussion groups that exposed me to new perspectives. — Adam Gleave
- Center for AI Safety's About AI Risk
- Krakovna's AI safety resources
- Alignment newsletter
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
The event is organized in partnership with the Center for AI Safety.
- Orpheus Lummis
- Mauricio H. Luduena
Thanks to Nisan Stiennon for funding the event.
For any questions or feedback, reach out to email@example.com.
Have many people committed to come? What background do the organizers have with AI safety or research events?
This sounds really great in principle, and I'm tentatively interested in joining, but this looks worryingly vague and last-minute from an initial look, so I'd want to see more evidence that there'll be a critical mass of interested people there before I commit.
Personal update: The flight that I'd need in order to make the timing work is sold out, so I can't make it in any case. :(
Hi Sam. Thanks for the feedback, and sorry that the event wasn't put together earlier.
I've included more information:
I've tried joining the Matrix chat room, but got the following error:
What do I need to do to join?
It should work now. I had forgotten to make it available. Thanks for joining!