At EA UC Berkeley, we’re launching an ongoing series of contests called the Artificial Intelligence Misalignment Solutions (AIMS) series. The second contest in the series, the Distillation Contest, is now open to any student enrolled in a university or college: here are our interest and submission forms! The contest offers prizes of up to $2,500 and closes on May 20th. This blog post restates the information on our website, with a bit more explanation of the contest's purpose.
- A huge thank you to Akash for creating the infrastructure and support that allow this project to launch!
- This competition is for distillations of posts, papers, and research agendas. For short-form arguments for the importance of AI safety, see the AI Safety Arguments Competition.
I think that it is currently difficult for university students to find tangible ways to engage with AI Safety. Generally, by creating a series of AI Safety contests, I hope to:
- Help build social capital for students who are interested in Alignment and potentially good at it.
- Create ways for people to test their fit for Alignment work.
- Create a “brand” around this contest series over time so that CS students recognize its name and winners recommend the contests to their friends. Hopefully, this name recognition will also make it easier to form partnerships with CS organizations.
For this specific contest, I’m inspired by the arguments that the field of AI Alignment needs more distillers to improve communication within the field, as well as to make their research accessible to a wider audience. The Distillation Contest aims to produce value by:
- Recruiting CS students who have never heard of EA or Alignment before (I will be doing this outreach at UC Berkeley through advertising, but other organizers are welcome to advertise to their own groups for recruitment).
- Increasing the engagement of students who are already interested in Alignment.
- Potentially producing useful distillations of Alignment research and increasing accessibility to said research.
The Distillation Contest asks that participants:
- 1) Pick an article/post/research paper on AI Alignment/Safety (ideally from our list below) that would benefit from being more clearly explained.
- 2) Indicate which ideas or sections of their chosen research should be distilled. Applicants can either distill a whole post/article, a specific part of the post/article, or multiple posts/articles.
- 3) Create a distillation: a clearer explanation of the research, along with a new example or new application of the research.
- 4) Optionally: if the research you’re distilling is trying to solve a problem, you can attempt an additional solution to that problem and include it in your response.
What makes a good distillation?
A good distillation explains the most confusing parts of another piece of writing; the value of distillation lies in creating new ways to understand confusing concepts or dense technical prose. A good distillation should also help readers see how the distilled ideas relate to other Alignment research. Because of this, creating one will likely require participants to read related research beyond the post they are distilling in order to make sure they fully understand the ideas it presents.
As an example of a great distillation, Holden Karnofsky, after creating the Most Important Century Series, created a roadmap to make the series more digestible and navigable. Additionally, Scott Alexander has distilled multiple complex dialogues (and even a meme) in order to make them more accessible.
We encourage applicants to choose posts/articles to distill from the list below. Applicants may propose their own posts/articles outside this list, although the judges may decide that those articles are not convoluted enough to need distillation; it’s therefore recommended that applicants distill from the list. (This list may change over time.)
- Technical research papers from the Alignment Fundamentals Curriculum, especially the optional readings
- Richard Ngo’s AGI Safety from First Principles sequence
- Evan Hubinger’s Risks from Learned Optimization sequence
- John Wentworth’s posts (see the first comment here)
- Late 2021 MIRI Conversations
- What Failure Looks Like
- Eliciting Latent Knowledge technical report
- $2,500 - One prize available for the 1st place submission.
- $1,250 - One prize available for the 2nd place submission.
- $500 - Up to 5 prizes available.
- $250 - Up to 10 prizes available.
All prize winners’ names will be posted on the EA Berkeley website, and selected distillations may optionally be posted there as well.
Distillations will be scored on the following factors:
- Depth of understanding
- Clarity of presentation
- Rigor of work
- Concision/length (longer submissions will be expected to present more information than shorter ones)
- Originality of insight
Preference may be given to distillations that:
- Synthesize multiple sources
- Work well as an accessible introduction to a topic
There are a few other purposes to this contest that I did not list above but may write about in a future forum post! There are also likely some great articles worth distilling beyond the current list of recommendations (which was chosen by Akash Wasil). If you have top recommendations for articles you’d like to see distilled, please share them; I may add them to the existing list so that applicants are more likely to distill them.
Finally, since the contest is open to all students, please feel free to share our contest information with university students you know! Here is a link to our current advertising material for other organizers to distribute if they'd like.