
TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to international applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA.

Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley, and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) this spring.

The two-week curriculum assumes no prior experience with machine learning, although familiarity with Python and a grasp of basic linear algebra are essential.

The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to state-of-the-art ML tools and techniques, including interpretability and reinforcement learning. You’ll be guided through the steps of building various deep learning models, from ResNets to transformers, and will come away well-versed in PyTorch and useful complementary frameworks.
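To give a flavour of what “nuts-and-bolts” means here: rather than calling high-level library functions, exercises typically have you implement building blocks from first principles. The sketch below (not taken from the actual curriculum, and written in plain Python rather than PyTorch for self-containedness) implements a linear layer and a ReLU activation by hand, the kind of component you would later rebuild with tensors.

```python
import random

# A single linear layer followed by ReLU, written from scratch in plain
# Python. The real exercises use PyTorch tensors, but the "build it
# yourself" spirit is the same.
def linear(x, weights, bias):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

random.seed(0)
x = [1.0, -2.0, 3.0]
W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b = [0.0, 0.0]
out = relu(linear(x, W, b))
print(out)  # two non-negative activations
```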

From Richard Ren, an undergraduate at UPenn who participated in the January camp:

The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects.

In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections. 

Richard continues:

The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking.

Feedback from the end of the previous camp showed that:

  • On average, participants rated their likelihood of recommending the bootcamp to a friend or colleague at 93%.
  • Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.
  • 94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.

In addition, first and second place in Apart Research’s January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp. 

Chris Mathwin, who was part of the runner-up project, writes of the bootcamp:

A really formative experience! Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards!

If you’re interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below.

Schedule & logistics

The camp is free to attend and will take place in person in Cambridge, UK, from 26 March to 8 April 2023. Accommodation is provided for participants and TAs for the duration of the camp. Travel reimbursements are available up to a limit (with more available for those travelling from outside the UK/Europe).

Weekdays will be spent programming, with a weekend break in the middle during which there will be optional organised social activities.

The following curriculum is likely to undergo some changes and restructuring, but should give you an idea of what the camp will entail:

WEEK 1 (27 - 31 March)

  • Day 1 - practise PyTorch by building a simple raytracer
  • Day 2 - as_strided, convolutions and CNNs
  • Day 3 - build your own ResNet
  • Day 4 - build your own backpropagation framework
  • Day 5 - model training, optimisers and hyperparameter search

WEEKEND (1 - 2 April)

  • Day 6 & 7 - break

WEEK 2 (3 - 7 April)

  • Day 8 - build your own GPT
  • Day 9 - transformer mechanistic interpretability day 1
  • Day 10 - transformer mechanistic interpretability day 2
  • Day 11 - RL day 1
  • Day 12 - RL day 2
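As an illustration of the Day 4 exercise (“build your own backpropagation framework”), a minimal scalar autograd engine can be sketched in a few dozen lines. This is an illustrative toy of my own, not the bootcamp’s actual material: each `Value` records its parents and the local derivatives of the operation that produced it, and `backward` applies the chain rule recursively.

```python
# A minimal scalar autograd sketch, in the spirit of the Day 4 exercise.
# Each Value remembers which Values produced it and the local derivative
# of the producing operation with respect to each parent.
class Value:
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._local_grads = local_grads

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, grad=1.0):
        # Chain rule: accumulate upstream grad times the local derivative.
        # (A real framework would traverse in topological order; this
        # simple recursion suffices for small expression trees.)
        self.grad += grad
        for parent, local in zip(self._parents, self._local_grads):
            parent.backward(grad * local)

x = Value(3.0)
y = Value(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```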


All successful applicants are required to complete the prerequisites ahead of the camp. This is essential for success at the camp. We expect these to take 12-24 hours, depending on your prior experience with coding and linear algebra, and recommend you set aside at least three full days to work through the prerequisites before the start of the camp.

We ask that you only apply if you expect to be able to complete the prerequisites in your own time ahead of the camp, should you be accepted. There will be online support with the prerequisites for successful applicants ahead of the camp.

How to apply

The bootcamp is open to applicants from around the world, and from all career stages; last time, we had participants ranging from first-year undergraduates to experienced software engineers, all of whom found the camp valuable.

Evidence also suggests that less privileged individuals tend to underestimate their abilities. We are committed to fostering a culture of inclusion, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and other underrepresented groups.

The deadline for all applications (both participants and TAs) is Sunday 26 February, 23:59 GMT. We will endeavour to release decisions no later than 3 March, so that you have sufficient time to prepare.


Apply here

The ideal applicants we are looking for will be:

  • Confident coders (preferably in Python)
  • Comfortable with first-year undergraduate maths (especially linear algebra)
  • Willing to spend 12-24 hours working through the prerequisite content ahead of the camp

Learning machine learning can be useful to people in a variety of ways, but CaMLAB might be especially suitable for you if:

  • You’re keen to work on AI alignment, but are bottlenecked by a lack of ML experience
  • You’re keen to work on AI alignment, but are unsure how best you can contribute, and want to test whether ML engineering is for you
  • You’re interested in (but not committed to) working on AI alignment, and would like to test your fit for ML engineering
  • You’re interested in AI governance and want to benefit from a deeper understanding of the inner workings of ML

In general, we’re excited to support people who are keen to use machine learning to improve the world, especially in the realm of existential AI safety. If you have reasons other than the above for wanting to skill up in machine learning, we’d love to hear them too! You can describe all this and more in the application form.

We plan to accept 20-25 participants. Last time, we had 125 applicants for 20 places, and expect the upcoming round to be at least as competitive.

Teaching assistants

Apply here

We’re also looking for paid teaching assistants (TAs) with strong ML programming abilities to assist participants working through the curriculum.

We're especially excited about applicants who have previously completed MLAB, WMLB or the Cambridge Winter ML Camp. In any case, having worked through the curriculum yourself before TAing it is essential, and we expect those who haven’t done so previously to be able to dedicate time ahead of the camp to work through the content themselves.

TAs do not need to work the full duration of the camp, although we have a preference for those who can work at least one of the two weeks.

Some responsibilities of TAs include:

  • Being available in the office during the day to answer questions from participants (ranging from debugging code to explaining concepts)
  • Giving brief lectures in the morning teaching key ideas and tips for the day
  • Answering virtual questions on prerequisite content for 1-2 weeks before the camp (low time commitment)
  • Helping to set up GPU servers

Being a TA is also a rewarding experience, and can help you consolidate the content in your own mind and gain confidence in teaching.

Rudolf Laine, who was one of the TAs (previously referred to as mentors) at the January bootcamp in Cambridge, writes:

I was a mentor for the January 2023 cohort and had a great time. In addition to being able to effectively help many people and hopefully increase the future rate of AI alignment progress, it was an excellent way to revise content I had learned from the original MLAB, and meet and spend time with really cool people. Also if you're despairing at the thought of endlessly debugging other people's code, rest assured that debugging as a mentor is actually surprisingly fun and rewarding (especially as you have other mentors to help you), and the job also includes a lot of explaining conceptual stuff.

Contact hannah@cambridgeaisafety.org for more details. Feel free to ask questions in the comments.




