Crossposted on Substack.

Carolina has since moved on from ML4Good, but the team agreed to publish this interview as it remains up to date.

Gergő Gaspar: Thanks again for agreeing to chat. The first thing I wanted to ask is if you could give a quick summary of what ML4Good is and what it does.

Carolina Oliveira: Sure! ML4Good is an organization that runs bootcamps to upskill people in AI safety. We started with technical bootcamps and are now also offering governance bootcamps.

It began in 2022, when the founder was experimenting to find the most effective way to get people engaged in AI safety long-term. They tried hackathons, online courses, talks, and a bootcamp. The bootcamp proved to be the most cost-effective way to get people to commit to AI safety full-time, so that’s what we focus on now.

Target Audience

Gergő: Can you talk about the target audience? Who did it focus on historically, and who is it focused on these days?

Carolina: When it started—I wasn’t there at the very beginning, but this is based on conversations and a bit of guesswork—it mainly focused on students at the university where the founder was based, probably master’s and PhD students. So, originally, it was mostly aimed at people in academia.

Gergő: Were these people completely new to AI safety?

Carolina: Some were, but others already had an interest and just needed exposure or motivation. Now we focus more on motivation: is this person actually driven to work in AI safety? If so, we provide context, information, and upskilling. In the past year or so, we’ve tried to involve not just students but also industry professionals—people with work experience, not just academics.

Gergő: Can you talk about the strategic reasons for that—like timeline concerns or skill gaps?

Carolina: Sure. AI timelines are definitely getting shorter. Our board is eager to make things happen quickly and get people ready to contribute right away, rather than having them go through multiple fellowships before getting started. Recruiting industry professionals also makes placements easier. If we want people to take on various positions, it's harder if everyone just has an academic background—especially if it’s not specifically an academic role we want to fill. So now, we try to get people from all backgrounds, not just academia.

Application Process

Gergő: I'd love to hear more about the current application process at ML4Good, including the application steps, how many applicants you get in an average round, and how many make it through each stage.

Carolina: Currently, we have a two-step selection process. In the past, it was just one step—a written application—but it has become harder to assess applications, especially with the rise of large language models. So now we have an initial application form and an interview stage. No AI is used in selection; all applications are reviewed by humans.

We run regional bootcamps, and typically we get about 130–150 applications per round, depending on the region. Some regions have more professionals, others more students—the mix can affect numbers, as students sometimes can’t take time off.

The interview itself is brief—about 15 minutes, with three or four questions—and is designed to catch what the written application may not, especially genuine motivation, which is easy to fake with an LLM.

In the interview, we're most interested in candidates’ motivations, future plans, and intentions. Technical background is relevant, but plans and intentions are even more important.

Gergő: So you get, on average, 130–150 applications, go through one or two selection steps, and then…

Carolina: We select about 20 participants per bootcamp. For the interviews, we usually invite around 40 people from the application pool, then select the final 20 from those interviews.

Gergő: So, let's say someone is accepted to ML4Good. What’s it like for them? What’s the onboarding process?

Carolina: Since we're focused on bootcamps, there’s not a lot of follow-up besides occasional meetups at conferences or sending out updates. But right after being accepted, you receive a participant guide with all the general info—schedule, logistics, etc.—as well as some pre-course training you’ll need to complete. In the application, participants agree to spend 10–20 hours before the bootcamp on these preparatory tasks. We also have them commit to completing post-bootcamp surveys, which is important for impact measurement.

Gergő: Do they sign something like a DocuSign for this?

Carolina: Not formally—if we did, it'd just mean more work for us! After that, we onboard participants to Slack, where there's an announcement channel and a general chat to start conversations and connect. We also collect logistics info—things like whether they’re early birds or night owls, allergies, travel support needs (with a cap of €180 per participant), etc. Then we coordinate arrival logistics and meet at the bootcamp.

Technical vs Governance Tracks

Gergő: For the technical bootcamps, how do they compare with Arena—for people who know that program?

Carolina: Arena is extremely technical; we're more focused on general AI safety. ML4Good bootcamps used to be more technical, and our technical tracks still are: there’s a lot of coding and implementation. However, we’ve broadened the curriculum so participants get exposure to governance and other non-coding areas too. There are definitely overlaps—some parts of our curriculum are inspired by Arena and even directly adapted from it—but we've added content that helps participants make informed choices, not just technical upskilling.

Gergő: Right, so, they're not quite siblings, but more like second cousins.

Carolina: Yeah, exactly.

Gergő: Cool. Could you tell me a bit about the ML4Good governance version?

Carolina: We’re currently in the process of developing the curriculum, so I can’t say exactly what the day-to-day in the governance bootcamp will look like yet—which is a bit frustrating. What I can say is that unlike the technical track, we aren’t solely looking for technical profiles, though some background helps. The idea is to make this relevant for a wide variety of people—for example, “super-connectors,” people working at think tanks, or those with some governmental or governance experience, especially in AI governance.

Gergő: Thanks. So, it's not government in a traditional sense, but specifically AI governance, right?

Carolina: Exactly. It’s not limited to government work in the traditional sense—the focus is AI governance specifically.

Gergő: Great, thanks for clarifying.

Carolina: Of course. Another big thing is that around 40% of the governance curriculum is based on our existing technical curriculum, so there’s still a strong foundation. Participants will get an overview of different agendas in the field, some history like the EU AI Act, and a look at how governance is being handled globally. We don’t claim to have all the answers, but there will be workshops on effective communication—whether that's with lawmakers, the general public, or others. I'm really excited about the diversity of backgrounds we’ll get and the possible outcomes, both for governance and for broader communication around it.

Gergő: Very cool. So, when is the first governance bootcamp scheduled?

Carolina: It will happen in July in France—the first one ever. The original curriculum creator, Charbel, will be there as an instructor, so it’s shaping up to be as exciting as our technical bootcamps.

Challenges, Costs, and Team Structure

Gergő: I'm curious—what are the bottlenecks or challenges for ML4Good? Are there any hardships you can share?

Carolina: Yes, definitely! One challenge—which I’ve mentioned to you before, since you work in marketing—is reaching people outside our existing bubble. We need to connect with new audiences who are still highly motivated to do good and interested in this work, which is tricky. It’s a message we haven’t quite figured out how to craft, and even finding those audiences is hard. If we knew where to look, we’d already be there! So, marketing is a huge pain point, especially with the growing number of bootcamps.

Another challenge is that sometimes multiple bootcamps happen at once, which can be logistically very difficult. On a personal level (not just institutionally), I’m not the most organized person, and that sometimes causes mistakes and inefficiencies. I haven’t found the right system or support structure to fix this, and it spills over into the organization as well.

Gergő: Right, right, thanks for sharing. And what about funding—who supports ML4Good? How long does that funding last? If you can share, what’s the approximate yearly or per-bootcamp budget?

Carolina: The average budget per bootcamp is about €30,000. That’s just an average, because it depends on where the bootcamp is held. For example, in the UK, renting space is very expensive, but in Brazil or Colombia, costs are much lower.

Gergő: So, what’s the cost range? What’s the main cost driver? Is it the venue?

Carolina: The biggest cost is actually people—teaching assistant (TA) and teacher salaries, even a chef to cook vegan meals. So staff are the main expense, with the venue the second-largest.

Gergő: But venue costs can vary more depending on location, right? And staff costs as well?

Carolina: Yes, staff costs also vary if, for example, we have fewer TAs. Some bootcamps have four TAs, some have two—it depends, and that affects the budget.

Gergő: Is two TAs enough?

Carolina: Two experienced TAs can be enough, but two new TAs would struggle, especially for 20 people. It really depends on the experience level and also on the teacher’s experience.

Gergő: Got it. And you always have at least one teacher per bootcamp?

Carolina: Yes, always at least one. Sometimes we have co-teachers, especially if we’re training someone to be a future primary teacher. That’s part of our pipeline for sustainability, especially as we expand to regions like Southeast Asia, North America, and Europe. We have to think about how to make it all sustainable. So while €30,000 is the average, we've sometimes run bootcamps for as little as €22,000 for 20 participants over eight days. Of course, doing one in Canada (which we’re planning) will likely cost more, but we’ll do our best to keep costs in check.

Gergő: What’s the high end, historically?

Carolina: We had one bootcamp planned for 30 participants (though only 26 ended up coming) that cost about €40,000.

Gergő: You needed to pay that French chef!

Carolina: Exactly! I wasn’t there, but I heard the chef was amazing. Costs also go up with larger groups since you need a bigger venue and more TAs.

Gergő: And the yearly budget?

Carolina: I’m not sure if I can disclose that, but to give you an idea, we’re running 12 bootcamps this year.

Gergő: So we can’t add it all up?

Carolina: (laughs) I know what it is, but I’m not sure I can share it publicly.

Gergő: Of course, no problem. How many full-time equivalents (FTEs) are on the team (not counting contractors like TAs/teachers)?

Carolina: Everyone except Nia and me is a contractor. Only the two of us are full-time staff.

Gergő: So that’s two FTEs.

Carolina: Yes, and we try to use the funding efficiently.

Gergő: That’s all my questions. Anything else you’d like to share, or things I should have asked but didn’t? Only if you want to!

Carolina: Well, as far as funding goes, we try to be very funding-efficient. We don’t sit on money—it gets used quickly—but we also work hard to maintain comfort and quality, and to see how much more we can do with the same resources. I have no complaints about the availability of funds to fulfill our mission. When I joined, the organization had just transitioned to running full-time. Before that, it was managed one bootcamp at a time and likely cost more per bootcamp because the workflows weren’t yet set up.

Gergő: That’s great—thank you so much!

Carolina: Of course!

