
TL;DR: Are there any research tasks/projects in AI or biosecurity that could potentially benefit from having a large group of lightly supervised (but preferably paid) interns work on the problem? (E.g., collecting and codifying data for a dataset, or compiling and tagging literature on a topic.)

--------------------

 

Some of my previous unpaid internships might be negatively described as “intern mills”: 

  • At the START Consortium, there were some 60-70 interns working on various teams to collect and codify data for terrorism research, including the widely-cited Global Terrorism Database (GTD). The ratio of paid supervisors to unpaid interns was perhaps around 1:6.
  • At the Think Tanks and Civil Societies Project (TTCSP), there were some 100 interns working on various teams to collect organizational data (e.g., budgets, founding dates, contact information) and scholarly literature on think tanks. There was only one (partially) paid director, but the interns were organized into a hierarchy of “executive” interns, team leads (one of which I was), and regular interns. None of the interns were paid.

Some people may look at this and consider the degree of unpaid labor appalling. Personally, I found the overall programs somewhat revelatory: you can point and shoot large numbers of interns at some problems and come out with useful outputs like the GTD, while the interns get to work on topic areas they might be interested in and develop skills/experience that may be useful for future job applications. TTCSP was really the standout example here: a single person could lead an organization of roughly 100 unpaid interns in producing various reports, databases, and literature compilations! (Full disclosure: the quality of the work predictably suffered from such dramatic overextension, but I feel fairly confident it was better than nothing, and with more funding it definitely could have improved.)

However, I didn’t want to work on terrorism or on think tanks generally: I’ve sought work on AI or biosecurity for the past three years, but none of the six internships I managed to get related to those topics, and only one focused heavily on technology more broadly. Five of the six internships were unpaid.

Given my past experiences, I’ve long wondered: “Why can’t some EA organization just run some kind of internship project-family on various cause area topics, including AI or biosecurity? Surely there has to be some task or project out there that a large group of lightly supervised, unpaid (or minimally paid) interns could help with, whether it’s some kind of dataset to establish base rates, creating literature/argument/epistemic maps, tracking science funding or government projects related to AI/biosecurity, etc.”

This all leads me to the question stated up front: Are there any research tasks/projects in AI or biosecurity that could potentially benefit from having a large group of lightly supervised interns work on the problem? (E.g., collecting and codifying data for a dataset, or compiling and tagging literature on a topic.)

 

Note: In this question post I’m not trying to defend the merits of such a proposal; I’m mainly just trying to solicit topic ideas from people (and preview the idea), which I will then incorporate into a normal post that proposes/discusses the idea. That being said, I would love to hear any initial feedback people have on the idea, including any kinds of objections you think I should address in a larger post (if I were to proceed with this)!

--------------------

Answers

I'm running Redwood Research's interpretability research.

I've considered running an "interpretability mine": we get 50 interns, put them through a three-week training course on transformers and our interpretability tools, and then put them to work on building mechanistic explanations of parts of some model like GPT-2 for the rest of their internship.

My usual joke is: "GPT-2 has 12 attention heads per layer and 48 layers. If we had 50 interns and gave them each a different attention head every day, we'd have an intern-day of analysis of each attention head in about 12 days."
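A quick sanity check of that arithmetic, taking the head and layer counts as quoted in the joke (note that different GPT-2 variants have different counts):

```python
import math

# Counts as quoted in the joke above; GPT-2 variants differ.
heads_per_layer = 12
layers = 48
interns = 50

total_heads = heads_per_layer * layers   # 576 attention heads
# One head per intern per day -> days needed to cover every head once.
days_needed = math.ceil(total_heads / interns)
print(total_heads, days_needed)          # -> 576 12
```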

This is bottlenecked on various things:

  • having a good operationalization of what it means to interpret an attention head, and having some way to assess the quality of the explanations produced by the interns. This could also be phrased as "having more of a paradigm for interpretability work".
  • having organizational structures that would make this work
  • building various interpretability tools so that it's relatively easy to do this work if you're a smart CS/math undergrad who has done our three-week course

I think there's a 30% chance that in July we'll wish we had 50 interns to do something like this. Unfortunately, this is too low a probability for it to make sense for us to organize the internship.

Now that it's after July, did you ever end up wishing you had 50 interns to do something like this?

Buck: I am glad we did not have 50 interns in July. But I’m 75% confident that we’ll run a giant event like this with at least 25 participants by the end of January. I’ll publish something about this in maybe a month.
Peter Wildeford: Cool!

One potential idea I've had (which I'll admit isn't strictly related to AI or biosecurity, but does seem like it could scale heavily with larger numbers of researchers) is to have interns flesh out a visualized reasoning model (perhaps similar to an "epistemic map") regarding existential risk/recovery scenarios, inspired by the work that Luisa Rodriguez did on this topic (see her post here on the forum and her appearance on the 80K podcast).

Such a project could:

  • Lay out different scenarios with their input variables (e.g., "only Q people survive and they are distributed across L locations"; "ash/dust particles cover X% of the world for Y years and reduce light/photosynthesis by Z%").
  • Diagram how different variables or world states may importantly interact with each other.
  • Conduct and incorporate research on details like "what is the population size-viability likelihood curve (given concerns about genetic diversity in conditions which may feature higher rates of infant/maternal mortality)?"
  • Theorize about various dynamics (e.g., how serious security dilemmas would be for groups with pre-apocalyptic firearms but without pre-apocalyptic law enforcement and military structures).

In addition to creating such maps (which ideally could be published and then explored at a user's direction), the interns could produce reports on noteworthy "extinction vignettes" or "extinction variable-combinations" (e.g., key conditions that seem likely to cause slow extinction/non-recovery in scenarios that do not feature near-immediate widespread extinction) as well as key uncertainties (e.g., in climate models).
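To gesture at how this could be made machine-readable, here is a minimal sketch in Python; every variable name, value, and grid below is a hypothetical placeholder for illustration, not a researched estimate:

```python
from dataclasses import dataclass
from itertools import product

# Minimal sketch of a machine-readable scenario map; all names and values
# below are hypothetical illustrations, not researched figures.
@dataclass
class Scenario:
    survivors: int             # Q: number of initial survivors
    locations: int             # L: number of separate surviving populations
    sunlight_reduction: float  # Z: fraction of light/photosynthesis lost

# Hypothetical value grids that intern teams could research and refine.
survivor_counts = [10_000, 100_000, 1_000_000]
location_counts = [1, 10, 100]
sunlight_reductions = [0.1, 0.5, 0.9]

# Enumerate every combination so each one can be assigned out for review
# and annotated with interactions, vignettes, and key uncertainties.
scenarios = [
    Scenario(q, l, z)
    for q, l, z in product(survivor_counts, location_counts, sunlight_reductions)
]
print(len(scenarios), "scenario combinations to annotate")  # -> 27
```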

I'm curious whether other people find this idea potentially interesting or valuable.

(I briefly searched for more on this a few months ago but didn't see anything similar, and Luisa Rodriguez didn't mention any such projects when I emailed her, but if something like this already exists, please let me know!)

Idea: you could take a long list of project ideas and have interns prioritise them. If you listed out 200-300 bio, AI, or EA meta projects and had 3 interns each write separate one-day review pieces on every project, the work could be done with minimal oversight, listing the ideas could be quick, and in theory it could create a useful resource. (At the high end, that's 300 projects × 3 reviews × 1 day = 900 intern-days of work.)

Of course, I'm not sure how well it would match an expert take on the topic, and there are lots of challenges and potential problems with unpaid intern labour.

If someone wants to organise this and has an intern army, I would be happy to discuss/help.

I'll look into that, thanks!
