
This summer, I’m supervising some research fellows through Cambridge’s ERA AI Fellowship. The program started last week, and I’ve had conversations with about six fellows about their research projects and summer goals.

In this post, I’ll highlight a few pieces of advice I’ve found myself regularly giving to research fellows. This post reflects my own opinions and does not necessarily reflect the views of others at ERA.

Prioritize projects that have a clear target audience

Problem: One of the most common reasons why research products fail to add value is that they do not have a target audience. It can be easy to find a topic that is interesting/important, spend several months working on it, produce a 20-50 page paper, and then realize that no particular stakeholders find the work action-relevant.

Advice: Try to brainstorm which specific individuals you would want your piece to affect. This might be some folks in the AI safety community. This might be government officials at a relevant agency in the US or the UK. Prioritize projects that have a clear target audience, and prioritize projects in which you have a way of actually getting your paper/product to that target audience. Ideally, see if you can talk to representative members of your target audience in advance to check whether you have a good understanding of what they might find useful.

Caveat #1: Gaining expertise can be a valid reason to do research. Sometimes, the most important target audience is yourself. It may be worthwhile to take on a research project because you want to develop your expertise in a certain area. Even if the end product is not action-relevant for anyone, you might have reason to believe that your expertise will be valuable in the present or future. 

Caveat #2: Consider target audiences in the future. Some pieces do not have a target audience in the present, but they could be important in the future. This is particularly relevant when considering Overton Window shifts. It’s quite plausible to me that we get at least one more major Overton Window shift in which governments become much more concerned about AI risks. There may even be critical periods (lasting only a few weeks or a few months) in which policymakers are trying to understand what to do. You probably won’t have time to come up with a good plan in those weeks or months. Therefore, it seems like it could be valuable to do the kind of research now that helps us prepare for such future scenarios. 

Be specific about your end products

Problem: A lot of junior researchers find tons of ideas exciting. You might have a junior researcher who is interested in a topic like “compute governance”, “evals”, or “open-sourcing.” That’s a good start. But if the research proposal is to “come up with gaps in the evals space” or “figure out what to do about open-source risks”, there’s a potential to spend several months thinking about high-level ideas and not actually producing anything concrete/specific. It’s common for junior researchers to overestimate the feasibility of tackling big/broad research questions.

Advice: Try to be more specific about what you want your final products to look like. If it’s important for you to have a finished research product (either because it would be directly useful or because of the educational/professional benefits of having the experience of completing a project), make sure you prioritize finishing something. 

If you’re interested in lots of different projects, prioritize. For example, “I want to spend time on X, Y, and Z. X is the most important end product. I’ll try to focus on finishing X, and I’ll try not to spend much time on Y until X is finished or on track to be finished.”

Caveat #1: You don’t need to aim for a legible end product. Sometimes, it’s very valuable to spend several months examining your high-level thoughts in an area. Deconfusing yourself about a topic (what’s really going on with evals? Are they actually going to help?) can be an important output.

Caveat #2: Priorities can change as you learn more about the topic. If you start on X, and then you realize it’s not actually as valuable as you thought, you should be willing to pivot to Y. The point is to make this an intentional choice. If you intentionally decide to deprioritize X, that’s great! If you blindly pursue lots of stuff on X and Y and then realize a few months later that you haven’t actually finished X (even though you wanted to), that’s less great.

Caveat #3: Follow your curiosity & do things that energize you. Suppose I think X is important and I want to finish X before starting Y. But one day I wake up and I’m just feeling really fired up to learn more about Y, and I want to put X aside. One strategy is to be like “no! I must work on X! I have made a commitment!” Another strategy is to be like “OK, like, even though Omega would say it’s higher EV to work on X in a world where I were a robot with no preferences, I actually just want to follow my curiosity/energy today and work on Y.” Again, the point is to just be intentional about this. 

Take advantage of your network (and others’ networks)

Problem: A lot of people who are attracted to research are introverts who love reading/thinking/writing. Those are essential parts of the process. But I think some of the highest-EV moments often come from talking to people: having your whole theory of change challenged, realizing that other people are working on similar topics, building relationships with people who can help your work (either on the object level or by connecting you to relevant stakeholders), and so on. A classic failure mode is for someone to spend several months working on something, only to have someone else point out a crucial consideration that could’ve significantly shaped the project earlier on.

Advice: Early on, brainstorm a list of experts you might want to talk to. Having short outlines to share with people can be helpful here. When I start a new project, I often try to write up a 1-2 page outline that describes my motivation and current thinking on a topic. Then, I share it with ~10 people in my network who I think would offer good object-level feedback or connect me to others who could. I also suggest being explicit about the kind of feedback you’re looking for (e.g., high-level opinions about whether the research direction is valuable, feedback on a specific argument, feedback on the writing style/quality, etc.).

If you don’t yet have a super strong network, that’s fine! If you’re in a structured research program, take advantage of the research managers and research mentors. If not, you can still probably message people like me. This can be scary, but in general, I think junior researchers err too much on the side of “not reaching out enough to the Fancy Smancy Scary People.” 

Caveat #1: But what if my doc is actually really bad and not ready to be sent to the Fancy Smancy Scary Person Who Will Judge Me For Being Dumb or Wasting Their Time? Yeah, that’s fair. I do empathize with the fact that this can be hard to assess, especially early on. My biggest piece of advice here is to start with Less Scary people, see what they think, and see if they recommend any of the Super Senior people.

Note also that scariness isn’t just a function of seniority: there are plenty of Super Nice Senior People and also (being honest here) some Scary/Judgey/Harsh non-senior people. Again, though, I think junior people tend to err on the side of not reaching out, and I suggest reaching out to research managers if you have an idea and you’re wondering who to share it with.

Miscellaneous

  • Shallow reviews can help you learn/prioritize. If you’re not sure what you want to focus on, consider spending the first ~2 weeks doing shallow reviews of multiple topics, identifying your favorite topic, and then spending the remaining ~6 weeks diving deeper into that topic.
  • “One of the most important products of your research is your sustained engagement on the topic. Do not think about summer projects– think about programs of research you could see yourself spending years on.” A quote a senior researcher recently shared with me that I found useful. 
  • You don’t need to produce a paper. I think the “default” assumption for a lot of people is that they need to produce a 20+ page paper that could go on arXiv, or a long EAF/LW post. Consider shorter materials. Examples include policy memos, tools that government stakeholders could use, draft legislative text, and short explainers of important topics.
  • Remember that policymakers are unlikely to just “stumble upon” your work. In some cases, a research output is so strong or so widely shared that people might stumble upon it “in the wild.” For the most part, I think you should assume that people won’t notice your work– you have to figure out how to get it to them. Examples include “directly emailing relevant people” or “going through someone who has an existing relationship with X person.” I recently heard a little slogan along the lines of “doing the research is step one; getting people to pay attention to it is step two. Don’t skip step two.” 
Comments



I agree with all this advice. I also want to emphasize that I think researchers ought to spend more time talking to people relevant to their work.

Once you’ve identified your target audience, spend a bunch of time talking to them at the beginning, middle, and end of the project. At the beginning, learn and take into account their constraints; in the middle, refine your ideas; and at the end, actually try to get your research into action.

I think it’s not crazy to spend half of your time on the research project talking.

[anonymous]

Thank you for writing this up, Akash! I am currently exploring my aptitude as an AI governance researcher and consider the advice provided here to be valuable. I have especially come to appreciate the point about bouncing ideas off people early on, and throughout the research process.

For anyone who is in a similar position, I can also highly recommend checking out this and this post.

For any other (junior or senior) researchers interested in expanding their pool of people to reach out to for feedback on their research projects, or simply to connect, feel free to reach out on LinkedIn or schedule a call via Calendly! I look forward to chatting.

Executive summary: Junior AI governance researchers should prioritize projects with clear target audiences, be specific about end products, and leverage their networks for feedback and connections.

Key points:

  1. Choose research projects with identifiable target audiences and ways to reach them
  2. Define concrete end products rather than broad topics to ensure completion
  3. Engage with experts and peers early for feedback and to uncover crucial considerations
  4. Consider producing shorter, more targeted outputs like policy memos or tools instead of long papers
  5. Plan for how to actively disseminate research to relevant stakeholders
  6. Balance following structured plans with pursuing energizing ideas and building long-term expertise

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
