
I am approaching the end of my AI governance PhD, and I’ve spent about 2.5 years as a researcher at FHI. During that time, I’ve learnt a lot about the formula for successful early-career research.

This post summarises my advice for people in the first couple of years. Research is really hard, and I want people to avoid the mistakes I’ve made.

My argument: At the early stage of your research career, you should think of yourself as an entrepreneur trying to build products (i.e. research outputs) that customers (i.e. the broader community) want to consume.

You might be thinking: but I thought I was trying to maximise my impact? Sure, but at this stage in your career, you don’t know what’s impactful. You should be epistemically humble and responsive to feedback from people you respect. You should be opportunistic, willing to pivot quickly.

I am calling this the “lean startup” approach to research. By now, everyone knows that most startup ideas are bad, and that founders should embrace this: testing minimal versions of the product, getting feedback from customers, iterating, and making dramatic changes where necessary. When you’re starting out in research, it’s the same.

Early-stage researchers have two big problems. Number one, all your project ideas are bad. Number two, once you’ve written something, nobody is going to read it. It’s like an app that nobody downloads. It is possible to avoid these pitfalls, but that requires active effort. I will list many strategies that I’ve found helpful. At the end of the post, I’ll give a few examples from my own career.

A lot of this advice is stolen from people who have helped me over the years. I encourage you to try it out.

EDIT: I am most familiar with AI governance. I'm not sure how well my views generalise to other fields. (Thanks to the commenters who raised this issue.)

Problem 1: your project ideas are bad

In the early stage of your research career, 80-100% of your project ideas are bad. You’ll feel like your favourite project idea is great, but then years later, you’ll ask yourself: “what was I thinking?”

Executing an idea requires a large time investment. You don’t want to waste that time on a bad idea.

By “project idea” I mean not just a topic, but some initial framing of the problem, and some promising ideas for what kind of arguments or results you might produce. So, how do you find a good one of those?

Solutions:

  1. Ideally, someone senior tells you what to work on. But this is time-expensive for them, and they don’t want to give away their best ideas to somebody who might execute them badly. So more realistically…
  2. Write out at least 10 project ideas, and ask somebody more senior to rank the best few. Always keep this list and add to it over time. This is a tried-and-tested method and it works very well. If you push just a single project idea, you might arouse some minor, polite interest from other people, but that is a much less meaningful feedback process.
  3. Notice when people are genuinely interested. Sometimes you will get a cue that a person is actually interested in a puzzle or argument that you’ve formulated. You notice that they’ve been nerd sniped. That’s a very valuable feedback signal. It is also a reason to recentre the project around the exact question that nerd sniped them. Because you don’t yet have a well-developed sense of what issues are most interesting, you should update heavily on this kind of feedback. (As you get more experienced, you can allow yourself to get nerd sniped by your own ideas.)
  4. Fit into an established paradigm. While at FHI I have gradually absorbed a sense of the implicit worldviews of senior people, and what kinds of problems they think are important and interesting. You can get some of this from reading papers. One good strategy is to extend a line of inquiry that existing research has begun. Read existing literature and think: can I add anything to this? It’s tempting to look for an idea that everyone has completely missed — something paradigm-shifting. Perhaps that would be good in theory, but most likely, you’re too inexperienced and ignorant to actually pull this off.
  5. Ask yourself: why do we care? Often, early-stage researchers will have ideas that are loosely connected to important topics, but not enough for people to actually care about the answer. A mental exercise: imagine the best-case scenario, where your research produces some strong, unexpected conclusion. Is that conclusion going to be relevant to anyone? Should it affect what key actors do, e.g. what regulation should be enacted, or what strategies AI labs should adopt? I’m not saying you need to do applied research, or have a detailed “theory of impact”. Just make sure that people are going to care about the result.
  6. Go where there’s supervision. If a respected, experienced researcher is willing to supervise particular research projects, that’s a very strong reason to do that research.
  7. Strongly consider doing empirical work. Early on, your comparative advantage probably isn’t producing ingenious, theoretical insights. (Unless you are unusually smart, and unusually knowledgeable about the existing literature and unwritten ideas.) Your comparative advantage, relative to established researchers, is that you have: more time on your hands, less other important stuff to be getting on with, and more enthusiasm for doing donkey work. Therefore, one strategy is to find a project that is important and yet requires drudgery. Collecting and analysing data is good, whether that’s quantitative or qualitative. In my case, within my PhD work, I’ve done about 40 interviews with AI researchers. That means people see me as an expert on certain topics.

Problem 2: nobody cares about your work

The biggest challenge is just getting anybody to read your work.

You think you’re solving important problems and people will value the answers. But then you finish writing, you share what you’ve made, and nobody has time to look at it. Even if it’s got a cool title and a good summary (which helps), many people will think to themselves: “Nice, I’m looking forward to reading that”, and then never get around to it. Or they’ll skim read the first two pages and then get distracted.

Solutions:

  1. Work on problems that people actually want the answers to. (See above.)
  2. Find an elegant way of describing your argument / findings. Your elevator pitch shouldn’t be: “I’ve explored this area, and I’ve come up with 11 different thoughts on the topic” (a common mistake). Ideally, it should be: here’s my neat encapsulation of a puzzle, and here’s my clean solution to that puzzle. Your high-level framing of the project will change over time, as you get more and more clarity. Publishing the work as an academic article will help with this, because your reviewers aren’t going to accept some “exploration” and accompanying opinions. They’re going to want something slick.
  3. Go narrow. Early-stage researchers have a bias towards biting off more than they can chew. If you focus on a narrow problem, you might actually be able to offer an expert treatment of that problem. You don’t want to cover a broad space of ideas and be the 10,000th best-qualified person in the world to write about each one.
  4. Focus on the title, abstract, and introduction. These should be snappy, polished, and should grab the reader. The introduction is a great chance to frame your contribution.
  5. Draft and re-draft (and re-draft). The writing should go through many iterations. You make drafts, you share them with a few people, you do something else for a week. Maybe nobody has read the draft, but you come back and you’ve rejuvenated your wonderful capacity to look at the work and know why it’s terrible. Imagine a threshold of quality where, once the piece is good enough, an important and busy person considers it worth their time to read. Anything under that threshold has very little value, but cross it, and suddenly you’re in business. (Note: I’m talking about your main research projects; half-baked ideas might be fine in other contexts.)
  6. Write clearly. Your writing style should help the reader, packaging the ideas for easy consumption. Assume the reader is very easily distracted, and has almost no ability to store information in their working memory. (I’ve followed my own advice in this post, so apologies for the lack of epistemic hedging!)
  7. Publish. The final output of your project should ideally be an academic article, a blog post, a publicly-available report, or something similarly accessible. This has a bunch of benefits. Most obviously, people can actually read your work. They can share it with colleagues, talk about it, cite it, build upon it, etc. Also, your colleagues are more likely to give feedback on a piece that’s actually going somewhere. And finally, publishing the work forces you to actually make it good. Sometimes people use Google Docs for the final product, which doesn’t have these same benefits. Google Docs works well for info hazards, but I’d avoid working on info hazardous topics until you know somebody will actually read what you write. It’s fine when you have established a customer base for your research, or have some other reason to be confident that you’ll get eyeballs on your doc (e.g. the work has been commissioned).
  8. Market your work. Successful academics know how to do this. One strategy is to have multiple places where people can consume your work. There’s a journal article, a pre-print, a blog post, a Twitter thread, a podcast interview, a talk, and a YouTube link to the talk. Relatedly, there should be multiple in-bound links to your paper. This is another benefit of publishing your work in an academic venue. The publisher might do some SEO; readers might stumble across your work accidentally; if your work is respectable and therefore citable, then those citations will provide more in-bound links.
  9. Co-author. If you have a co-author, you have somebody you can bounce ideas off, somebody who can read and give comments, and somebody who can help to market the work. If the co-author has prestige, that will help for getting readers.

The “lean startup” approach in action

I have two examples where the “lean startup” approach has really benefited my work.

Example 1: Early in my PhD, somebody I respect told me that I should narrow my focus, zooming in on one of my topics (publication norms in AI research). Soon after, OpenAI published their GPT-2 blog post. I was following lots of AI researchers on Twitter and there was a big negative reaction. I was supposed to be doing something else that day, but instead I spent all day collecting and cataloguing these tweets. When I was done, I showed some people, and I was surprised at how excited they were. Somebody encouraged me to make the analysis slightly more in-depth, running a few regressions, so I did. They shared the Google Doc with people at FHI and elsewhere, and it got lots of interested commenters. In retrospect, this experience really made me pivot. I started doing a case study on GPT-2 for my PhD research, and (as people remained interested) I extended this work into nearby areas. This has worked really well for me.
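(For concreteness, here is a minimal sketch of the kind of quick, lightweight analysis described above. It is purely illustrative, not the actual analysis from that Google Doc; the file name and column names are invented.)

```python
# Illustrative only: a "minimum viable analysis" of hand-catalogued tweets.
# The CSV and its columns (a coded sentiment score, a flag for whether the
# author is an ML researcher, log follower count) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

tweets = pd.read_csv("gpt2_reaction_tweets.csv")  # hypothetical catalogue

# A simple regression: does sentiment about the staged release vary with
# the author's background and audience size?
model = smf.ols("sentiment ~ is_ml_researcher + log_followers", data=tweets).fit()
print(model.summary())
```

The point is the process, not the statistics: a day of data collection plus a few simple regressions was enough of a minimum viable product to test whether anyone cared.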

Example 2: My most successful paper grew out of a similar process. I had interviewed some AI researchers who were making the following argument: if GPT-2 can be misused by bad actors, then the model should be open sourced, because this actually helps people defend against misuse. For example, people can turn GPT-2 into a classifier for machine-generated text. This argument was bouncing around in my head. At the same time, Peter Cihon and I were looking into the history of responsible disclosure in computer security. I noticed that my AI researcher interviewees were plagiarising their argument from computer security researchers — computer security researchers often say that software vulnerabilities should be widely shared because this encourages software-makers to find a patch. But I felt like there were holes in the analogy: with AI misuse, how easily can the “vulnerability” be patched? I was in a meeting at GovAI and we were going around the table giving updates on what we were working on. I casually mentioned this stuff, and Jeff Ding said in a serious-seeming way: “that’s actually a really interesting and original point.” When I spoke to Allan Dafoe, he was clearly excited and enthusiastic about the idea. We co-authored a conference paper, with me iteratively refining the ideas in response to Allan’s feedback. Somewhere buried in the paper, I used the phrase “offense-defense balance of scientific knowledge” and Allan liked the phrase; so, that became the title. I wrote the paper instead of doing the other stuff I was supposed to be doing. The paper was well-received and was great for my career. Why was the paper idea so good? I could probably give a much better answer now, two years later, than I could at the time. I’m glad that, at the time, I went with the flow.

Counter-example: I am soon going to share a paper called “Structured access to AI capabilities: an emerging paradigm for safe AI deployment”. When I first floated a very underdeveloped, early version of the idea with some colleagues, it got a lukewarm response. I started working on the paper anyway. People got excited when they saw the first draft, and more so when I gave a talk at FHI. I initially took a more Steve Jobs-style, “the customer doesn't know what they want until you show it to them” approach, but then took the subsequent positive feedback as a sign that I should invest further in the idea. This reflects the fact that, now that I’ve got more knowledge and experience, I can have a slightly stronger prior in favour of my ideas being decent.

Conclusion

The lean startup approach to early-career research involves immersing yourself in an intellectual community. This is an uphill struggle, because at the start, you don’t have much to add to that community. You have to find something to offer, using what little feedback you can get. (Although to some extent, you need someone to make a bet on you.) If there’s an existing group of people who care about a particular topic, that’s a great opportunity. Don’t pigeon-hole yourself as somebody who is only going to work on a particular topic. Having mentors, collaborators, and people who will read your work is extremely valuable.

[Meta: I wrote this yesterday when I was supposed to be doing something else. It started as an email to the first-year research scholars at FHI, but while writing the email, I realised it might make a good forum post. A colleague walked into my room and I told him the first two subheadings (“your project ideas are bad” and “nobody cares about your work”); his reaction made me think I should make the post less depressing. When I was writing the conclusion, the phrase “lean startup” came into my head, and I rewrote the title and introduction to fit that framing. Thanks to Allan Dafoe and Markus Anderljung for helpful comments. Thanks to Allan for general mentorship and support.]

Comments

In my view this text should come with multiple caveats.

- Beware the 'typical young researcher fallacy'. Young researchers are very diverse, and while some of them will benefit from the advice, some of them will not. I do not believe there is a general 'formula for successful early-career research'. Different people have different styles of doing research, and even different metrics for what 'successful research' means. While certainly many people would benefit from the advice 'your ideas are bad', some young researchers actually have great ideas, should work on them, and should avoid generally updating on the research taste of most of the "senior researchers".

- Beware 'generalisation out of training distribution' problems. Compared to some other fields, AI governance as studied by Allan Dafoe is relatively well decomposed into a hierarchy of problems, and you can meaningfully scale it by adding junior people and telling them what to do (work on sub-problems senior people consider interesting). This seems more typical for research fields with established paradigms than for fields which are pre-paradigmatic, or fields in need of a change of paradigm.

- A large part of the described formula for success seems to be optimised for getting the attention of senior researchers, writing something well received, or similar. This is highly practical, and likely good for many people in fields like AI governance; at the same time, it seems the best research outputs by early-career researchers in e.g. AI safety do not follow this generative pattern, and seem to be motivated more by curiosity, reasoning from first principles, and ignoring authority opinions.

I'm not going to go into much detail here, but I disagree with all of these caveats. I think this would be a worse post if it included the first and third caveats (less sure about the second).

First caveat: I think > 95% of incoming PhD students in AI at Berkeley have bad ideas (in the way this post uses the phrase). I predict that if you did a survey of people who have finished their PhD in AI at Berkeley, over 80% of them would think their initial ideas were significantly worse than their later ideas. (Note also that AI @ Berkeley is a very selective program.)

Second caveat: I'd say that the post applies to technical AI safety, at the very least, though it's plausible it doesn't generalize further. (That would surprise me though.)

Third caveat: This doesn't seem true to me in AI safety according to my definition of "best", though idk exactly which outputs you're thinking of and why you think they're "best".

I think > 95% of incoming PhD students in AI at Berkeley have bad ideas (in the way this post uses the phrase). [...] (Note also that AI @ Berkeley is a very selective program.)

What % do you think this is true for, quality-weighted? 

I remember an interview with Geoffrey Hinton where (paraphrased) Hinton was basically like "just trust your intuitions man. Either your intuitions are good or they're bad. If they are good you should mostly trust your intuitions regardless of what other people say, and if they're bad, well, you aren't going to be a good researcher anyway."

And I remember finding that logic really suspicious, and his experiences selection-biased like heck (my understanding is that Hinton "got lucky" by calling neural nets early, but his views aren't obviously more principled than those of his close contemporaries).

But to steelman (steel-alien?) his view a little, I worry that EA is overinvested in outside-view/forecasting types (like myself?), rather than people with strong and true convictions/extremely high-quality initial research taste, which (quality-weighted) may be making up the majority of revolutionary progress.

And if we tell the future Geoffrey Hintons (and Eliezer Yudkowskys) of the world to be more deferential and trust their intuitions less relative to elite consensus or the literature, we're doing the world/our movement a disservice, even if the advice is likely to be individually useful/good for most researchers in terms of expected correctness of beliefs or career advancement. 

What % do you think this is true for, quality-weighted? 

Weighted by quality after graduating? Still > 50%, probably > 80%, but it's really just a lot harder to tell (I don't have enough data). I'd guess that the best people still had "bad ideas" when they were starting out.

(I think a lot of what makes a junior researcher's idea "bad" is that the researcher doesn't know about existing work, or has misinterpreted the goal of the field, or lacks intuitions gained from hands-on experience, etc. It is really hard to compensate for a lack of knowledge with good intuition or strong armchair reasoning, and I think junior researchers should make it a priority to learn this sort of stuff.)

Re: the rest of your comment, I think you're reading more into my comment than I said or meant. I do not think researchers should generally be deferential; I think they should have strong beliefs, that may in fact go against expert consensus. I just don't think this is the right attitude while you are junior. Some quotes from my FAQ:

When selecting research projects, when you’re junior you should generally defer to your advisor. As time passes you should have more conviction. I very rarely see a first year’s research intuitions beat a professor’s; I have seen this happen more often for fourth years and above.

[...] 

There’s a longstanding debate about whether one should defer to some aggregation of experts (an “outside view”), or try to understand the arguments and come to your own conclusion (an “inside view”). This debate mostly focuses on which method tends to arrive at correct conclusions. I am not taking a stance on this debate; I think it’s mostly irrelevant to the problem of doing good research. Research is typically meant to advance the frontiers of human knowledge; this is not the same goal as arriving at correct conclusions. If you want to advance human knowledge, you’re going to need a detailed inside view.

[followed by a longer example in which the correct thing to do is to ignore the expert]

Thanks for the link to your FAQ, I'm excited to read it further now!

Re: the rest of your comment, I think you're reading more into my comment than I said or meant. I do not think researchers should generally be deferential; I think they should have strong beliefs, that may in fact go against expert consensus. I just don't think this is the right attitude while you are junior

To be clear, I think Geoffrey Hinton's advice was targeted at very junior people. In context, the interview was conducted for Andrew Ng's online deep learning course, which for many people would be their first exposure to deep learning. I also got the impression that he would stand by this advice for early PhDs (though I could definitely have misunderstood him), and by "future Geoffrey Hintons and Eliezer Yudkowskys" I was thinking about pretty junior people rather than established researchers.

I'm considering three types of advice:

  1. "Always defer to experts"
  2. "Defer to experts for ~3 years, then trust your intuitions"
  3. "Always trust your intuitions"

When you said

But to steelman (steel-alien?) his view a little, I worry that EA is overinvested in outside-view/forecasting types (like myself?), rather than people with strong and true convictions/extremely high-quality initial research taste, which (quality-weighted) may be making up the majority of revolutionary progress.

And if we tell the future Geoffrey Hintons (and Eliezer Yudkowskys) of the world to be more deferential and trust their intuitions less relative to elite consensus or the literature, we're doing the world/our movement a disservice, even if the advice is likely to be individually useful/good for most researchers in terms of expected correctness of beliefs or career advancement. 

I thought you were claiming "maybe 3 > 1", so my response was "don't do 1 or 3, do 2".

If you're instead claiming "maybe 3 > 2", I don't really get the argument. It doesn't seem like advice #2 is obviously worse than advice #3 even for junior Eliezers and Geoffreys. (It's hard to say for those two people: in Eliezer's case, since there were no experts to defer to at the time, and I don't know enough details about Geoffrey to evaluate which advice would be good for him.)


I think Geoffrey Hinton's advice was targeted at very junior people.

Oh, I agree that's probably true. I think he's wrong to give that advice. I'm generally pretty okay with ignoring expert advice to amateurs if you have reason to believe it's bad; experts usually don't remember what it was like to be an amateur and so it's not that surprising that their advice on what amateurs should do is not great. (EDIT: Here's a new post that goes into more detail on this.)

I would guess the 'typical young researcher fallacy' also applies to Hinton - my impression is he is basically advising his past self, similarly to Toby. As a consequence, the advice is likely sensible for people-much-like-past-Hinton, but not good general advice for everyone.

In ~3 years most people are able to re-train their intuitions a lot (which is part of the point!). This seems particularly dangerous in cases where expertise in the thing you are actually interested in does not exist, but expertise in something somewhat close does - instead of following your curiosity, you 'substitute the question' with a different question, for which a PhD program exists, or senior researchers exist, or established directions exist. If your initial taste/questions were better than the expert's, you run a risk of overwriting your taste with something less interesting/impactful.

Anecdotal illustrative story:

Arguably, a large part of what are now the foundations of quantum information theory / quantum computing could have been discovered much sooner, along with taking seriously more sensible interpretations of quantum mechanics than the Copenhagen interpretation. My guess is that what was happening over multiple decades (!) was that many early-career researchers were curious about what was going on, dissatisfied with the answers, interested in thinking about the topic more... but they were given advice along the lines of 'this is not a good topic for PhDs or even undergrads; don't trust your intuition; problems here are a distasteful mix of physics and philosophy; shut up and calculate, that's how real progress happens'... and they followed it; they acquired a taste telling them that solving difficult scattering-amplitude integrals using advanced calculus techniques is tasty, and that thinking about deep things formulated using the tools of high-school algebra is for fools. (Also, if you did run a survey in year 4 of their PhDs, a large fraction of quantum physicists would probably endorse the learned update from thinking about young, foolish questions about QM interpretations to the serious and publishable thinking they had learned.)



 

I agree substituting the question would be bad, and sometimes there aren't any relevant experts in which case you shouldn't defer to people. (Though even then I'd consider doing research in an unrelated area for a couple of years, and then coming back to work on the question of interest.)

I admit I don't really understand how people manage to have a "driving question" overwritten -- I can't really imagine that happening to me and I am confused about how it happens to other people.

(I think sometimes it is justified, e.g. you realize that your question was confused, and the other work you've done has deconfused it, but it does seem like often it's just that they pick up the surrounding culture and just forget about the question they cared about in the first place.)

So I guess this seems like a possible risk. I'd still bet pretty strongly against any particular junior researcher's intuition being better, so I still think this advice is good on net.

(I'm mostly not engaging with the quantum example because it sounds like a very just-so story to me and I don't know enough about the area to evaluate the just-so story.)

(As an aside, I read your FAQ and enjoyed it, so thanks for the share!)

I'm confused about your FAQ's advice here. Some quotes from the longer example:

Let’s say that Alice is an expert in AI alignment, and Bob wants to get into the field, and trusts Alice’s judgment. Bob asks Alice what she thinks is most valuable to work on, and she replies, “probably robustness of neural networks”. [...]  I think Bob should instead spend some time thinking about how a solution to robustness would mean that AI risk has been meaningfully reduced. [...] It’s possible that after all this reflection, Bob concludes that impact regularization is more valuable than robustness. [...] It’s probably not the case that progress in robustness is 50x more valuable than progress in impact regularization, and so Bob should go with [impact regularization].

In the example, Bob "wants to get into the field", so this seems like an example of how junior people shouldn't defer to experts when picking research projects.

(Speculative differences: Maybe you think there's a huge difference between Alice giving a recommendation about an area vs a specific research project? Or maybe you think that working on impact regularization is the best Bob can do if he can't find a senior researcher to supervise him, but if Alice could supervise his work on robustness he should go with robustness? If so, maybe it's worth clarifying that in the FAQ.)

Edit: TBC, I interpret Toby Shevlane as saying ~you should probably work on whatever senior people find interesting; while Jan Kulveit says that "some young researchers actually have great ideas, should work on them, and avoid generally updating on research taste of most of the 'senior researchers'". The quoted FAQ example is consistent with going against Jan's strong claim, but I'm not sure it's consistent with agreeing with Toby's initial advice, and I interpret you as agreeing with that advice when writing e.g. "Defer to experts for ~3 years, then trust your intuitions".

In that example, Alice has ~5 min of time to give feedback to Bob; in Toby's case the senior researchers are (in aggregate) spending at least multiple hours providing feedback (where "Bob spent 15 min talking to Alice and seeing what she got excited about" counts as 15 min of feedback from Alice). That's the major difference.

I guess one way you could interpret Toby's advice is to simply get a project idea from a senior person, and then go work on it yourself without feedback from that senior person -- I would disagree with that particular advice. I think it's important to have iterative / continual feedback from senior people.

Let's start with the third caveat: maybe the real crux is what we think are the best outputs; what I consider some of the best outputs by young researchers in AI alignment is easier to point at via examples - so it's e.g. the mesa-optimizers paper or multiple LW posts by John Wentworth. As far as I can tell, none of these seems to follow the proposed 'formula for successful early-career research'.

My impression is PhD students in AI at Berkeley need to optimise, and actually optimise a lot, for success in an established field (ML/AI), and subsequently the advice should be more applicable to them. I would even say part of what makes a field "established" is something like "a somewhat clear direction in the space of the unknown in which people are trying to push the boundary" and "a shared taste in what is good, according to the direction". (The general direction, or at least the taste, seems to be ~self-perpetuating once the field is "established", sometimes beyond the point of usefulness.)

In contrast to your experience with AI students at Berkeley, in my experience ~20% of ESPR students have generally good ideas even while in high school or their first year of college, and I would often prefer these people to think about ways in which their teachers, professors, or seniors are possibly confused, as opposed to learning that their ideas are generally bad and they should seek someone senior to tell them what to work on. (OK - the actual advice would be more complex and nuanced, something like "update on the idea taste of people who are better or comparable and have spent more time thinking about something, but be sceptical and picky about your selection of people".) (ESPR is also very selective, although differently.)

With hypothetical surveys, the conclusion (young researchers should mostly defer to seniors in idea taste) does not seem to follow from estimates like "over 80% of them would think their initial ideas were significantly worse than their later ideas". The relevant comparison is something like "over 80% of them would think they should have spent marginally more time thinking about the ideas of more senior AI people at Berkeley, and more time on problems they were given by senior people, and a smaller amount of time thinking about their own ideas and working on projects based on their ideas". Would you guess the answer would still be 80%?


 

so it's e.g. the mesa-optimizers paper or multiple LW posts by John Wentworth. As far as I can tell, none of these seems to follow the proposed 'formula for successful early-career research'.

I think the mesa-optimizers paper fits the formula pretty well? My understanding is that the junior authors on that paper interacted a lot with researchers at MIRI (and elsewhere) while writing it.

I don't know John Wentworth's history. I think it's plausible that if I did, I wouldn't have thought of him as a junior researcher (even before seeing his posts). If that isn't true, I agree that's a good counterexample.

My impression is PhD students in AI at Berkeley [...]

I agree the advice is particularly suited to this audience, for the reasons you describe.

the actual advice would be more complex and nuanced, something like "update on the idea taste of people who are better or comparable and have spent more time thinking about something, but be sceptical and picky about your selection of people"

That sounds like the advice in this post? You've added a clause about being picky about the selection of people, which I agree with, but other than that it sounds pretty similar to what Toby is suggesting. If so I'm not sure why a caveat is needed.

Perhaps you think something like "if someone [who is better or who is comparable and has spent more time thinking about something than you] provides feedback, then you should update, but it isn't that important and you don't need to seek it out"?

The relevant comparison is something like "over 80% of them would think they should have spent marginally more time thinking about the ideas of more senior AI people at Berkeley, and more time on problems they were given by senior people, and a smaller amount of time thinking about their own ideas and working on projects based on their ideas". Would you guess the answer would still be 80%?

I agree that's more clearly targeting the right thing, but still not great, for a couple of reasons:

  • The question is getting pretty complicated, which I think makes answers a bit more random.
  • Many students are too deferential throughout their PhD, and might correctly say that they should have explored their own ideas more -- without this implying that the advice in this post is wrong.
  • Lots of people do in fact take an approach that is roughly "do stuff your advisor says, and over time become more independent and opinionated"; idk what they would say.

I do predict though that they mostly won't say things like "my ideas during my first year were good, I would have had more impact had I just followed my instincts and ignored my advisor". (I guess one exception is that if they hated the project their advisor suggested, but slogged through it anyway, then they might say that -- but I feel like that's more about motivation rather than impact.)


Thanks for the caveats, Jan, I think that's helpful.

It's true that my views have been formed from within the field of AI governance, and I am open to the idea that they won't fully generalise to other fields. I have inserted a line in the introduction that clarifies this.

Ideally, someone senior tells you what to work on. But this is time-expensive for them, and they don’t want to give away their best ideas to somebody who might execute them badly. So more realistically…

This seems very surprising to me. Unless by "best ideas" you mean "literally somebody's top idea" or by "someone senior" you mean Nick Bostrom? 

My impression from talking to friends working in ML is that usually faculty have ideas that they'd be excited to see their senior grad students work on, senior grad students have research ideas that they'd love for junior grad students to implement, and so forth.

Math and theoretical CS likewise have lists of open problems.

Similarly, in (non-academic EA) research I have way too many ideas that I can't work on myself, and I've frequently seen buffets of potential research topics/ideas that more senior researchers propose. 

My general impression is that this is the norm in EA research? When people choose not to work on other people's ideas, it's usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like "publishable", "appealing to funders", or "tractable"), not because of a lack of ideas! 

Very surprised to hear about your experiences.

My impression from talking to friends working in ML is that usually faculty have ideas that they'd be excited to see their senior grad students work on, senior grad students have research ideas that they'd love for junior grad students to implement, and so forth.

I think this is true if the senior person can supervise the junior person doing the implementation (which is time-expensive). I have lots of project ideas that I expect I could supervise. I have ~no project ideas where I expect I could spend an hour talking to someone, have them go off for a few months and implement it, and then I'd be interested in their results. Something will come up along the way that requires replanning, and if I'm not around to tell them how to replan, they're going to do it in a way that makes me much less excited about the results.

Thank you for the post, I found it interesting! [Minor point in response to Linch's comment.]

I generally agree with Linch's surprise, but

When people choose not to work on other people's ideas, it's usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like "publishable", "appealing to funders", or "tractable"), not because of a lack of ideas! 

I (weakly) think that another factor here is that people are trained (e.g. in their undergraduate years) to come up with original ideas and work on those, whether or not they are actually useful. This gets people into the habit of over-valuing a form of topic originality. (I.e. it's not just personal fit, arrogance, and external incentives, although those all seem like important factors.)

This is definitely the case in many of the humanities, but probably less true for those who participate in things like scientific research projects, where there are clearly useful lab roles for undergraduates to fill. In my personal experience, all my math work was assigned to me (inside and outside of class), while on the humanities side, I basically never wrote a serious essay whose topic I did not create. (This sometimes led to less-than-sensible papers, especially in areas where I felt that I lacked background and so had to find somewhat bizarre topics that I was confident were "original.") 

My guess is that changing this would be valuable, but might be very hard. Projects like Effective Thesis come to mind. 


Thanks for the comments!

Speaking from my experience in AI governance: There are some opportunities to work on projects that more experienced people have suggested. At GovAI we have recently made a list of ideas people should work on. People on the GovAI fellowship program have been given suggestions.

Overall, yes, I do think there are fewer such opportunities than it sounds like there are in technical areas. That makes sense to me, because for AI governance research projects, the vast majority of junior people don't yet have the skills necessary to execute the project to a high standard.

Another potential difference is that you don't get do-overs: the more senior person can't later write a paper that follows exactly the same idea but that's written to a much higher standard, because there's more of a requirement that each paper brings original ideas. (Perhaps in technical subjects you can say e.g. "previous authors have tried to get this method to work but the results weren't great, and we show that it actually works really well".)

Therefore, I don't think the problem is that we have bad norms. The deeper issue is that we need to find ways of accelerating the very slow process of junior researchers learning how to execute research projects to a high standard.

Another potential difference is that you don't get do-overs: the more senior person can't later write a paper that follows exactly the same idea but that's written to a much higher standard, because there's more of a requirement that each paper brings original ideas.

Hmm taking a step back, I wonder if the crux here is that you believe(?) that the natural output for research is paper-shaped^, whereas I would guess that this would be the exception rather than the norm, especially for a field that does not have many very strong non-EA institutions/people (which I naively would guess to be true of EA-style TAI governance).

This might be a naive question, but why is it relevant/important to get papers published if you're trying to do impactful research? From the outside, it seems unlikely that all or most good research is in paper form, especially in a field like (EA) AI governance where (if I understand it correctly) the most important path to impact (other than career/skills development) is likely through improving decision quality for <10(?) actors. 

If you are instead trying to play the academia/prestige game, wouldn't it make more sense to optimize for that over direct impact? So instead of focusing on high-quality research on important topics, write the highest-quality (by academic standards) paper you can in a hot/publishable/citable topic and direction. 

^ This is a relevant distinction because originality is much more important in journal articles than in other publication formats: you absolutely can write a blog post that covers the same general idea as somebody else but better, and AFAIK there's nothing stopping a think tank from "revising" a white paper covering the same general point but with much better arguments.

One reason to publish papers (specifically) about AI governance (specifically) is if you want to build an academic field working on AI governance. This is good both to get more brainpower and to get more people (who otherwise wouldn't read EA research) to take the research seriously, in the long term. Cf. the last section here: https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact

Draft and re-draft (and re-draft). The writing should go through many iterations. You make drafts, you share them with a few people, you do something else for a week. Maybe nobody has read the draft, but you come back and you’ve rejuvenated your wonderful capacity to look at the work and know why it’s terrible.

Kind of related to this: giving a presentation about the ideas in your article is something that you can use as a form of draft. If you can't get anyone to listen to a presentation, or don't want to give one quite yet, you can pick some people whose opinion you value and just make a presentation where you imagine that they're in the audience.

I find that if I'm thinking of how to present the ideas in a paper to an in-person audience, it makes me think about questions like "what would be a concrete example of this idea that I could start the presentation with, that would grab the audience's attention right away". And then if I come up with a good way of presenting the ideas in my article, I can rewrite the article to use that same presentation.

(Unfortunately, I myself have mostly taken this advice in its reverse form. I've first written a paper and then given a presentation of it afterwards, at which point I've realized that this is actually what I should have said in the paper itself.)

This is great advice, thanks for writing this!

Several people have also recommended the book The Lean PhD, which I haven't yet read, but it has some obvious parallels with this post :)

Another somewhat related book recommendation (which I liked and would recommend to a few people early in their career,  in research and elsewhere): Reid Hoffman's The Start-Up of You.


Great, thanks! I'll check it out :)

Write out at least 10 project ideas, and ask somebody more senior to rank the best few

For bonus points, try to understand how they did the ranking. That way, you can start building up a model of how senior researchers think about evaluating project ideas, and refining your own research taste explicitly.

I am sitting in a virtual lecture with Cleve Moler, inventor of MATLAB. He just told us that he produced a 16mm celluloid film to promote the singular value decomposition in 1976. A clip from the film he produced made it into Star Trek: The Motion Picture in 1979. It's on a screen behind Spock. A point of evidence in favor of the idea that promoting ideas matters in academia.

I asked Cleve about what made him decide that the singular value decomposition, and later MATLAB, were topics worth focusing on. What sources of information did he look to? Was he trying to discern what other people were interested in?

What I took from his response was that he never picked topics based on the scale of the potential application. For example, he didn't decide to study the mathematics underpinning computer graphics because of the applied importance of computer graphics. He just has a relentless interest in the underlying mathematics, and wants to understand it. What can we learn about the quaternion, the four-dimensional number system that's a workhorse of computer graphics? His understanding of these topics developed bit by bit, through small-scale interactions with other people.

We should treat this sort of account with skepticism, both because it's a subjective assessment of his own history, and because it's a single and unrepresentative example of the outcomes of academic mathematical research. Cleve might have simply lucked into a billion-dollar topic. The fact that we're all asking him about his background is the result of selecting for outcomes, not necessarily for an unusually effective process.

But I think what he was saying was that to find ideas that are likely to nerd snipe somebody else, it's important to use your judgment and try to identify components of a field in an academic sense that are clearly important, and try to understand them better. Having a sense of judgment for the importance of components of a system seems like an important underlying skill for the "lean startup" approach you're describing here.

Thanks for writing this, I found it helpful and really clearly written!

One reaction: if you're testing research as a career (rather than having committed and now aiming to maximise your chances of success), your goal isn't exactly to succeed as an early stage researcher. It might be that trying your best to succeed is approximately the best way to test your fit - but it seems like there are a few differences:

  • "Going where there's supervision" might be especially important, since a supervisor who comes to know you very well is a big and reliable source of information about your fit for research - which seems esp. important given that feedback in the form of "how much other people like your ideas" is often biased (e.g. because most of your early ideas are bad) or noisy (e.g. because some factors that influence the success of your research aren't under your control).
  • It might be important to test your fit for different fields or flavours (e.g. quantitative vs qualitative, empirical vs theoretical) of research. This can come apart from the goal of trying to succeed as an early-stage researcher - since moving into unfamiliar territory might mean your outputs are less good in the short term.
  • Relatedly, it might be important to select at least some of your projects based on the skills or knowledge gaps they help you fill. Again, this goal might come apart from short term success (e.g. you pick a forecasting project to improve those skills, despite not expecting it to generate interesting findings)
  • Probably you want to spend less energy marketing your work, except to the extent that it's helpful in getting more people to give you feedback on your fit for a research career.
  • [most uncertain] "Someone senior tells you what to work on" might actually not be the ideal solution to your problem 1. If the skills of research execution and research planning are importantly different, then you might fail to get enough info about your competence/enjoyment/fit for research planning skills (but I'm pretty uncertain if they are importantly different).

I'd be curious how much you agree with any of these points :)

The 'lean startup' approach reminds me of Jacob Steinhardt's post about his approach to research, of which the key takeaways are:

  • When working on a research project, you should basically either be in "de-risking mode" (determining if the project is promising as quickly as possible) OR "execution mode" (assuming the project is promising and trying to do it quickly). This probably looks like trying to do an MVP version of the project quickly, and then iterating on that if it's promising.
  • If a project doesn't work out, ask why. That way you:
    • avoid trying similar things that will fail for the same reasons.
    • will find out whether it didn't work because your implementation was broken, or the high-level approach you were taking isn't promising.
  • Try hard, early, to show that your project won't solve the problem.

Thank you for sharing. I agree with adding a somewhat commercial dimension to research (possibly not all research). It can inspire a better-balanced incentive structure, accelerate the process, and possibly attract private funding (without corroding one’s research integrity, process, and outcomes). I have only regained interest in STEM this year (as an enthusiast), and I keep coming across recurring issues with the process and a dearth of funding. The ones that feel most pertinent: difficulty funding research outside the generally expected areas of a field (in general, and in such a capital-abundant period), some corrosive politics, and the pressure to treat each research project as if it were your last (but for the wrong reasons).
I think we can and should do better. I am working on something. 

But more immediately: 

  1. New Science (https://newscience.org/about) just got their 501(c)(3) research nonprofit incorporation status (they are starting with life sciences). I really like the idea and wish them all the success.
  2. Also, IMHO research should be published at earlier stages, to capture network effects and make research a more frequently iterative process. Perhaps they have a different mission, but I think Octopus (https://science-octopus.org/about) is a thoughtful approach.
  3. I really appreciate the list of ideas GovAI put together for further research. I suspect it was not meant to be final, and I encourage you to update it as and when you can. I think this is necessary for all fields, and very helpful in understanding the lay of the land.
  4. I think there is definitely a critical mass that will be interested in your research (vs. your Problem 2). Best of luck.