Thanks to Michael Aird for feedback on the ideas in the post, and to Nora Ammann and Damon Binder for feedback on the post itself.
At the end of last year, I decided that I wanted to pursue some research interests of mine, and test my fit for research in the EA space more generally. I ended up on a three-month, part-time contract as a researcher, working on various aspects of the history of social movements with a mentor at a different organisation.
Looking back over these 3 months, I think I learned some pretty useful things from trying to test my fit for research. In this post, I try to share these learnings, primarily with an audience of people who are interested in doing research, but don’t have much experience yet.
- My research background: prior to this three-month project, I had done a master’s in literary history and some policy research, but no directly truth-seeking work on EA-relevant questions. I had also worked in research management for a few years at FHI, so I had a fair bit of context on doing EA research. No quant or technical background.
- The research I did: there are lots of different sorts of research, and these learnings won’t port across to all of them. Types of research I did:
- Background reading (e.g. reading lots of Wikipedia pages on different social movements, reading introductory literature from the field of social movement studies)
- A bit of literature review work (e.g. what EAs have already written on the history of social movements, and how good it is)
- High-level thinking about the space of questions that matter (e.g. a list of research ideas on the history of social movements which I hope to post soon)
- Some research-analyst-style, back-of-the-envelope answers to quantitative-ish questions (e.g. this post on the rate of terrorist attacks for different groups)
- Notes on my own credibility:
- Research: from this test, I think I have the potential to be an average but not great researcher, within the domain I’m interested in and competent at. If you want advice on how to do great research, ask a great researcher.
- Testing fit for research: I think I did a good job at testing my fit for research, though there were things that could have been better.
- I only drew a tentative and quite mild conclusion from the test (average but not great). However, I think it’s hard to get very conclusive results from this sort of test (loads of confounders), and so for a 3 month test this is par for the course.
- I think I learned a lot from the test. You can assess this directly from this post.
- I also made some mistakes in the way I set up the test. In particular, I think I should have tried harder to get someone to mentor me on a question they cared about for their own research, and that I should have set my goals more clearly.
- Reflection: I think I’m above average at reflection. You can assess this from this post, and the post here.
The below is basically a series of short reflective essays. Rather than offering abstract models, I’m trying to share the texture of my experience, in the hopes that this will make the things I’m writing about easier to really ‘get’. Unfortunately this writing style is also quite wordy. For a quicker read, skim the headings, and then pick ones that sound relevant to you to read in full.
Forming opinions is really useful
It’s (relatively) easy to ask interesting questions and speculate on their answers, but harder to actually come out and say ‘currently x is my best guess’, because then you can be wrong.
Testing my fit for research forced me to form opinions in various ways:
- To try to answer a research question, I needed to form some kind of opinion.
- I knew my mentor would ask me what my opinion was, and wanted to be able to answer.
- I’d read different people arguing against one another, know that they couldn’t both be right, and want to figure out what was going on. This meant forming my own opinion about the underlying matter at hand.
I found practising forming opinions a bit scary, but also very good for my epistemics and my confidence:
- Forming an opinion makes it way easier for me and others to notice that I’m confused or wrong, and improve my thinking.
- I can form opinions! Pre-EA I naturally formed opinions, but since getting involved I haven’t really. Practising reminded me that I have a brain and can use it to think things for myself.
There are many reasons why I didn’t previously feel empowered to form opinions on EA stuff, and I think it’s worth listing them as I expect other people share some of them:
- ‘I’m supposed to be epistemically humble and defer to people who are cleverer than me/know more about this stuff.’
- I think it’s genuinely very important to practise intellectual humility and often deference is the right move. But thinking for yourself also seems important. I haven’t worked out a model for how to balance these things, and think it’s an interesting problem.
- ‘I work in ops not research, so I can’t have opinions.’
- I don’t endorse this belief, but think it’s common.
- ‘I don’t have a technical background, so I can’t have opinions.’
- As stated, I also don’t endorse this belief.
- I also want to point out that both the ops/research and the non/technical distinctions correlate with not/male. I weakly expect that not forming opinions is a bigger problem for women than men in EA, and this seems bad.
To be clear, I’m not claiming that testing your fit for research is the only, or best, way of practising forming opinions. It’s just the way that I started doing so, and one of the possible benefits to be had from testing fit for research.
Thinking with numbers is really useful
This is a very common position in the EA community, so I expect many people don’t need me to tell them this. But even though I had heard this many times, I only properly understood how quantitative thinking was useful by actually doing it.
Before this research project, I didn’t feel excited about thinking with numbers. There were a few different things going on here:
- ‘I don’t have quantitative skills, so I can’t think quantitatively.’
- ‘To think quantitatively I’ll have to do a stats course, and I don’t want to and don’t have time.’
- ‘I like poetry and language, not numbers.’
- ‘If I start thinking quantitatively, I’ll have to think about physics and economics and stuff, but I want to think about history and culture.’
Because of thoughts like this, I wouldn’t have proactively sought out opportunities to think with numbers. Fortunately for me, my mentor ended up giving me a project to do which required some numbers. To my surprise, I found that:
- I was capable of usefully thinking with numbers, just using basic maths.
- It didn’t instantly destroy my identity as a person who likes poetry and language.
- I could use numbers to think about history and culture (in some cases at least).
- It was actually fun.
- It changed the way I thought, for the better.
To expand upon the last point, here are the things I found most useful about thinking with numbers:
- It forces you to form an opinion. In turn, this makes it easier to change your mind.
- Putting numbers to things can help to show which things matter a lot, and which things are basically rounding errors.
- Trying to model something out in numbers helps show where you are confused.
- [There’s a toy example in this footnote for people who want more of a sense of how I found this useful.]
As with forming opinions, I’m not claiming that doing research is the only or best way to learn to think with numbers (and some kinds of research wouldn’t help at all). But it is one possible way.
You need surface area to have an impact
Some of my research interests are motivated by wanting to improve the EA community. Over the course of this project, I realised that:
- I don’t actually have great data sources on ‘the EA community’.
- I don’t read the Forum very often, I’m not part of a local group, I haven’t been to EAG in a while, etc. My ideas about what the EA community is like mostly come from a small circle of friends, and some past interactions.
- I also don’t have a good understanding of which questions are live and action-relevant for relevant decision-makers.
- What are community-builders currently uncertain about? What are funders uncertain about? What’s the strength of evidence on the things that there’s currently consensus on? Which things have already been thought about, and which haven’t? Without knowing the answers to at least some of these things, it’s hard to pick useful questions to work on.
I ended up working on a set of questions where my mentor did have good surface area, and could guide me towards the action-relevant bits. If that hadn’t been the case, I think I wouldn’t have ended up doing any useful research.
My main takeaway here is that you need to have good surface area to do impactful research. Here is a non-exhaustive list of kinds of surface area it might be useful to seek, depending on your project:
- Talking to people who are decision-makers over the thing you want to influence, or to people who talk to those people.
- Hanging out with relevant groups (e.g. biochemists if you want to advance biochemistry, EAs if you want to improve the EA community, a range of different people if you want to influence public opinion…).
- Talking to people who are experts.
- Reading the stuff that all of the above people read.
- Going to the places (conferences, meetups, fora) that the above people frequent.
Basically, make sure you’re in close contact with the people and ideas that are relevant for your work. Put like that, it’s a kind of obvious point, but I think it’s easy to neglect the social aspect, and think that if you just read the relevant peer-reviewed literature, you will have enough context. I don’t think this is true for research generally, and in particular for research that’s trying to have an impact. The cutting edge of useful questions will ~never appear in the peer-reviewed literature, because of how long that process takes. Besides, there’s often lots of nuance and tacit knowledge involved, which you can’t get at unless you actually spend time with the relevant people. (Probably there are other reasons too.)
I think there are two parts of the research process that surface area is particularly important for:
- It’s hard to identify useful questions if you don’t have surface area.
- Even if you answer useful questions well, if you don’t have surface area no one will know.
For early-stage researchers, I think this is especially worth bearing in mind when it comes to choosing mentors. Ideally, you want to find someone with more surface area than you on the thing you want to impact. Otherwise, there’s a high risk of working on stuff that is irrelevant, or that no one ever reads.
Working on someone else’s question is easier than working on your own
Part of the reason why I wanted to do some research in the first place was that I felt that I had a bunch of interesting and potentially useful ideas. It seemed natural then to work directly on those ideas.
What happened next was that I spent a long time on background reading, trying to refine questions, realising they needed more refining, and eventually getting beached and feeling like all of my questions were useless. I spent a week feeling bad and being quite unproductive, and then suddenly things turned around and I started doing directly useful things.
This turnaround didn’t happen because I finally figured out my own ideas: my mentor Damon just said ‘it’d be pretty useful for my research if I knew the answer to question x. How about you work on that for a bit?’
Things immediately got easier and more useful, and in retrospect I wish I’d tried harder at the beginning to get someone to mentor me on a question they cared about. I don’t have a coherent model here, but some things I’ll note:
- Getting to the front of a field is hard and time-consuming. You need to read and know a lot before you can ask the most interesting questions. Working for someone who’s done more of that reading than you means you can shortcut to more useful questions than you’d otherwise have accessible.
- Relatedly, choosing overall research directions is hard. If you try to go it alone, there’s a danger that you’ll spend your entire time on meta questions about what the most useful directions are, and never actually answer any object level questions.
It’s easy to get completely stuck
In the course of 3 months, I spent about a week genuinely stuck. I would start trying to do something, realise it was harder than I thought, and give up. Then I’d pick up something else, but while doing it I’d start to worry that it wasn’t actually worth doing at all. Sooner or later, I would just be staring at my screen, stuck. Occasionally I’d try to address the meta problem that I was stuck, but then I’d feel bad that I was spending so much time on meta and not making any object level progress, and go back to some object level thing, which I’d then get stuck on…
I had seen other people get stuck on their research, but deep down I sort of thought I was different. I didn’t seem to get stuck on my other work, and I thought of myself as a productive person who would be able to work through challenges, not get beached by them.
It turns out I am not different to those people, and I now finally get how easy it is to get stuck.
I think getting unstuck is very situation specific: perhaps the question is actually too hard, perhaps you’re right that it’s not terribly useful, or maybe you just lack confidence and need someone to tell you you’re doing fine. The way I got unstuck was by working on a question my mentor gave me instead of the stuff I was worrying about.
My main piece of general advice is, ask for help. In an ideal world, you have a mentor or manager who you can talk to about this. If you don’t, ask other researchers, or friends who you’ve found it useful to talk to in the past. Don’t despair if the first person you ask says no, or you have a conversation but it doesn’t help. Think who else might have useful insights, and ask them.
Meanwhile, go easy on yourself. If there are any lower priority tasks that feel easier to do, or robustly but only mildly useful, do those. Get some easy wins, read a few of the books on your ‘I wish I had time’ list, give feedback on other people’s work, write up the blog post you’ve been meaning to - anything that reduces the amount of time you’re staring at your screen feeling bad. It will pass.
Answering a question is harder than reasoning about other people’s answers
When I started on this research project, I found it much harder to make progress than I had done during my undergraduate and master’s degrees.
In previous research work I was responding to existing literature, so there was already a framework for thinking about the question. I was usually doing one or multiple of:
- Answering a question that others had already answered
- Synthesising what other people thought the answer to a question was
- Taking some new evidence and relating it to arguments other people had already made
In some sense, my work was responding more to a paper world of existing literature and arguments than to the real, messy world.
For this research project, I was often trying to ask a question which started from the world, not existing literature. This meant that I needed to figure out a framework for my own thinking, which felt much harder to do.
(NB I think that often you can look at a question either from the real or the paper world perspective, and often the best approach involves a bit of both.)
Appendix: miscellaneous learnings
I also learned a variety of more minor things. I’m not going to write these up in detail, as I think the key learnings are more important and the post is easier to read if it covers fewer things, but if anyone comments that they are particularly interested in a given point, I’ll try to expand.
- Set goals more clearly at the beginning
- Switch more between input (reading) and processing (thinking)
- Laying stuff out well is sometimes important for summarising and comparing what actually matters
- Explaining to someone else helps clarify what you think
- Being shown concretely what an output could look like is helpful
- Reasoning transparency is something that’s worth doing even for back-of-the-envelope things
- Asking people who might know the answer to an object level question can give you information you otherwise wouldn’t be able to get
- Just a simple timeline or list of facts can help show the shape of a thing, what you know and don’t know, what questions it might be good to ask
- When I’m stuck, it’s helpful to break down the thing I’m trying to do into smaller parts
- I tried to review my notes from yesterday at the start of each day, to get a sense of what was useful and what wasn’t
- I like having a doc or piece of paper handy for writing down unprocessed things I could do next
- Monitors are basically essential for spreadsheet work
- Using timers is helpful for keeping on track
- Don’t check your email/task management system before work
Things I didn’t realise before
- Infohazardy stuff has a nasty emotional/social cost
- Emails can be interesting and chewy
Footnotes

Clearly there’s an important limit here. But I claim that you only need basic maths for sometimes thinking numerically to be an improvement on never thinking numerically.
Let’s say I think that there’s a serious risk that a particular kind of duck will go extinct. I can leave it there, or I can read lots of stuff about the duck. Probably I will still hold my initial position after reading, as my initial position is pretty vague and compatible with lots of different states of the world. If instead I try to work out how likely it is with numbers, I’ll quickly have to learn lots of new things: how many of this species of duck are there right now? At what rate are the ducks dying? What is the minimum viable population of these ducks? How is the population rate changing over time?
Let’s say that I decide that the ducks are dying for two reasons: disease, and hunting by humans. If I’m just reading about the duck, I might read a lot about each of these things. If I’m also thinking with numbers, I might realise that 90% of the death rate is explained by disease, and so while hunting is also a problem, it’s much more important to understand the disease part. (Later I might decide that the hunting 10% is more tractable than the disease 90% and so I still want to learn about the hunting. Later still, I might realise that even though it’s tractable to halve the hunting, that won’t make a big enough difference to save the duck.)
I look for some more numbers, and find some data on pollution levels in the kind of wetland that this duck lives in. The good news is, this duck lives in places that are cleaning up their act fast. So at first, I try to convert these numbers into a decrease in the death rate. Then I realise that the insects that carry the duck disease also thrive in unpolluted wetlands. So what will the net effect be? Come to think of it, might hunting also get more popular if the wetlands are cleaner and better preserved? Or maybe concern for wetland pollution correlates with concern for ducks, and so hunting will reduce? I realise I’m confused and need to think more carefully about the relationship between habitat quality and the risk of duck extinction. If I hadn’t been trying to pull these different threads together into a model, I might not have noticed that I don’t really understand the connections between habitat and extinction.
I end up with an estimate: 45% chance of extinction by 2050. I go to my friend the duck expert, and they say that that sounds way too high. Have I thought about the expected impact of all of us duck activists on duck populations? No, I haven’t. Later, I go back to my numbers and try to work out how many ducks I think various interventions can save. It changes my numbers quite a bit, and I’m down to more like 15% risk now. If I had gone to my duck expert friend with my initial opinion, ‘there’s a serious risk this duck will go extinct’, they would probably have agreed with me. Even if they had mentioned the interventions, it would have been easy for me to miss that I hadn’t been taking them into account in my previous thinking.
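The kind of reasoning in this footnote can be sketched in a few lines of arithmetic. Here’s a toy model in Python, with every number invented purely for illustration (none come from the post), just to show how quickly a 90%/10% split tells you what matters:

```python
# Toy back-of-the-envelope model of the duck example.
# All numbers below are made up purely for illustration.

births_per_year = 250
deaths_per_year = 400
disease_share = 0.9   # fraction of deaths caused by disease
hunting_share = 0.1   # fraction of deaths caused by hunting

# Status quo: net change is births minus deaths, so the
# population shrinks by 150 ducks a year.
print(births_per_year - deaths_per_year)

# Halving hunting only removes 5% of total deaths,
# so the decline barely slows.
deaths_half_hunting = deaths_per_year * (1 - hunting_share / 2)
print(births_per_year - deaths_half_hunting)

# Halving disease removes 45% of total deaths,
# which is enough to flip decline into growth.
deaths_half_disease = deaths_per_year * (1 - disease_share / 2)
print(births_per_year - deaths_half_disease)
```

Even a model this crude makes the prioritisation visible: next to disease, hunting is close to a rounding error, which is exactly the sort of clarity that putting numbers on things can buy.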
Sometimes it won’t be possible to find a mentor with more surface area than you (because no one is available, or because it’s a novel project and no one has more surface area than you), and sometimes irrelevant or unread research is worth doing instrumentally, for your own learning or to put on your CV.
Comments

You mention that there are lots of different kinds of research, but I think this is the key point about testing fit. I'm pretty shocked by how uncorrelated research competences are.
So even if you fail at (say) solo academic technical research, you should definitely try team / assistant / desk / blog / strategy / research management before you write off research in general.
I have a similar knee-jerk reaction whenever I read a post "on research", so I wrote up my experience with different types of research: https://forum.effectivealtruism.org/posts/pHnMXaKEstJGcKP2m/different-types-of-research-are-different
(I'm not at all trying to imply that Rose should have caveated more in her post.)
This seems like a useful point, thanks!
It makes me want to give a clarification: the reflections above are just the most important things I happened to learn - not a list of generally most important points to consider when testing fit for research. I think I'd need more research experience to write a good version of the latter thing (though I think my list probably overlaps with it somewhat).
I also want to respond to "you should definitely try [...] before you write off research in general". I think I agree with this, conditional on it being a sensible idea for you to be testing your fit for research in general in the first place. Some thoughts:
Agree with all of this
Thank you for writing this. I think this contains lots of good information for the people you are aiming at.
An interesting read might be this paper: https://journals.biologists.com/jcs/article/121/11/1771/30038/The-importance-of-stupidity-in-scientific-research I think some of the struggles you ran into are just a part of doing research, and don't mean you're a worse fit for it.
Thanks, I enjoyed that post (and it's quite short, for people considering whether to read).