
Introduction

I used a recent Ask-Me-Anything (AMA) of Rethink Priorities to ask a series of questions about research in general (not limited to Rethink Priorities).

I’m posting these here individually to make them more visible. I’m not personally looking for more answers at this point, but if you think that readers would benefit from another perspective, I’d be delighted if you could add it.

Question

If you want to research a particular topic, how do you balance reading the relevant literature against thinking for yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first, so as to remain unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. (See this theory of mine.) Would you agree, or do you have a different approach?

Jason Schukraft

I think it depends on the context. Sometimes it makes sense to lean toward thinking more, and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.

David Bernard

Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great: you’ve gained experience solving a problem and shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.

That said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds, and it seems pretty difficult to do that by just thinking to yourself!

Michael Aird

I don’t think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.

I’m somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.

I feel like EA might have a bit too much of a tendency towards “think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it.” It might be that, often, people could get to similar ideas faster, and in a way that connects better to existing work (making it easier for others to find, build on, etc.), by doing some extra reading first.

Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I’m tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.

(On this general topic, I liked the post The Neglected Virtue of Scholarship.)

Less important personal ramble:

I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.

But then I’ve repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it’s such an easily checkable thing!) And I’ve also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.

So maybe that feeling that I’m spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I’d (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking “Is this how I’d treat a friend?” in response to negative self-talk [source with related ideas].)

Alex Lintz

I’ve been playing around with spending 15–60 min. sketching out a quick model of what I think about something before starting in on the literature (though it’s by no means something I do consistently). I find it can be quite nice and helps me ask the right questions early on.

(If one of the answers is yours, you can post it below, and I’ll delete it here.)

Answers

Thanks for writing this! I think about this a lot, and this helped clarify the problem for me.

The problem can be summarized as a couple of competing forces. On one side, there’s not wanting to reinvent the wheel: humanity makes progress by standing on the shoulders of giants.

On the other side, there’s 1) avoiding anchoring (not getting stuck in how people in the field already think about things) and 2) the benefits of having your own model (it forces you to think actively and helps guide your reading).

The problem we're trying to solve is how to get the benefits of both.

One potential solution is to start off with a small amount of thinking on your own, like Alex Lintz described, and then spend time consuming existing knowledge. From there you can alternate between creating and consuming: start with the bulk of your time spent consuming, with short periods of creating interspersed throughout, and let the time spent creating grow longer as you progress.

Schools already work this way to a large extent. As an undergraduate you spend most of your time simply reading existing literature and only occasionally making novel contributions. Then, as a PhD student, you focus mostly on making new contributions.

However, I do think that formal education does this suboptimally. Thinking creatively is a skill, and like all skills, the more you practice, the better you get. If you’ve spent the first 16 years of your education more or less regurgitating pre-packaged information, you’re not going to be as good at coming up with new ideas, once you’re finally in a position to do so, as you would have been if you had practiced along the way. This definitely cross-applies to EA.


I lean toward: When in doubt, read first and read more. Ultimately it's a balance and the key is having the two in conversation. Read, then stop and think about what you read, organize it, write down questions, read more with those in mind.

But thinking a lot without reading is, I’d posit, a common trap that very smart people fall into. In my experience, smart people trained in science and engineering are especially susceptible when it comes to social problems, sometimes because they explicitly don’t trust “softer” social science, and sometimes because they don’t know where to look for things to read.

And that's key: where do you go to find things to read? If like me you suspect there's more risk of under-reading than under-thinking, then it becomes extra important to build better tools for finding the right things to read on a topic you're not yet familiar with. That's a challenge I'm working on, and one where there's very easy room for improvement.

Yeah, I broadly share those views.

Regarding your final paragraph, here are three posts you might find interesting on that topic:

(Of course, a huge amount has also been written on that topic by people outside of the EA and rationality communities, and I don't mean to imply that anyone should necessarily read those posts rather than good things written by people outside of those communities.)

Comments

I just want to thank you for taking the time to make this sequence. I think that the format is clear and beautiful and I'm interested to learn more about EA researchers' approach to doing research.

Thank you! Also for the answer on the first question! (And thanks for encouraging me to go for this format.)
