Work with me researching longtermist topics at Rethink Priorities!
- Researcher (Longtermism)
- Senior Research Manager (Longtermism)
- Researcher (AI Governance and Strategy)
- Fellow (AI Governance and Strategy)
Applications for all roles close on Sunday, October 24.
We're relatively new to the longtermist space, but I think we have:
- delivered some useful results already
- good norms and research culture
- a direct line to decision-makers.
Here are what I think are Rethink Priorities’ most salient pros and cons for researchers, relative to plausible counterfactuals:
Pros:
- fully remote work (it was already remote before 2020)
- impactful, decision-relevant research
- very interesting questions
- meaningful work
- above-average work-life balance
- decent pay
- excellent benefits
- good mentorship
- great coworkers
- minimal bureaucracy

Cons:
- fully remote work (I miss chatting with coworkers irl)
- less research flexibility than academia, FHI, or most independent research/blogging
- relatedly, research paradigms less well-suited to discovering drastic, revolutionary new ideas
- low interest in publishing papers/academia
- no senior longtermist researchers
If you want to work with me, Michael Aird, Peter Wildeford, and others to help hack away at some of humanity's greatest problems, please consider applying to Rethink Priorities!
If you are interested in applying, please feel free to ask questions here! I will try my best to respond to most questions publicly on this forum so the system is reasonably fair and everybody can learn from the shared questions! I’ve also attached an FAQ here.
EDIT 2021/10/09: See other comments about working on the LT team from my coworker Michael and our former Visiting Fellow Lizka.
What is your best guess as to the difference between my impact (mine specifically, but feel free to make your answer more general) if I worked with Rethink Priorities vs if I continued to work independently/with QURI?
(All views are my own. I'm not entirely sure how Marcus and Peter, and other people at RP, think about RP's impact; I know I've had at least one major disagreement before.)
Oooh asking the tough questions here! The short answer is that you should probably just apply and learn about your own fit, RP as an institution, and what you think about stuff like our theory of change through the application process!
The longer answer is that I don't have a good sense of how good your counterfactual is. My understanding is that QURI's work is trying to do revolutionary change in epistemics, while RP's work* is more tightly scoped.
In addition, my best guess is that at this stage in your career, direct impact is likely less important than other concerns like career capital and personal motivation.
Still, for the sake of having some numbers to work with (even if the numbers are very loose and subjective, etc), here is a very preliminary attempt to estimate impact at RP, in case it's helpful for you or other applicants:
The easiest way to analyze RP's impact is to look at our projects that aim to improve funder priorities, and guesstimate how much they improve decision quality.
When I do some back-of-the-envelope calculations on the direct impact of an RP researcher, I get something like mid-6 figures to high-7 figures** in terms of improving decision quality for funders, with a (very non-robust) median estimate in the $1-2M range.
I think this approach survives reasonable external validity checks. Some of these point toward the higher end: Michael Aird has been working part-time as an EAIF guest manager, he's approximately indifferent between marginal time on RP work vs marginal EAIF work, and amortizing his time there would get you to the upper end of that range. Some point toward the lower end: RP had ~12 FTE researchers 6 months ago, and EA overall is deploying ~$400M per year, so saying RP is responsible for ~1.5% of the decision quality of deployed capital feels much more intuitively defensible than saying we're responsible for ~$8M/researcher-year × 12 ≈ 25% of deployed capital***.
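To make the tension between those two framings concrete, here is a minimal back-of-the-envelope sketch. All figures are the rough assumptions from my comment (roughly $400M/yr of deployed EA capital, ~12 FTE researchers, a ~1.5% top-down share, ~$8M/researcher-year bottom-up), not robust estimates:

```python
# Illustrative back-of-the-envelope check; all inputs are loose assumptions
# from the comment above, not measured quantities.

EA_DEPLOYED_PER_YEAR = 400e6  # ~$400M of EA capital deployed annually (assumed)
RP_RESEARCHERS = 12           # ~12 FTE researchers at RP (assumed)

# Top-down framing: RP improves the decision quality of ~1.5% of deployed capital.
top_down_total = 0.015 * EA_DEPLOYED_PER_YEAR
top_down_per_researcher = top_down_total / RP_RESEARCHERS

# Bottom-up framing: ~$8M of decision-quality improvement per researcher-year.
bottom_up_per_researcher = 8e6
implied_share = bottom_up_per_researcher * RP_RESEARCHERS / EA_DEPLOYED_PER_YEAR

print(f"Top-down: ~${top_down_per_researcher / 1e6:.1f}M per researcher-year")
print(f"Bottom-up implies RP drives ~{implied_share:.0%} of deployed capital")
```

The two framings disagree by more than an order of magnitude (~$0.5M vs ~$8M per researcher-year), which is why I treat the bottom-up number as an upper end rather than a central estimate.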
There are out-of-model reasons to go lower than, say, $1-2M, like replaceability arguments, and reasons to go higher, like the fact that RP is trying to expand: joining earlier is more helpful than joining later, because you can help us scale well in the early stages while maintaining/improving culture and research quality. My current guess is that the reasons to go higher are slightly stronger.
So my very very loose and subjective guess is that you should maybe be indifferent between working at RP now as a junior longtermism researcher for a year vs say ~$2.2M**** of increase in EA capital on impact grounds. So one approach is considering whether ~$2.2M growth in EA capital is better or worse than your counterfactual, though again it's reasonably likely that career capital and motivation concerns might dominate.
*or at least the direct impact of our specific projects, I think one plausible good future/goal for "RP" as an institution is roughly trying to be a remote-first megaproject in research, which doesn't have close analogues (though "thinktank during covid" is maybe closest).
** Note that this is a range of different possible averages using different assumptions rather than a real 90% credible interval. I do think it's >5% likely that our work is net negative.
*** This is pretty loose; not all RP projects are meant to advise funders. Also, some of what's going on is that I'm forecasting that RP's research will in some cases affect 2-5 years of funding rather than 1 year, like Neil's EU farmed animal legislation work.
**** Greater precision in these estimates does not imply greater confidence.
Some thoughts from my own perspective (again, not necessarily RP-wide views):
I agree with Linch that this is a tricky question. I also agree with a lot of the specific things he says (though not all). My own brief reply would be:
(I think this is basically what I'd say regarding someone about whom I know nothing except that they work at QURI in a substantial capacity - not just e.g. intern. I.e., I don't think my knowledge of you specifically alters my response here.)
For onlookers, note that QURI has ~2 FTEs or so, so Michael isn't exactly anonymizing a lot.
(I didn't mean just existing QURI staff - I meant like imagining that I'd stopped paying attention to QURI's staff for a year but still knew their work in some sense and knew they had 1-4 people other than Ozzie, or something. I guess you'd have to imagine I knew the output scaled up to match the number of people, and that it seemed to me each non-Ozzie employee was contributing ~equally to the best of my knowledge, and there's probably tricky things around management or seniority levels, but hopefully people get what I'm gesturing at.)
I think I agree with everything you said here! Thanks for putting it so succinctly!
[This comment is less important, and you may want to skip it]
Some places where my views differ from Linch's comment:
For what it's worth, I don't think those two are the same thing. I usually think of "improving decision quality" in terms of situations that roughly look like: a funder wants to invest $X in Y, we look at the evidence and suggest something like
as some percentage improvement on $X in the first two cases, and I usually think of that percentage as less than 100%. Maybe 20-50%*? It depends on funder quality. So I don't think "adding 1 dollar to the pool of EA resources" and "guiding 1 dollar towards a good donation target" are the same thing; you're implying a >100% improvement, and I usually think improvements are lower, especially in expectation. Keep in mind that there's usually at least one additional grantmaker layer between donors and the people doing direct work, and we have to be careful to avoid double-counting (which I was maybe a bit sloppy about too, but it's worth noting).
The other thing to note is that this "decision quality" approach already might inflate our importance at least a little (compared to a more natural question candidates might ask like at what $X amount should they be indifferent between working for us and earning-to-give for $X) because it implies that the cause prioritization of EA is already basically reasonable, and I don't actually believe this, in either my research or my other career/life decisions.
A different tack here is a quick sanity check: maybe it has happened a few times before, but I'm not aware of any instance where an RP employee was so confident about an intervention/donation opportunity they had researched that they decided it was a clearly better bet than RP itself. Obviously there are self-serving reasons/biases for this, but I basically think this is a directionally correct move from the POV of the universe.
* I need to check how much I can share, but 20% is not the lowest number I've seen from other people at RP, at least when I talk about specific intervention reports.
Yeah, this is a good point that I think I hadn't had saliently in mind, which feels a bit embarrassing.
I think the important, correct core of what I was saying there is just that "the value of adding 1 dollar to the pool of EA resources" and "the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)" are not necessarily the same, and plausibly actually differ a lot, and also the value of the latter thing will itself differ a lot depending on various specifics. I think it's way less clear which of the two things is bigger and by how much, and I guess I'd now back down from even my tentative claims above and instead mostly shrug.
(EDIT: I realised I should note that I have more reasons behind my originally stated views than I gave, basically related to my EAIF work. But I haven't given those reasons here, and overall my views on this are pretty unstable and not super informed.)
I agree with this.
Cheers, and thanks to both you and Michael for going into some depth.
+1. I found this discussion really interesting and am happy it happened publicly.
Obviously, I like both groups a lot and think they both have different (though overlapping) benefits.
Question that came up 3x+ in Twitter/texts/emails:
As I'm partially responsible for hiring for this round, our internal policy is that I shouldn't have calls with potential candidates for this role about the job, as there's a risk that I will accidentally leak information about our hiring criteria in a private call, and this will be unfair for candidates who aren't in my network.
Please let me know if you think this policy is bad; we're constantly re-evaluating our policies to balance various factors, including efficiency and fairness.
One thing I'd add: It seems likely to me that if a call is worth the person's time + Linch's time, then it's actually a better move to just apply. The initial application stage takes 2 hours or less, and people will learn through the application process, and at each stage a substantial fraction of people don't proceed (cutting down the total time cost and making the question "Is this job a good fit for me?" less relevant to ask). If a 30 min call is worthwhile, it seems like the conclusion will probably be "may as well apply", in which case I'd just skip to "may as well apply".
I'm still very happy for people to reach out to me, but mainly so I can encourage them to go ahead and apply! (Assuming they indeed think a 30 min call would be worth their time. I obviously don't think every single person in EA should apply.)
Also, to be clear, I don't say this just for Rethink specifically or for orgs I work at specifically. E.g., yesterday I said roughly the same thing to someone who was interested in working at the Center on Long-Term Risk and wanted to know what working there was like for me - I encouraged them to probably just apply to work there instead of spending more time thinking about whether they'd want to accept an offer in the unlikely event they receive one, and to focus our call on other things.
(I think one could actually check whether that argument is sound by laying out the various time costs, learning benefits, and probabilities. I haven't done this, but I'd guess it's basically correct.)
+1 to thinking one should just apply. Related: I think it's usually still useful to hear the perspectives of more rather than fewer potential coworkers, and I wonder if it makes sense for orgs to have a "This is what our employees say it's like to work here" document. I would've read that before applying, but I wouldn't have reached out about it. Though I guess learning about the perspectives of your potential co-workers is a nice way to get to know them once you're far enough into the application process?
fwiw, I do think it seems good to have publicly available info (not just via 1-1s) regarding what various people at the org think it's like to work there, what a "day in the life" is like, etc. More generally, it seems good for people to write job profiles and "typical workday" writeups. Though I'm unsure how useful those things are and whether RP should prioritise producing more such things.
Some things that partially cover those needs which RP already has:
Thanks for the pointers, I hadn't seen Lizka's shortform and found it very useful for my evaluation of working at RP.
Somebody in DMs asked:
My answer: Yep! I think it makes a lot of sense for some people to do technical AI alignment research and we're not the best home for it, at least for the near future.
Just a quick note in favor of putting more specific information about compensation ranges in recruitment posts. Pay is by necessity an important factor for many people, and it feels like a matter of respect for applicants that they not spend time on the application process without having that information. I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.
I'm not sure if you are giving us accolades for putting this information in the job ads or missed that specific salary information is in the job ads. But we definitely believe in salary transparency for all the reasons you mentioned and if there's anything we can do to be more transparent, please let us know!
I just totally missed that the info was in the job ads -- so thank you very much for providing that information, it's really great to see. Sorry for missing it the first time around!
No problem - sorry that wasn't clear!
Feel free to apply if the salary range and other job relevant job details make sense for your personal and professional priorities!
If I were writing a similar comment, instead of "reducing variance" I think I would consider writing something like "improving efficiency and transparency, so organizations and candidates can maximize impact".
Maybe instead of “standardized roles and compensation across a lot of organizations” I would say something like “mature market arising from impactful organizations so that candidates have a useful expectation of wage”. (E.g. The sense that a seasoned software developer knows what she could get paid in the Bay Area and it’s not just some uniform prior between $50k and $10M).
The main reason this is relevant is shown by this comment chain, where Gregory Lewis has the final comment, and his comment seems correct.
The rest of this comment is low effort and a ramble, isn’t on anyone to know, but I think I will continue to write because it’s just good to know about, or something. Why I think someone would care about this:
But wait there’s more!
While the above is probably true, here are some facts that make it even more awkward:
Well, so anyways this is all a thing for these CEOs, executive directors and grant makers to work out.
It's why they pay them the big bucks... except for the founders of Rethink Priorities and their officers, whose mean salaries were about $33K according to their 2020 Form 990.
I think the takeaway is that I think there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.
It's worth noting that we have tripled pay since our 2020 Form 990 (covering 2019). CEO pay is currently $103,959/yr.
For onlookers, I will again note that our current pay range is $65k-85k for researchers, which is >2x $33k, though not quite 3x.
There are various reasons for our historically (and currently, and probably future) lower rates of pay than Lightcone, including but not limited to: a) being fully remote instead of based in one of the most expensive cities on the planet, b) most of RP's historical focus being on animal welfare, where there is significantly less of a funding overhang than in x-risk circles, and c) most of our employees (not myself) having counterfactuals in academia or other nonprofits rather than the tech or finance sector.
That said, I (personally) directionally agree with you that more pay for some of the earlier people is probably a good idea. At a risk of sounding obsequious, I do think there's a strong case that Peter and Marcus and some of the other early employees ought to a) get some risk compensation for developing RP into the institution that it is today or b) have higher salaries to help them make better time-value tradeoffs or c) both.
For people wondering, and who haven't clicked through to the job ads on the website, below are the compensation ranges for the Researcher roles:
Hmm job applications are closed now, but I think I want to partially retract the
After a natural resting point in my other research, I felt a bit of intuitive unease about jumping into AI gov immediately. So I've been mostly doing brainstorming, contacting people about weird ideas etc. I've done this with the full support of Peter, Michael, and other people at RP. So I think there's more research flexibility at RP for people who want it and can justify it than I previously thought; a bigger question is how much you want it.
I wanted to push back a bit on the "cons" part
I'm not sure if this was meant to reflect only the longtermist team or RP as a whole. I think these points are possibly too strongly stated.
In the Google doc you had some caveats to this. There are people at RP, and people joining RP who are moderately to very active in academic publication. Of course we don't place a great deal of importance on “hitting publication targets as points” as academia does. However, my impression is that we do place a high value on some of the gains from the peer review and “publication” process such as:
(This is why I'm looking for/pursuing a solution that allows us to have these gains without the wasteful and tedious parts of the academic publication process)
In terms of the 'research flexibility' point it is arguably less in one dimension, but more flexible in another dimension. There are some types of research (and 'impact of research) that you can do at RP that you cannot really do in academia, or at least you will be doing 'on your own time and at your own risk'.
In academia you may be constrained to produce 'research that academics think adds to the theoretical model' or that is 'trendy and will publish well'. You cannot necessarily survive in academia if you pursue
It might be more possible to do the above sorts of research and research-adjacent work at RP than in academia.
Yeah, fwiw, I personally feel I've had a lot of flexibility at Rethink (though I'll probably have somewhat less going forward, partly because I myself think that that'd be better), and I personally see "low interest in publishing papers/academia" as either irrelevant or mildly a pro for me personally. Though I do expect both of those things would vary between people.
This has inspired me to write another comment with my own personal list of pros and cons of working at RP.
Less internal incentive, true. But I'm not sure that they have so much less residual time to engage in the important parts of research that could lead to academic publications (or academic-quality work) vis a vis academia itself.
Academics tend to work round the clock, and have important teaching and administrative responsibilities that can also be very distracting from research. I think you probably could pursue an academic-level publication agenda at RP, at least as well as in many university academic jobs.
Speaking just to the publishing point: when I was trying to leverage my old network to help recruit for our past global health & development hiring rounds, it was definitely the case that development economists (who have a robust pre-existing non-EA academic field) viewed the org's weaker incentives to publish academic papers as a noticeable negative.
It's possible this is not as relevant for the longtermist roles, as a) I expect our candidate pool to on average be more EA and b) the academic fields for longtermist topics, outside of a few narrow subniches in existing disciplines, are noticeably less robust.
One plus I forgot to mention for academic applicants is that in addition to the minimal bureaucracy, we (obviously) have no mandatory teaching load. Researchers who want to "teach" can have interns or do mentorship calls with junior applicants.
I see that you and MichaelA both see low academic incentives or inclinations as a plus, rather than a cost to be paid. For me personally, this is all a moot point as I'm reasonably confident (especially now) that I can perform at the level of e.g. PhDs at top universities at doing RP-style work, whereas I think I will not be able to perform at that level in academia.
FYI - the link looking for/pursuing a solution goes to https://forum.effectivealtruism.org/users/bit.ly/unjournal which gives me a 404 Not Found
Ah, the EA Forum prepends itself to URLs if you don't include the http:// prefix.
But I'm trying to move the project into an 'action space' gitbook wiki to start to get this in motion, HERE: https://app.gitbook.com/o/-MfFk4CTSGwVOPkwnRgx/s/-MkORcaM5xGxmrnczq25/
Your FAQ contains the sentence:
But my understanding is that hiring people internationally is tricky to very tricky. Can you double-check with your operations team that this is still the case?
It’s definitely the case that we can hire people in most countries (though some countries have additional considerations we have to account for, like whether the person has working hours that will overlap with their manager, some financial / logistical constraints, etc), and we are happy to review any candidate’s specific questions about their particular location on a case by case basis if folks want to reach out to email@example.com. For reference, we currently have staff in the US, Canada, Mexico, Spain, UK, Switzerland, Germany, and New Zealand.
Fwiw, here's my own personal list of pros and cons of RP relative to other options I could imagine myself pursuing (e.g., working at FHI [full-time and for longer than I currently plan to], CEA, EA Funds, Founders Pledge, think tanks, or academia).
Though note that this is anchored by Linch's list, I didn't spend very long thinking about it, and pros and cons will differ between people. Also, many of these pros and cons also apply to some of my other options.
Also this isn't specific to the longtermism team but Rethink Priorities is hiring for a Communications Coordinator! You can help our researchers and managers save time and leverage better connections by communicating with stakeholders, including other researchers, key decision-makers, and donors. Also help us write better job ads!
A good candidate for this role might have, or will soon develop, not just a comparative advantage over some of our researchers at communications, but an absolute advantage as well. If you ever think to yourself "man, this Linch guy sounds really dumb on the internet, I hope he's not representing anything important," well, a) tough luck and b) this could change! Now's your chance to help make me sound less dumb!
Question that came up 2x:
We will probably have summer fellowships on the longtermism team, but both the fellowship program and our team are quite young, and we can't promise good fellowships for the summer just yet. This depends on our priorities, management/general capacity, and how good our projects are.
Someone on Twitter asks:
My response: Pretty much all the time! I think if we recast the research arm of EA as "the science of doing good, construed broadly" it should be pretty obvious that there are many more research questions than we have the dedicated research hours to answer.
Going into problem-solving mode here, but have you considered setting up physical office space somewhere (SF/DC/Ox/Lon)? Trying to share office space with other longtermist researchers?
The short answer is that I don't actually miss having coworkers more than I like having no commute etc.
I've considered WeWorking with Ozzie from QURI etc. but never got around to it.
Something that makes coordination a bit harder is that the Lightcone team in Berkeley has set up a pretty good space for AI alignment people so many of the people I might want to cowork with already have a really good BATNA.
I think this might change if RP hires longtermists from the Bay Area and we can have a nucleus of actual coworkers here, or alternatively if people congregate in Oxford I might move to join Michael A et al.
Some things to add to Linch's comment:
Please take all appropriate COVID precautions (ensuring everyone is vaccinated, getting rapid tests if available).
Relevant: How much slower is remote work?