
Work with me researching longtermist topics at Rethink Priorities!

Applications for all roles close on Sunday, October 24.

We're relatively new to the longtermist space, but I think we have:

  • already delivered some useful results
  • good norms and research culture
  • a direct line to decision-makers

Here are what I think are Rethink Priorities’ most salient pros and cons for researchers, relative to plausible counterfactuals:

Pros

  • fully remote work (it was already remote before 2020)
  • impactful decision-relevant research
  • very interesting questions
  • meaningful work
  • above average work-life balance
  • decent pay
  • excellent benefits
  • good mentorship
  • great coworkers
  • minimal bureaucracy

Cons

  • fully remote work (I miss chatting with coworkers irl)
  • less research flexibility than academia, FHI, or most independent research/blogging
  • relatedly, research paradigms less well-suited to discovering drastic, revolutionary change
  • low interest in publishing papers/academia
  • no senior longtermist researchers

If you want to work with me, Michael Aird, Peter Wildeford, and others to help hack away at some of humanity's greatest problems, please consider applying to Rethink Priorities!

If you are interested in applying, please feel free to ask questions here! I will try my best to respond to most questions publicly on this forum so the system is reasonably fair and everybody can learn from the shared questions! I’ve also attached an FAQ here.

EDIT 2021/10/09: See other comments about working on the LT team from my coworker Michael and our former Visiting Fellow Lizka

Comments

Just a quick note in favor of putting more specific information about compensation ranges in recruitment posts. Pay is by necessity an important factor for many people, and it feels like a matter of respect for applicants that they not spend time on the application process without having that information. I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.

I'm not sure if you are giving us accolades for putting this information in the job ads, or if you missed that specific salary information is in the job ads. But we definitely believe in salary transparency for all the reasons you mentioned, and if there's anything we can do to be more transparent, please let us know!

I just totally missed that the info was in the job ads -- so thank you very much for providing that information, it's really great to see.  Sorry for missing it the first time around!

No problem - sorry that wasn't clear!

Feel free to apply if the salary range and other relevant job details make sense for your personal and professional priorities!

For people wondering who haven't clicked through to the job ads on the website, below are the compensation ranges for the Researcher roles:

We do not require candidates to have formal academic credentials to be successful. We are hiring for three levels of experience:

  • Associate Researcher - ~1 year of research experience. The salary for this level is $65,000/yr to $70,000/yr, prorated for part-time work.
  • Researcher - either a relevant Masters degree and/or ~2 years of research experience. Experience working in one of our priority topics in an industry setting would count. The salary for this level is between $70,000/yr and $77,000/yr, depending on years of experience and the nature of your qualifications, prorated for part-time work.
  • Senior Researcher - either a relevant PhD degree and/or 5+ years of research experience. The salary for this level is between $77,000/yr and $85,000/yr.

I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.

 

Eh....

If I was writing a similar comment, I think I would consider writing, instead of “reducing variance”, something like “improving efficiency and transparency, so organizations and candidates can maximize impact”.

Maybe instead of “standardized roles and compensation across a lot of organizations” I would say something like “mature market arising from impactful organizations, so that candidates have a useful expectation of wages”. (E.g., the sense that a seasoned software developer knows what she could get paid in the Bay Area, and it’s not just some uniform prior between $50k and $10M.)

The main reason this is relevant is shown by this comment chain, where Gregory Lewis has the final comment, and his comment seems correct.

 

Uh.

The rest of this comment is a low-effort ramble that isn’t on anyone to know, but I’ll keep writing because I think it’s just good to know about, or something. Why I think someone would care about this:

  • Depending on your cruxes about the relevant worldview/cause area/models of talent, I think the impact and salaries being talked about here, driven by tails (e.g. “400k to 4M”), would make it unworkable to have “standardized” salaries or “ensure pay equity” in the sense most people would mean. Like, salary caps wouldn’t work out, people would just create new entities or something, and it would just add a whole layer of chicanery.
     
  • Credibility of the EA movement seems important, so it’s good to be aware of things like “antitrust”, “fiduciary duty”, and, as Gregory Lewis puts it, “colourably illegal”. Knowing what these mean would be useful if you are trying to build institutions and speak to institutions to shape AI policy and literally stop WW3.

 

But wait there’s more! 

While the above is probably true, here are some facts that make it even more awkward:

  • The count of distinct funders for AI and longtermist EA initiatives is approximately one and a half. So creating a bunch of entities (consultancies, think tanks) that have a pretty “normalized” price of labor is probably justified in some theoretical, practical and virtuous sense.
     
  • The salary issue is not academic. Right now, an EA org seems to be flat-out willing to pay 70% of US tech sector wages, lifted by the historic FAANG event (e.g. “we would be open to paying $315k for someone who would make $450k in industry”). This seems right and virtuous in this worldview. At the same time, there is a segment of EA with a strong frugal aesthetic that is also virtuous, and, importantly, older with many staff. So, despite dismissing both central planning and “equity” above, a laissez-faire sort of approach is going to be unwieldy. What will happen is that the comp gradient will create structural issues.
     

Well, so anyways, this is all a thing for these CEOs, executive directors, and grantmakers to work out.

It’s why they pay them the big bucks...except for the founders of Rethink Priorities and their officers, whose mean salaries were about $33K according to their 2020 Form 990.

I think the takeaway is that there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.
 

It’s why they pay them the big bucks...except for the founders of Rethink Priorities and their officers, whose mean salaries were about $33K according to their 2020 Form 990.

I think the takeaway is that there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.

 

It's worth noting that we have tripled pay since our 2020 Form 990 (covering 2019). CEO pay is currently $103,959/yr.

It’s why they pay them the big bucks...except for the founders of Rethink Priorities and their officers, whose mean salaries were about $33K according to their 2020 Form 990.

I think the takeaway is that there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.

For onlookers, I will again note that our current pay range is $65k-$85k for researchers, which is >2x $33k, though not quite 3x.

There are various reasons for the difference between our historically (and currently and probably future) lower rates of pay than Lightcone, including but not limited to: a) being fully remote instead of based in one of the most expensive cities on the planet, b) most of RP's historical focus being on animal welfare, where there is significantly less of a funding overhang than in x-risk circles, and c) most of our employees (not myself) having counterfactuals in academia or other nonprofits rather than the tech or finance sector.

That said, I (personally) directionally agree with you that more pay for some of the earlier people is probably a good idea. At the risk of sounding obsequious, I do think there's a strong case that Peter and Marcus and some of the other early employees ought to a) get some risk compensation for developing RP into the institution that it is today, b) have higher salaries to help them make better time-value tradeoffs, or c) both.

What is your best guess as to the difference between my (mine specifically, but feel free to make your answer more general) impact if I worked with Rethink Priorities vs if I continued to work independently/with QURI?

(All views are my own. I'm not entirely sure how Marcus and Peter, and other people at RP, think about RP's impact; I know I've had at least one major disagreement before.)

Oooh asking the tough questions here! The short answer is that you should probably just apply and learn about your own fit, RP as an institution, and what you think about stuff like our theory of change through the application process! 

The longer answer is that I don't have a good sense of how good your counterfactual is. My understanding is that QURI's work is aiming at revolutionary change in epistemics, while RP's work* is more tightly scoped.

In addition, my best guess is that at this stage in your career, direct impact is likely less important than other concerns like career capital and personal motivation.

Still, for the sake of having some numbers to work with (even if the numbers are very loose and subjective, etc), here is a very preliminary attempt to estimate impact at RP, in case it's helpful for you or other applicants:

The easiest way to analyze RP's impact is to look at our projects that aim to improve funder priorities, and to guesstimate how much they improve decision quality.

When I do some back-of-the-envelope calculations on the direct impact of an RP researcher, I get something like mid 6 figures to high 7 figures** in terms of improving decision quality for funders, with a (very not robust) median estimate in the $1-2M range.

I think this approach survives reasonable external validity checks, some of which point to the higher end of that range (Michael Aird has been working part-time as an EAIF guest manager, he's approximately indifferent between marginal time on RP work vs marginal EAIF work, and amortizing his time there would get you to the upper end of the range) and some of which point to the lower end (RP had ~12 FTE researchers 6 months ago, and EA overall is deploying ~$400M per year; saying RP is responsible for ~1.5% of the decision quality of deployed capital feels much more intuitively defensible than saying we're responsible for ~$8M/researcher-year x 12 ≈ 25% of deployed capital***).

There are out-of-model reasons to go lower than, say, $1-2M, like replaceability arguments, and reasons to go higher, like the fact that RP is trying to expand: joining earlier is more helpful than joining later, since you can help us scale well in the early stages while maintaining/improving culture and research quality. My current guess is that the reasons to go higher are slightly stronger.

So my very very loose and subjective guess is that you should maybe be indifferent between working at RP now as a junior longtermism researcher for a year vs say ~$2.2M**** of increase in EA capital on impact grounds. So one approach is considering whether ~$2.2M growth in EA capital is better or worse than your counterfactual, though again it's reasonably likely that career capital and motivation concerns might dominate. 

* Or at least the direct impact of our specific projects; I think one plausible good future/goal for "RP" as an institution is roughly trying to be a remote-first megaproject in research, which doesn't have close analogues (though "think tank during COVID" is maybe closest).
** Note that this is a range of different possible averages using different assumptions rather than a real 90% credible interval. I do think it's >5% likely that our work is net negative.
*** This is pretty loose; not all RP projects are meant to advise funders, and some of what's going on is that I'm forecasting that RP's research will impact 2-5 years of funding rather than 1 year of funding in some cases, like Neil's EU farmed animal legislation work.
**** Precision of estimates does not imply greater confidence.
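(For anyone who wants to poke at these numbers, here's a minimal back-of-envelope sketch in code; every figure is a loose, subjective input copied from this comment, not an official RP estimate.)

```python
# Minimal back-of-envelope sketch of per-researcher impact, using the rough,
# subjective figures from the comment above.

ea_deployed_per_year = 400e6   # ~$400M of EA funding deployed per year
rp_fte_researchers = 12        # ~12 FTE researchers as of ~6 months ago

# Lower-end sanity check: suppose RP is responsible for ~1.5% of the
# decision quality of deployed capital.
low_total = 0.015 * ea_deployed_per_year
low_per_researcher = low_total / rp_fte_researchers          # ~$0.5M/yr

# Upper-end sanity check: amortizing part-time grantmaking time suggests
# something like ~$8M of improved allocation per researcher-year, which
# would imply RP drives ~25% of all deployed capital -- much harder to defend.
high_per_researcher = 8e6
implied_share = high_per_researcher * rp_fte_researchers / ea_deployed_per_year

print(f"Low end:  ~${low_per_researcher / 1e6:.1f}M per researcher-year")
print(f"High end: ~${high_per_researcher / 1e6:.0f}M per researcher-year "
      f"(implies ~{implied_share:.0%} of deployed capital)")
# A (very not robust) median lands in the ~$1-2M range.
```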

Some thoughts from my own perspective (again, not necessarily RP-wide views):

I agree with Linch that this is a tricky question. I also agree with a lot of the specific things he says (though not all). My own brief reply would be: 

  • It seems unclear to me whether you'd have more impact in the short-term and in the long-term if you work at QURI or Rethink next year.
  • Therefore, applying, seeing what happens, and thinking harder about an offer only if and when you get it seems probably worthwhile (assuming you're somewhat interested, of course).
  • I think if you got an offer, I'd see you either accepting or declining it as quite reasonable, and I'd want you to just get the best info you can and then make your own informed decision.
  • I'm pretty excited about QURI's work, so this is a matter of me thinking both options seem pretty great.

(I think this is basically what I'd say regarding someone about whom I know nothing except that they work at QURI in a substantial capacity - not just e.g. intern. I.e., I don't think my knowledge of you specifically alters my response here.)

[This comment is less important, and you may want to skip it]

Some places where my views differ from Linch's comment:

  • In my role with EAIF, ~$750k of grants that I was primary investigator for have been approved. This was from ~10hrs/week of work over ~4 months, i.e. roughly 1/12 of a full-time year, so that's equivalent to (hopefully!) improving the allocation of ~$9m over a year of full-time work. Yet I think I'd go full-time at Rethink rather than continue part-time at EAIF, from next year onwards, if given the choice. This gives some indication of how valuable I think my working at Rethink full-time next year is, which then more weakly indicates (a) what's true and (b) how valuable other people's work at Rethink is. And it seems to suggest something notably higher than the 1-2M range.
    • There are complexities to all of this, which I can get into if people want, but I think that picture ends up roughly correct.
    • But note that this has a lot to do with my long-term career plans (with research management as plan A), rather than just comparing direct impact during 2022 alone.
    • Also note that my dollars moved at EAIF and my hours worked for EAIF have both been above average, I think, partly because I've been keen to take on extra things so I can learn more and because it's fun.
    • Also note that I'm very glad I did this term at EAIF, have strongly recommended other people apply to do a stint at EA Funds, and would definitely consider an extended term at EAIF if offered it (though I think I'd ultimately lean against).
  • Linch's comment could be (mis?)read as equating the value of adding 1 dollar to the pool of EA resources to the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while). But I think he doesn't actually mean that (given that he uses 1-2M in one case and 2.2M in the other). And I think I'd (likewise?) say that the latter is probably better than the former, given that I think EA funders would be keen to spend much faster than they currently are if they had more vetting, ideas, strategic clarity, etc.
    • But I haven't thought much about this, and it probably depends on lots of specifics (e.g. are you "just" guiding a dollar that would be moved anyway to instead be moved to a 10% better opportunity, or are you suggesting a totally new intervention idea that someone can then active grantmake an org into existence to execute, or are you improving strategic clarity sufficiently to unlock money that would otherwise sit around due to fear of downside risks?). Not sure though.
    • I also haven't tried to make any of the estimates Linch tried to make above. But I appreciate him having done so, and acknowledge that that makes it easier to productively disagree with him than with my more vague statements!

Linch's comment could be (mis?)read as equating the value of adding 1 dollar to the pool of EA resources to the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)

For what it's worth, I don't think those two are the same thing, but I usually think of "improving decision quality" in situations that roughly look like: a funder wants to invest $X in Y, we look at the evidence, and we suggest something like

a) best bets are Z charities/interventions in Y
b) this isn't worth investing for ABC reasons, or 
c) ambiguous, more research is needed

I then value that as some percentage improvement on $X in the first two cases, and I usually think of that percentage as less than 100%. Maybe 20-50%*? It depends on funder quality. So I don't think "adding 1 dollar to the pool of EA resources" and "guiding 1 dollar towards a good donation target" are the same thing, but you're implying a >100% improvement, and I usually think improvements are lower, especially in expectation. Keep in mind that there's usually at least one additional grantmaker layer between donors and the people doing direct work, and we have to be careful to avoid double-counting (which I was maybe a bit sloppy about too, but it's worth noting).
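(As a toy illustration of that percentage-improvement framing, with all numbers invented for the example:)

```python
# Toy illustration of the "percentage improvement on decision quality" model.
# The inputs here are invented for illustration, not actual RP estimates.

planned_grant = 10e6   # a funder plans to invest $X = $10M in area Y
improvement = 0.3      # research improves that allocation by ~30% (within 20-50%)

research_value = planned_grant * improvement   # ~$3M of improved allocation
# Note: this is ~$3M of *improved allocation*, not $3M of new EA money, and
# any grantmaker layer in between means credit must be split to avoid
# double-counting.
print(f"~${research_value / 1e6:.1f}M equivalent improvement")
```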

The other thing to note is that this "decision quality" approach might already inflate our importance at least a little (compared to a more natural question candidates might ask, like: at what $X amount should they be indifferent between working for us and earning-to-give for $X?), because it implies that the cause prioritization of EA is already basically reasonable, and I don't actually believe this, in either my research or my other career/life decisions.

A different tack here is a quick sanity check: maybe it has happened a few times before, but I'm not aware of any case where an RP employee was so confident about an intervention/donation opportunity they'd researched that they decided it was a clearly better bet than RP. Obviously there are self-serving reasons/biases for this, but I basically think this is a directionally correct move from the POV of the universe.

* I need to check how much I can share, but 20% is not the lowest number I've seen from other people at RP, at least when I talk about specific intervention reports.

Keep in mind that there's usually at least one additional grantmaker layer between donors and the people doing direct work, and we have to be careful to avoid double-counting (which I was maybe a bit sloppy about too, but it's worth noting).

Yeah, this is a good point that I think I hadn't had saliently in mind, which feels a bit embarrassing. 

I think the important, correct core of what I was saying there is just that "the value of adding 1 dollar to the pool of EA resources" and "the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)" are not necessarily the same, and plausibly actually differ a lot, and also the value of the latter thing will itself differ a lot depending on various specifics. I think it's way less clear which of the two things is bigger and by how much, and I guess I'd now back down from even my tentative claims above and instead mostly shrug.

(EDIT: I realised I should note that I have more reasons behind my originally stated views than I gave, basically related to my EAIF work. But I haven't given those reasons here, and overall my views on this are pretty unstable and not super informed.)

I think the important, correct core of what I was saying there is just that "the value of adding 1 dollar to the pool of EA resources" and "the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)" are not necessarily the same

I agree with this.

I think I agree with everything you said here! Thanks for putting it so succinctly! 

I think this is basically what I'd say regarding someone about whom I know nothing except that they work at QURI in a substantial capacity - not just e.g. intern.

For onlookers, note that QURI has ~2 FTEs or so, so Michael isn't exactly anonymizing a lot.

(I didn't mean just existing QURI staff - I meant like imagining that I'd stopped paying attention to QURI's staff for a year but still knew their work in some sense and knew they had 1-4 people other than Ozzie, or something. I guess you'd have to imagine I knew the output scaled up to match the number of people, and that it seemed to me each non-Ozzie employee was contributing ~equally to the best of my knowledge, and there's probably tricky things around management or seniority levels, but hopefully people get what I'm gesturing at.)

Cheers, and thanks to both you and Michael for going into some depth.

+1. I found this discussion really interesting and am happy it happened publicly. 

Obviously, I like both groups a lot and think they both have different (though overlapping) benefits.

Hmm job applications are closed now, but I think I want to partially retract the

less research flexibility than academia, FHI, or most independent research/blogging

point. 

After a natural resting point in my other research, I felt a bit of intuitive unease about jumping into AI gov immediately. So I've mostly been brainstorming, contacting people about weird ideas, etc. I've done this with the full support of Peter, Michael, and other people at RP. So I think there's more research flexibility at RP for people who want it and can justify it than I previously thought; a bigger question is how much you want it.
 

Fwiw, here's my own personal list of pros and cons of RP relative to other options I could imagine myself pursuing (e.g., working at FHI [full-time and for longer than I currently plan to], CEA, EA Funds, Founders Pledge, think tanks, or academia). 

Though note that this is anchored by Linch's list, I didn't spend very long thinking about it, and pros and cons will differ between people. Also, many of these pros and cons also apply to some of my other options.

Pros

  • Good chance of playing an important role in a team that has a plausible chance of scaling a fair bit per year for several years, while maintaining similar quality and impact per person, and while making good use of people with less experience in research or in EA
  • Impactful decision-relevant research
  • Research on very interesting questions
  • Substantial degree of autonomy, both within my main work (e.g., I can basically decide for myself that I should drop some post I'd been planning to write and write a different post instead) and between my main work and "tangents" (e.g., I can decide to spend some time creating and delivering a workshop to some EA research training program)
  • That autonomy is paired with good feedback, including critical feedback where warranted
    • I'd feel more existential dread about my impact etc., plus actually just have less impact, if just given autonomy without much guidance and feedback.
      • Sometimes I get really excited about things that I shouldn't be that excited about, miss other great ideas, or could take a 20% better angle on something.
      • My manager and others at RP help me correct for these things
  • Good coworkers, Slack workspace, mentorship, and "structures" (e.g., project plan templates)
  • I like writing and publishing/sharing things I've written, and this job lets me do lots of that (in particular, more than grantmaking does)
  • More intellectually stimulating and "deep" feeling than some parts of grantmaking
  • No requirement to publish papers etc.
  • Pretty EA-ish, informal, harmless-weirdness-accepting work culture
  • Extremely flexible schedule (e.g., can easily have my "weekend" on whatever days my partner has off work, which are often weekdays, then work on the weekend)
  • Fully remote (makes it easy for me to join my partner on trips back to Australia and work while there [while she performs at festivals, such that we wouldn't be spending the whole time together anyway])
  • Good pay (well above FHI, and roughly on par with my other likely options)

Cons

  • Many of the other places I could potentially work have their own versions of the above pros, and I don't get those at Rethink
  • Small team so far, so to date I've often not had someone at the org who has expertise in issues I'm working on (particularly for nuclear risk)
    • A notable exception is forecasting, where I've regularly been able to quickly get great input from Rethink people
    • In any case, I'm able to get good "generalist" input from Rethink people and to build lots of connections beyond Rethink to get their input
  • No senior longtermist researchers
  • Decent chance me doing grantmaking would be more directly impactful in the near term than my Rethink work
    • But hard to say, and I think this is outweighed by the fact I'm probably best off aiming for research management long-term
  • Grantmaking involves a more diverse and random array of tasks and faster feedback loops, plus more of a feeling of direct and visible impact, plus the impact often involves identifiable humans (the grantees) being happy and helped and grateful. That can all be fun and mildly addictive
  • Fully remote
    • I'm pretty ok with this, but there are perks to e.g. being at Trajan House (where FHI and many other EA orgs are located)
    • OTOH, I'm currently often working at Trajan House anyway, and expect to often do so next year anyway

Also, this isn't specific to the longtermism team, but Rethink Priorities is hiring for a Communications Coordinator! You can help our researchers and managers save time and leverage better connections by communicating with stakeholders, including other researchers, key decision-makers, and donors. Also, help us write better job ads!

A good candidate for this role might have, or will soon develop, not just a comparative advantage over some of our researchers at communications, but an absolute advantage as well. If you ever think to yourself, "man, this Linch guy sounds really dumb on the internet, I hope he's not representing anything important," well, a) tough luck and b) this could change! Now's your chance to help make me sound less dumb!

Somebody in DMs asked:

I guess what I really want to do is technical AI alignment research and my guess is that's not what you're hiring for

My answer: Yep! I think it makes a lot of sense for some people to do technical AI alignment research and we're not the best home for it, at least for the near future. 

Someone on Twitter asks:

are there any knowledge gaps that you think would be interesting to explore, but no one has the time?

My response: Pretty much all the time! I think if we recast the research arm of EA as "the science of doing good, construed broadly" it should be pretty obvious that there are many more research questions than we have the dedicated research hours to answer.

I wanted to push back a bit on the "cons" part

You wrote:

Cons

  • less research flexibility than academia, FHI, or most independent research/blogging
  • low interest in publishing papers/academia

I’m not sure if this was meant to reflect only the long-termist team or RP as a whole. I think these points are possibly too strongly stated.

In the Google doc you had some caveats to this. There are people at RP, and people joining RP, who are moderately to very active in academic publication. Of course we don't place a great deal of importance on “hitting publication targets as points” the way academia does. However, my impression is that we do place a high value on some of the gains from the peer review and “publication” process, such as:

  • Credibility and prestige for our work and our employees, and the influence generated by this
  • Feedback and suggestions that improve our work
  • Visibility of our work to key audiences
  • Attracting academics and scholars to RP and to the issues and topics we care about

(This is why I'm looking for/pursuing a solution that allows us to have these gains without the wasteful and tedious parts of the academic publication process)

In terms of the 'research flexibility' point: RP is arguably less flexible in one dimension, but more flexible in another. There are some types of research (and 'impact of research') that you can do at RP that you cannot really do in academia, or at least that you would be doing 'on your own time and at your own risk'.

In academia you may be constrained to produce 'research that academics think adds to the theoretical model' or that is 'trendy and will publish well'. You cannot necessarily survive in academia if you pursue

  • research where you apply an existing model and technique without making it more complicated in an 'interesting way'
  • meta-analysis, Fermi estimates, dissemination and translation of existing research
  • continuing to build and improve your model and estimates in ways that practically inform the policy goals of interest
  • applying research to the world's most important problems
  • new initiatives, programs, and policy ideas that are 'doing something' rather than 'asking a deeply interesting question within an academic paradigm'

It might be more possible to do the above sorts of research and research-adjacent work at RP than in academia.

Ah, the EA Forum appends itself to the beginning of URLs if you don't include the HTTP prefix.

It's https://bit.ly/unjournal

But I'm trying to move the project into an 'action space' gitbook wiki HERE: https://app.gitbook.com/o/-MfFk4CTSGwVOPkwnRgx/s/-MkORcaM5xGxmrnczq25/

to start to get this in motion

Speaking just about the publishing point: when I was trying to leverage my old network to help recruit for our past global health & development hiring rounds, it was definitely the case that development economists (who have a robust pre-existing non-EA academic field) viewed the weaker incentives within the org to publish academic papers as a noticeable negative.

It's possible this is not as relevant for the longtermist roles, as a) I expect our candidate pool to on average be more EA and b) the academic fields for longtermist topics, outside of a few narrow subniches in existing disciplines, are noticeably less robust. 

One plus I forgot to mention for academic applicants is that in addition to the minimal bureaucracy, we (obviously) have no mandatory teaching load. Researchers who want to "teach" can have interns or do mentorship calls with junior applicants. 

I see that you and MichaelA both see low academic incentives or inclinations as a plus, rather than a cost to be paid. For me personally, this is all a moot point as I'm reasonably confident (especially now) that I can perform at the level of e.g. PhDs at top universities at doing RP-style work, whereas I think I will not be able to perform at that level in academia.

Yeah, fwiw, I personally feel I've had a lot of flexibility at Rethink (though I'll probably have somewhat less going forward, partly because I myself think that that'd be better), and I personally see "low interest in publishing papers/academia" as either irrelevant or mildly a pro for me personally. Though I do expect both of those things would vary between people. 

This has inspired me to write another comment with my own personal list of pros and cons of working at RP. 

Speaking just about the publishing point: when I was trying to leverage my old network to help recruit for our past global health & development hiring rounds, it was definitely the case that development economists (who have a robust pre-existing non-EA academic field) viewed the weaker incentives within the org to publish academic papers as a noticeable negative.

Less internal incentive, true. But I'm not sure that they have so much less residual time to engage in the important parts of research that could lead to academic publications (or academic-quality work), vis-à-vis academia itself.

Academics tend to work round the clock, and have important teaching and administrative responsibilities that can also be very distracting from research. I think you probably could pursue an academic-level publication agenda at RP, at least as well as in many university academic jobs.

Your FAQ contains the sentence:

RP is a fully remote organization and very dedicated to being flexible to a large range of possible researcher and life priorities. We can legally hire from most places that candidates are likely to be from.

But my understanding is that hiring people internationally ranges from tricky to very tricky. Can you double-check with your operations team that this is still the case?

It’s definitely the case that we can hire people in most countries (though some countries have additional considerations we have to account for, like whether the person has working hours that will overlap with their manager, some financial / logistical constraints, etc), and we are happy to review any candidate’s specific questions about their particular location on a case by case basis if folks want to reach out to info@rethinkpriorities.org. For reference, we currently have staff in the US, Canada, Mexico, Spain, UK, Switzerland, Germany, and New Zealand.

Question that came up 2x:

I was wondering if RP has any plans (that you can share) to offer any more internships/fellowships in longtermism in the next year.

My response:

We will probably have summer fellowships on the longtermism team, but both the fellowship program and our team are quite young and we can't promise good fellowships for summer just yet. This depends on our priorities, management/general capacity, and how good our projects are.

Question that came up 3x+ in Twitter/texts/emails:

Can you go on a call with me to discuss whether this job is a good fit for me?

My response:

As I'm partially responsible for hiring for this round, our internal policy is that I shouldn't have calls with potential candidates for this role about the job, as there's a risk that I will accidentally leak information about our hiring criteria in a private call, and this will be unfair for candidates who aren't in my network. 

Please let me know if you think this policy is bad; we're constantly re-evaluating our policies to balance various factors, including efficiency and fairness.

One thing I'd add: It seems likely to me that if a call is worth the person's time + Linch's time, then it's actually a better move to just apply. The initial application stage takes 2 hours or less, and people will learn through the application process, and at each stage a substantial fraction of people don't proceed (cutting down the total time cost and making the question "Is this job a good fit for me?" less relevant to ask). If a 30 min call is worthwhile, it seems like the conclusion will probably be "may as well apply", in which case I'd just skip to "may as well apply".

I'm still very happy for people to reach out to me, but mainly so I can encourage them to go ahead and apply! (Assuming they indeed think a 30 min call would be worth their time. I obviously don't think every single person in EA should apply.)

Also, to be clear, I don't say this just for Rethink specifically or for orgs I work at specifically. E.g., yesterday I said roughly the same thing to someone who was interested in working at the Center on Long-Term Risk and wanted to know what working there was like for me - I encouraged them to probably just apply to work there instead of spending more time thinking about whether they'd want to accept an offer in the unlikely event they receive one, and to focus our call on other things. 

(I think one could actually check whether that argument is sound by laying out the various time costs, learning benefits, and probabilities. I haven't done this, but I'd guess it's basically correct.)

+1 to thinking one should just apply. Related: I think it's usually still useful to hear the perspectives of more rather than fewer potential coworkers, and I wonder if it makes sense for orgs to have a "This is what our employees say it's like to work here" document. I would've read that before applying, but I wouldn't have reached out about it. Though I guess learning about the perspectives of your potential co-workers is a nice way to get to know them once you're far enough into the application process?

fwiw, I do think it seems good to have publicly available info (not just via 1-1s) regarding what various people at the org think it's like to work there,  what a "day in the life" is like, etc. More generally, it seems good for people to write job profiles and "typical workday" writeups. Though I'm unsure how useful those things are and whether RP should prioritise producing more such things. 

Some things RP already has that partially cover those needs:

Thanks for the pointers, I hadn't seen Lizka's shortform and found it very useful for my evaluation of working at RP.

  • fully remote work (I miss chatting with coworkers irl)
  • no senior longtermist researchers

Going into problem-solving mode here, but have you considered setting up physical office space somewhere (SF/DC/Ox/Lon)? Trying to share office space with other longtermist researchers?

The short answer is that I don't actually miss having coworkers more than I like having no commute etc. 

Though I've considered WeWorking with Ozzie from QURI and others, I never got around to it.

Something that makes coordination a bit harder is that the Lightcone team in Berkeley has set up a pretty good space for AI alignment people so many of the people I might want to cowork with already have a really good BATNA. 

I think this might change if RP hires longtermists from the Bay Area and we can have a nucleus of actual coworkers here; alternatively, if people congregate in Oxford, I might move to join Michael A et al.

Some things to add to Linch's comment:

  • I personally didn't mind being fully remote for a while when I was in Australia, though I do think many/most people benefit somewhat from in-person (holding other factors constant, e.g. ignoring that some people can't/won't move and that commuting sucks).
  • In light of the imperfections of remote work, Rethink recently introduced:
    • "a stipend of $2000 annually per staff member to staff to work in the same location as other staff. This can be coordinated however you please (in a reasonably cost minimizing way), and can cover travel, lodging, and food for the trip. To use the benefit, you can coordinate with folks, then get approval from your manager. Then just request [staff member] to purchase flights, etc. or submit a reimbursement request.
      Please take all appropriate COVID precautions (ensuring everyone is vaccinated, getting rapid tests if available)."
    • (Copied from a Slack message)
  • I think Rethink usually have one or two in-person staff retreats a year, though this has been disrupted by COVID and I only joined last November, so other people would know more. (This year we'll have remote "retreats" instead, due to COVID.)
  • People at Rethink definitely can cowork with other Rethink people or EAs if they want, and this does happen.
    • E.g., the moral weight team travel to spend a day or so together every few months, or something like that.
    • E.g., some of the ops people seem to visit each other sometimes, as multiple of them are located around Philadelphia.
    • E.g., I work at Trajan House (the building that contains FHI, GPI, GovAI, CEA, Forethought, and often other EA stuff) because I work part-time with FHI as well.
      • One of my interns also did their Rethink work from Trajan House, aided by being a ~5hr/week contractor for CEA (but they were clearly at Trajan for much more than 5hr/week).
      • I'm very confident I'll be able to continue working at Trajan House after I leave FHI (though not yet sure if I'll be able to get a space in an office or "just" in a coworking space), at least for a while.
      • I feel fairly confident that other Rethink employees who wanted to work at Trajan House could do so as well, at least for a while.
    • As Linch indicates, I think this will probably become more common as Rethink grows, especially if that growth turns out to be somewhat concentrated in some places (most likely Oxbridge and/or the Bay Area in the case of the longtermism team)