
Hi all,

We're the staff at Rethink Priorities and we would like you to Ask Us Anything! We'll be answering all questions starting Friday, November 19.

About the Org

Rethink Priorities is an EA research organization focused on helping improve decisions among funders and key decision-makers within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all of our work to date here.

Over the next few years, we’re expanding our farmed animal welfare and moral weight research programs, launching an AI governance and strategy research program, and continuing to grow our new global health and development wing (including evaluating climate change interventions).

Team

You can find bios of our team members here. Links on names below go to RP publications by the author (if any are publicly available at this point).

Leadership

  • Marcus Davis — Co-CEO — Focus on animal welfare and operations
  • Peter Wildeford — Co-CEO — Focus on longtermism, global health and development, surveys, and EA movement research

Animal Welfare

  • Dr. Kim Cuddington — Senior Ecologist — Wild animal welfare
  • Dr. William McAuliffe — Senior Research Manager — Wild animal welfare, farmed animal welfare
  • Jacob Peacock — Senior Research Manager — Farmed animal welfare
  • Dr. Jason Schukraft — Senior Research Manager — Moral weight, global health and development
  • Daniela Waldhorn — Senior Research Manager — Invertebrate welfare, farmed animal welfare
  • Dr. Neil Dullaghan — Senior Researcher — Farmed animal welfare
  • Dr. Samara Mendez — Senior Researcher — Farmed animal welfare
  • Saulius Šimčikas — Senior Researcher — Farmed animal welfare
  • Meghan Barrett — Entomology Specialist — Invertebrate welfare
  • Dr. Holly Elmore — Researcher — Wild animal welfare
  • Michael St. Jules — Associate Researcher — Farmed animal welfare

Longtermism

  • Michael Aird — Researcher — Nuclear war, AI governance and strategy
  • Linch Zhang — Researcher — Forecasting, AI governance and strategy

Surveys and EA movement research

  • David Moss — Principal Research Director — Surveys and EA movement research
  • Dr. David Reinstein — Senior Economist — EA Survey, effective giving research
  • Dr. Jamie Elsey — Senior Behavioral Scientist — Surveys
  • Dr. Willem Sleegers — Senior Behavioral Scientist — Surveys

Global Health and Development

  • Dr. Greer Gosnell — Senior Environmental Economist — Climate change, global health interventions
  • Ruby Dickson — Researcher — Global health interventions
  • Jenny Kudymowa — Researcher — Global health interventions
  • Bruce Tsai — Researcher — Climate change, global health interventions

Operations

  • Abraham Rowe — COO — Operations, finance, HR, development, communications
  • Janique Behman — Director of Development — Development, communications
  • Dr. Dominika Krupocin — Senior People and Culture Coordinator — HR
  • Carolina Salazar — Project and Hiring Manager — HR, project management
  • Romina Giel — Operations Associate — Operations, finance

Ask Us Anything

Please ask us anything — about the org and how we operate, about the staff, about our research… anything!

You can read more about us in our 2021 Impact and 2022 Strategy update or visit our website: rethinkpriorities.org.

If you're interested in hearing more, please subscribe to our newsletter.

Also, we’re currently raising funds to continue growing in 2022. We consider ourselves funding constrained — we continue to get far more qualified applicants to our roles than we are able to hire, and have scalable infrastructure to support far more research. We accept and track restricted funds by cause area if that is of interest.

If you'd like to support our work, visit https://www.rethinkpriorities.org/donate, give on Giving Tuesday via Facebook to potentially secure matching funds, or email Janique Behman at janique@rethinkpriorities.org.

We'll be answering all questions starting Friday, November 19.


In your yearly report you mention:

Rethink Priorities has been trusted by EA Funds and Open Philanthropy to start new projects (e.g., on capacity for welfare of different animal species) and open entire new departments (such as AI governance).

These and other large organizations often only fund 25–50% of our needs in any particular area because they trust our ability to find other sources of funding. Therefore we rely on a broad range of individual donors to continue our work.

This surprised me, because I fairly often hear the advice of "donate to EA Funds" as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?

I think donating to the EA Funds is a very good thing to do, but I don't think every donor should do this. I think for donors who have the time and personal fit, it would be good to do some direct donations on your own and support organizations to help those organizations hedge against idiosyncratic risk from particular funders and help give them more individual support (which matters for showing proof to other funders and also matters for some IRS stuff).

I don't think any one funder likes to fund the entirety of an organization's budget, especially when that budget is large. But between the different institutional funders (EA Funds, Survival and Flourishing Fund, OpenPhil, etc.), I still think there is a strong (but not guaranteed) chance we will be funded (at least enough to meet somewhere between our "Low" and "High" budget amounts). Though if everyone assumed we were not funding constrained, then we definitely would be.

My other pitch is that I'd like RP, as an organization, to have some direct financial incentive and accountability to the EA community as a whole, above and beyond our specific institutional funders who have specific desires and fund us for specific reasons that ... (read more)

A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you've hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?

So there are a bunch of questions in this, but I can answer some of the ops related one:

  • We haven't had ops talent bottlenecks. We've had incredibly competitive operations hiring rounds (e.g. in our most recent hiring round, ~200 applications, of which ~150 were qualified at least on paper), and I'd guess that 80%+ of our finalists are at least familiar with EA (which I don't think is a necessary requirement, but it does suggest the explanation isn't that we are recruiting from a different pool).
    • Maybe there was a bigger bottleneck in ~2018 and EA has grown a lot since or reached people with more ops skills since?
    • We spend a lot of time and resources on recruiting, and advertise our jobs really widely, so maybe we are reaching a lot more potential candidates than some other organizations were?
  • Management bottlenecks are probably our biggest current people-related constraint on growth (funding is a bigger constraint).
    • We've worked a lot on addressing this over the summer, partially by having a huge internship program and giving a lot of current staff management experience (while also working with awesome interns on cool projects!), and sending anyone who wants it through basic managemen
... (read more)

Here's some parts of my personal take (which overlaps with what Abraham said):

I think we ourselves feel a bit unsure "why we're special", i.e. why it seems there aren't very many other EA-aligned orgs scaling this rapidly & gracefully.

But my guess is that some of the main factors are:

  • We want to scale rapidly & gracefully
    • Some orgs have a more niche purpose that doesn't really require scaling, or may be led by people who are more skilled and excited about their object-level work than about org strategy, scaling, management, etc.
  • RP thinks strategically about how to scale rapidly & gracefully, including thinking ahead about what RP will need later and what might break by default
    • Three of the examples I often give are ones Abraham mentioned:
      • Realising RP will be management capacity constrained, and that it would therefore be valuable to give our researchers management experience (so they can see how much they like it & get better at it), and that this pushes in favour of running a large internship with 1-1 management of the interns
        • (This definitely wasn't the only motivation for running the internship, but I think it was one of the main ones, though that's partly guessin
... (read more)

I have private information (e.g. from senior people at Rethink Priorities and former colleagues) that suggests operations ability at RP is unusually high. They say that Abraham Rowe, COO, is unusually good.

The reason why this comment is useful is that:

  • This high operations ability might be hard to observe from the inside if you are the person (Rowe) who is really good. Also, high-ability operations people may be attracted to a place where things run well and operations is respected. There may be other founder effects from Rowe. This might add nuance to Rowe's comment.
  • It seems possible operations talent was (is) limited or undervalued in EA. Maybe RP's success is related to operations ability (allows management to focus, increases org-wide happiness and confidence).

I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that's probably helped us maintain a high quality of output as we grow.

I also think a huge part of RP's success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.

7
MichaelDickens
To the extent that you think good operations can emerge out of replicable processes rather than singularly talented ops managers, do you think it would be useful to write a longer article about how RP does operations? (Or perhaps you've already written this and I missed it)
2
abrahamrowe
This potentially sounds useful, and I can definitely write about it at some point (though no promises on when just due to time constraints right now).

I definitely think that we are very lucky to have Abraham working with us. I think another thing is that there are at least three people (Abraham, Marcus, and me - and probably others too, if given the chance) each capable of founding and running an organization, all focused instead on making just one organization really great and big.

I definitely think having Abraham be able to fully handle operations allows Marcus and me to focus nearly entirely on driving our research quality, which is a good thing. Marcus and I also have clear subfocuses (Marcus does animals and global health / development, whereas I focus on longtermism, surveys, and EA movement building) which allow us to further focus our time specifically on making things great.

6
MichaelA
This comment sounds like it's partly implying "RP seems to have recently overcome these bottlenecks. How? Does that imply the bottlenecks are in general smaller now than they were then?" I think the situation is more like "The bottlenecks were there back then and still are now. RP was doing unusually well at overcoming the bottlenecks then and still is now." The rest of this comment says a bit more on that front, but doesn't really directly answer your question. I do have some thoughts that are more like direct answers, but other people at RP are better placed to comment so I'll wait till they do so and then maybe add a couple things.

(Note that I focus mostly on longtermism and EA meta; maybe I'd say different things if I focused more on other cause areas.)

----------------------------------------

In late 2020, I was given three quite exciting job offers, and ultimately chose to go with a combo of the offer from RP and the offer from FHI, with Plan A being to then leave FHI after ~1 year to be a full-time RP employee. (I was upfront with everyone about this plan. I can explain the reasoning more if people are interested.)

The single biggest reason I prioritised RP was that I believe the following three things:

  1. "EA indeed seems most constrained by things like 'management capacity' and 'org capacity' (see e.g. the various things linked to from scalably using labor).
  2. I seem well-suited to eventually helping address that via things like doing research management.
  3. RP seems unusually good at bypassing these bottlenecks and scaling fairly rapidly while maintaining high quality standards, and I could help it continue to do so."

I continue to think that those things were true then and still are now (and so still have the same Plan A & turn down other exciting opportunities).

That said, the picture regarding the bottlenecks is a bit complicated. In brief, I think that:

  • The EA community overall has made more progress than I expected at increasing th

To what extent do you think a greater number of organisations conducting similar research to RP would be useful for promoting healthy dialogue, compared to having one specialist organisation in a field that is the go-to for certain questions?

I'll let Peter/Marcus/others give the organizational answer, but speaking for myself I'm pretty bullish about having more RP-like organizations. I think there are a number of good reasons for having more orgs like RP (or somewhat different from us), and these reasons are stronger at first glance than the reasons for consolidation (e.g. reduced communication overhead, PR).

  1. The EA movement has a strong appetite for research consultancy work, and RP is far from sufficient for meeting all the needs of the movement. 
  2. RP clones situated slightly differently can be helpful in allowing the EA movement to unlock more talent than RP will be able to.
    1. For example, we are a remote-first/remote-only organization, which in theory means we can hire talent from anywhere. But in practice, many people may prefer working in an in-person org, so an RP clone with a physical location may unlock talent that RP is unable to productively use.
  3. We have a particular hiring bar. It's plausible to me that having a noticeably higher or lower hiring bar can result in a more cost-effective organization than us. 
    1. For example, having a higher hiring bar may allow you to create a small tight-knit group of superge
... (read more)

Also, I have a strong suspicion that a lot of needed research work in EA "just isn't that hard" and if it's done by less competent people, this frees up other EA researchers to do more important work.

I agree with that suspicion, especially if we include things like "Just collect a bunch of stuff in one place" or "Just summarise some stuff" as "research". I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: "I'm addicted to creating collections"). It basically always feels like (a) a lot of other people could've done what I'm doing and (b) it's kinda crazy no one had yet. I also sometimes don't have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things slows down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically "research" outputs than collections or summaries are.

But I want to push back on "this frees up other EA researchers to do more important work". I think you probably mean "this frees up ot... (read more)

Strongly agree with this. While I was working on LEAN and the EA Hub I felt that there were a lot of very necessary and valuable things to do, that nobody wanted to do (or fund) because they seemed too easy. But a lot of value is lost, and important things are undermined if everyone turns their noses up at simple tasks. I'm really glad that since then CEA has significantly built up their local group support. But it's a perennial pitfall to watch out for.

6
Linch
I think this is probably true. One thing to flag here is that people's counterfactuals are not necessarily in research. One belief that I recently updated towards but haven't fully incorporated into my decision-making is that for a non-trivial subset of EAs in prominent org positions (particularly STEM-trained, risk-neutral Americans with elite networks), counterfactuals might be more like expected E2G earnings in the mid-7 figures or so* than the low- to mid-6 figures I was previously assuming.

*To be clear, almost all of this EV is in the high-upside things; very few people make 7 figures working jobby jobs.
2
MichaelA
I agree on all points (except the nit-pick in my other comment). A couple things I'd add:

  • I think this thread could be misread as "Should RP grow a bunch but no similar orgs be set up, or should RP grow less but other similar orgs be set up?"
    • If that was the question, I wouldn't actually be sure what the best answer would be - I think it'd be necessary to look at the specifics, e.g. what are the other org's specific plans, who are their founders, etc.?
    • Another tricky question would be something like "Should [specific person] join RP with an eye to helping it scale further, join some org that's not on as much of a growth trajectory and try to get it onto one, or start a new org aiming to be somewhat RP-like?" Any of those three options could be best depending on the person and on other specifics.
    • But what I'm more confident of is that, in addition to RP growing a bunch, there should also be various new things that are very/somewhat/mildly RP-like.
  • Somewhat relatedly, I'd guess that "reduced communication" and "PR" aren't the main arguments in favour of prioritising growing existing good orgs over creating new ones or growing small potentially good ones. (I'm guessing you (Linch) would agree; I'm just aiming to counter a possible inference.)
    • Other stronger arguments (in my view) include that past performance is a pretty good indicator of future performance (despite the protestation of a legion of disclaimers) and that there's substantial fixed costs to creating each new org.
    • See also this interesting comment thread.
    • But again, ultimately I do think there should be more new RP-like orgs being started (if started by fitting people with access to good advisors etc.)
5
MichaelA
One other thing I'd add to Linch's comments, adapting something I wrote in another comment in this AMA: If anyone feels like maybe they're the right sort of person to (co-)found a new RP-like org, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully.  Some evidence that I really am keen on this is that I've spent probably ~10 hours of my free time over the last few months helping a particular person work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. (Though that was an unusual case and I'd usually just quickly offer my highest-value input.)
4
Linch
Quick clarifying question: is  referring to RP, or more field-specific organizations like e.g. CSET or an (AFAIK, hypothetical) organization focused on answering questions on medical approaches to existential biosecurity? Put another way, is your question asking about larger RP vs RP + several RP clones, or RP + several RP clones vs. RP + several specialist organizations?
3
JamesÖz
Thanks for the clarifying question. I meant larger RP vs RP + several RP clones (basically new EA research orgs that do cause/intervention/strategy prioritisation). The case of larger RP vs RP + several specialist organisations is also interesting though - slightly analogous to the scenario of 80K and Animal Advocacy Careers. I wonder, in a hypothetical world where 80K was more focused on animal welfare, would/should they refer all animal-interested people to AAC, as AAC has greater domain expertise, or should they advise some animal people themselves, as they bring a slightly different lens to the issue? The relevant comparison might be RP and Wild Animal Initiative, for example.

Do you also feel funding constrained in the longtermist portion of your work? (Conventional wisdom is that neartermist causes are more funding constrained than longtermist ones.)

Mostly yes. It definitely is the case that, if we were given more cash than we already have, we could meaningfully accelerate our longtermism team in a way that we cannot with our current funds. Thus funding is still an important constraint on scaling our work, in addition to some other important constraints.

However, I am moderately confident that between the existing institutional funders (OpenPhil, Survival and Flourishing Fund, Long-Term Future Fund, Longview, and others) we could meet a lot of our funding request - we just haven't asked yet. But (1) it's not a guarantee that this would go well, so we'd still appreciate money from other sources, (2) it would be good to add some diversity from these sources, (3) money from other sources could help us spend less time fundraising and more time accelerating our longtermism plans, (4) more funding sooner could help us expand sooner and with more certainty, and (5) it's likely we could still spend more money than these sources would give.

This comment matches my view (perhaps unsurprisingly!). 

One thing I'd add: I think Peter is basically talking about our "Longtermism Department". We also have a "Surveys and EA Movement Research Department". And I feel confident they could do a bunch of additional high-value longtermist work if given more funding. And donors could provide funding restricted to just longtermist survey projects or even just specific longtermist survey projects (either commissioning a specific project or funding a specific idea we already have).

(I feel like I should add a conflict of interest statement that I work at RP, but I guess that should be obvious enough from context! And conversely I should mention that I don't work in the survey department, haven't met them in-person, and decided of my own volition to write this comment because I really do think this seems like probably a good donation target.)

Here are some claims that feed into my conclusion:

  • Funding constraints: My impression is that that department is more funding constrained than the longtermism department
    • (To be clear, I'm not saying the longtermism department isn't at all funding constrained, nor that that single factor guarantees t
... (read more)

Assume you had uncapped funding to hire staff at RP from now on. In such a scenario, how many more staff would you expect RP to have in 5 years from now? How much more funding would you expect to attract? Would you sustain your level of impact per dollar? 

For instance, is it the case that you think that RP could be 2x as large in five years and do 3x as much funded work at a 1.5x current impact per dollar? Or a very different trajectory?

I ask as an attempt to gauge your perception of the potential growth of RP and this sector of EA more generally.  

It’s been hard for me to make five year plans, given that we’re currently only a little less than four years old and the growth between 2018 when we started and now has already been very hard to anticipate in advance!

I do think that RP could be 2x as large in five years. I’m actually optimistic that we could double in 2-3 years!

I’m less sure about how much funded work we’d do - actually I’m not sure what you mean by funded work, do you mean work directly commissioned by stakeholders as opposed to us doing work we proactively identify?

I’m also less sure about impact per dollar. We’ve found this to be very difficult to track and quantify precisely. Perhaps as 80,000 Hours talks about “impact-adjusted career changes”, we might want to talk about “impact-adjusted decision changes” - and I’d be keen to generate more of those, even after adjusting for our growth in staff and funding. I think we’ve learned a lot more about how to unlock impact from our work and I think also there will have been more time for our past work to bear fruit.

8
Linch
One additional point I'll note is that most (though not all) of our impact comes from having a multiplier effect on the EA movement. Unlike, say, a charity distributing bednets, or an academic trying to answer ML questions in AI safety, our impact is inherently tied to the impact of EA overall. So an important way we'll have a greater impact per dollar (without making many changes ourselves) is via the movement growing a lot in quantity, quality, or both. Put another way, RP is trying to have a multiplier effect on the EA movement, but multiplication is less valuable than addition if the base is low. A third way in which we rely on the EA movement (the second one is money) is that almost all of our hires come from EA, so if EA outreach to research talent dries up (or decreases in quality), we'd have a harder time finding competent hires.
1
PeterSlattery
Thanks, that's exciting to hear!  For funded work, I wanted to know how much funding you expect to receive to do work for stakeholders.  

This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.

But in terms of just pure contracted work, in 2021 so far, we've received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.

In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we'd see growth in donations and grants too.

How valuable do you think your research to date has been? Which few pieces of your research to date have been highest-impact? What has surprised you or been noteworthy about the impact of your research?

3
Peter Wildeford
I think we cover this in our 2021 Impact and 2022 Strategy update!

By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.

Do you have any insights that explain your success and quality, especially that might inform other organizations or founders?

Alternatively, is your success due to intrinsically high founder quality, which is harder to explain?

By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.

Thanks Charles for your unprompted, sincere, honest, and level-headed assessment. 

Your check will be in the mail in 3-7 business days. 

2
Charles He
Yes, thank you, kind sir.

Thanks for the question and the kind words. However, I don't think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don't think any of them are particularly novel:

  • We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
  • We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
  • We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
  • We try to make RP a genuinely pleasant place to work for everyone on our staff.

As to your ideas about the possibility of RP's success being high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it's difficult for me to say that we have any extraordinary skills others don't possess. I've met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it's some combination of these traits, luck, and good timing that has led us to be where we are today.

8
MichaelA
(This other comment of mine is also relevant here, i.e. if answering these questions quickly I'd say roughly what I said there. Also keen to see what other RP people say - I think these are good questions.)

What are the top 2-3 issues Rethink Priorities is facing that prevent you from achieving your goals? What are you currently doing to work on these issues?

I think that to better achieve our goals, we need Rethink Priorities to be bigger and more efficient.

I think the relevant constraints for "why aren't we bigger?" are:

(1): sufficient number of talented researchers that we can hire

(2): sufficient number of useful research questions we can tackle

(3): ability to ensure each employee has a positive and productive experience (basically, people management constraints and project management constraints)

(4): ops capacity - ensuring our ops team is large enough to support the team

(5): Ops and culture throughput - giving the ops team enough time to onboard people (regardless of ops team size), and giving people enough time to adapt to the org's growth... that is, even if we were otherwise unconstrained, I still think we can't just 10x in one year because that would just feel too ludicrous

(6): proof/traction (to both ourselves and to our external stakeholders/funders) that we are on the right path and "deserve" to scale (this also just takes time)

(7): money to pay for all of the above

~

It doesn't look like (1) or (2) will constrain us anytime soon.

My guess is that (3) is our current most important constraint, but that we are working on it by experimenting with... (read more)

1
Madhav Malhotra
This is very well-communicated! Thank you for taking the time to type all that out and label the responses :-)

Regarding (3) - making each employee happy and productive:

Are there any examples of organisations that you aspire to model RP's practices after? I.e., exemplars of how to "be bigger and more efficient" while making each employee happy and productive?

*I ask because I'd love to learn about real-life management cultures/tools to grow my skillset :-)
1
Janique
I've seen Peter, our Co-CEO, highlight Netflix culture as something that inspired him: https://jobs.netflix.com/culture  
5
Peter Wildeford
I'd clarify that I was inspired by that particular document - especially for the large degree of employee ownership - but I'm much less inspired by the culture at Netflix as it is actually practiced, based on what I hear from some employees.

What lessons would you pass onto other EA orgs from running an internship program?

Thanks so much for this question!

We have learned a lot during our Fellowship/Internship Program. Several main considerations come to mind when thinking about running a fellowship/internship program.

  • Managers’ capacity and preparedness – hosting a fellow/intern can be a rewarding experience. However, working with fellows/interns is also time-consuming. It is important to keep in mind that managers may need to dedicate a portion of their time to:
    • Prepare for their fellows/interns’ arrival, which may include drafting a work plan, thinking about goals for their supervisees, and establishing a plan B, in case something unexpected comes up (for example, data is delayed, and the analysis cannot take place)
    • Explain tasks/projects, help set goals, and brainstorm ideas on how to achieve these goals
    • Regularly meet with their fellows/interns to check in, monitor progress, as well as provide feedback and overall support/guidance throughout the program
    • Help fellows/interns socialize and interact with others to make them feel included, welcomed, and a part of the team/organization.
  • Operations team capacity and preparedness – there are many different tasks associated with each stage of the fell
... (read more)

Two things I'd add to the above answer (which I agree with):

  • RP surveyed both interns and their managers at the end of the program, which provided a bunch of useful takeaways for future internships. (Many of which are detailed or idiosyncratic and so will be useful to us but aren't in the above reply.) I'd say other internship programs should do the same.
    • I'd personally also suggest surveying the interns and maybe managers at the start of the internship to get a "baseline" measure of things like interns' clarity on their career plans and managers' perceived management skills, then asking similar questions at the end, so that you can later see how much the internship program benefitted those things. Of course this should be tailored to the goals of a particular program.
  • What lessons we should pass on to other orgs / research training programs will vary based on the type of org, type of program, cause area focus, and various other details. If someone is actually running or seriously considering running a relevant program and would be interested in lessons from RP's experience, I'd suggest they reach out! I'd be happy to chat, and I imagine other RP people might too.
3
MichaelA
Good question! Please enjoy me not answering it and instead lightly adapting an email I sent to someone who was interested in running an EA-aligned research training program, since you or people interested in your question might find this a bit useful. (Hopefully someone else from RP will more directly answer the question.) "Cool that you're interested in doing this kind of project :) I'd encourage you to join the EA Research Training Program Slack workspace and share your plans and key uncertainties there to get input from other people who are organizing or hoping to organize research training programs. [This is open only to people organizing or seriously considering organizing such programs; readers should message me if they'd like a link.] You could also perhaps look for people who've introduced themselves there and who it might be especially useful to talk to. Resources from one of the pinned posts in that Slack: You might also find these things useful:  * Michael's quick notes on RP's internship, RP's processes, how RP picks research projects, etc. [for SERI etc.] * Collection of collections of resources relevant to (research) management, mentorship, training, etc. * Improving the EA-aligned research pipeline I'd also encourage you to seriously consider applying for funding, doing so sooner than you might by default, and maybe even applying for a small amount of funding to pay for your time further planning this stuff (if that'd be helpful). Basically, I think people underestimate the extent to which EA Funds are ok with unpolished applications, with discussing and advising on ideas with applicants after the application is submitted, and with providing "planning grants". (I haven't read anything about your plans and so am not saying I'm confident you'll get funding, but applying is very often worthwhile in expectation.) More info here: * https://forum.effectivealtruism.org/posts/DqwxrdyQxcMQ8P2rD/list-of-ea-funding-opportunities * https://forum.eff

Why do you have the distribution of focus on health/development vs animals vs longtermism vs meta-stuff that you do? How do you feel about it? What might make you change this distribution, or add or remove priority areas?

8
Marcus_A_Davis
Thanks for the question! I think describing the current state will hint at a lot about what might make us change the distribution, so I'm primarily going to focus on that. I think the current distribution of what we work on depends on a number of factors, including but not limited to: 1. What we think about research opportunities in each space 2. What we think about the opportunity to exert meaningful influence in the space 3. Funding opportunities 4. Our ability to hire people In a sense, I think we're cause neutral in that we'd be happy to work on any cause provided good opportunities arise to do so. We do have opinions on high-level cause prioritization (though I know there's some disagreement inside RP about this topic), but given the changing marginal value of additional work in any given area, the above considerations, and others, we meld our work (and staff) to where we think we can have the highest impact. In general, though this is fairly generic and high level, were we to come to think our work in a given area wasn't useful, or that the opportunity cost of continuing it was too high, we would decide to pursue other things. Similarly, if the reverse were true for some particular possible projects we weren't working on, we would take them on.
4
Zach Stein-Perlman
Thanks for your reply. I think (1) and (2) are doing a ton of work — they largely determine whether expected marginal research is astronomically important or not. So I'll ask a more pointed follow-up: Why does RP think it has reason to spend significant resources on both shorttermist and longtermist issues (or is this misleading; e.g., do all of your unrestricted funds go to just one)? What are your "opinions on high level cause prioritization" and the "disagreement inside RP about this topic"? What would make RP focus more exclusively on either short-term or long-term issues?
8
MichaelA
[This is not at all an organizational view; just some thoughts from me] tl;dr: I think mostly RP is able to grow in multiple areas at once without there being strong tradeoffs between them (for reasons including that RP is good at scaling & that the pools of funding and talent for each cause area are somewhat different). And I'm glad it's done so, since I'd guess that may have contributed to RP starting and scaling up the longtermism department (even though naively I'd now prefer RP be more longtermist). I think RP is unusually good at scaling, at being a modular collection of somewhat disconnected departments focusing on quite different things and each growing and doing great stuff, and at meeting the specific needs of actors making big decisions (especially EA funders; note that RP also does well at other kinds of work, but this type of work is where RP seems most unusual in EA).  Given that, it could well make sense for RP to be somewhat agnostic between the major EA causes, since it can meet major needs in each, and adding each department doesn't very strongly trade off against expanding other departments.  (I'd guess there's at least some tradeoff, but it's possible there's none or that it's on-net complementary; e.g. there are some cases where people liking our work in one area helped us get funding or hires for another area, and having lots of staff with many areas of expertise in the same org can be useful for getting feedback etc. One thing to bear in mind here is that, as noted elsewhere in this AMA, there's a lot of funding and "junior talent" theoretically available in EA and RP seems unusually good at combining these things to produce solid outputs.) I would personally like RP to focus much more exclusively on longtermism. And sometimes I feel a vague pull to advocate for that. But RP's more cause-neutral, partly demand-driven approach has worked out very well from my perspective so far, in that it may have contributed to RP moving into longtermism

What is your process for identifying and prioritizing new research questions? And what percentage of your work is going toward internal top priorities vs. commissioned projects?

[This is like commentary on your second question, not a direct answer; I'll let someone else at RP provide that.]

Small point: I personally find it useful to make the following three-part distinction, rather than your two-part distinction:

  • Academia-like: Projects that we think would be valuable although we don't have a very explicit theory of change tied to specific (types of) decisions by specific (types of) actors; more like "This question/topic seems probably important somehow, and more clarity on it would probably somehow inform various important decisions."
    • E.g., the sort of work Nick Bostrom does
  • Think-tank-like: Projects that we think would be valuable based on pretty explicit theories of change, ideally informed by actually talking to a bunch of relevant decision-makers to get a sense of what their needs and confusions are.
  • Consultancy-like: Projects that one specific stakeholder (or I guess maybe one group of coordinated stakeholders) have explicitly requested we do (usually but not necessarily also paying the researchers to do it).

I think RP, the EA community, and the world at large should very obviously have substantial amounts of each of those three types of projects / theor... (read more)

Is there any particular reason why biosecurity isn't a major focus? As far as I can see from the list, no staff work on it, which surprises me a little. 

The short answer is that a) none of our past hires in longtermism (including management) had substantive biosecurity experience or interest, and b) no major stakeholder has asked us to look into biosecurity issues.

The extended answer is pretty complicated. I will first go into why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity, explain why I think those reasons aren't as applicable to RP, and then explain why we haven't gone into biosecurity anyway.

Why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity

My personal impression is that EA/existential biosecurity experts currently believe that it's very easy for newcomers in the field to do more harm than good, especially if they do not have senior supervision from someone in the field. This is because existential biosecurity in particular is rife with information hazards, and individual unilateral actions can invoke the unilateralist's curse.

Further, all the senior biosecurity people are very busy, and are not really willing to take the chance with someone new unless they a) have experience (usually academic) in adjacent field... (read more)

4
MichaelA
That all sounds basically right to me, except that my impression is that the cruxes in internal (mild) disagreements about this are just about "a) given that we're already scattered pretty thin on many projects, b) focus is often good" and not "c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway".  Or at least, I personally think I see (a) and (b) as some of the strongest arguments against us doing biosecurity stuff, while I'm roughly agnostic on (c) but I'd guess that there are some high-value things RP could do even if we lack technical backgrounds, and if some more senior biosecurity person said they really wanted us to do some project then I'd probably guess that they're right that we could be very useful on that.  (And to be clear, my bottom line would still be pretty similar to Linch's, in that if we get a person who seems a strong fit for biosecurity work, they seem especially interested in that, and some senior people in that area seem excited about us doing something in that area, I'd be very open to us doing that.)

What is your comparative advantage?

9
Linch
As much as I like to imagine it's my own work (in longtermism), I think the clearest institutional comparative advantage of RP relative to the rest of the EA movement is the quality of our animal-welfare focused research. To the best of my knowledge, if you want to focus on doing research that directly improves the welfare of many animals, and you don't have a long-chain theory/plan of impact (e.g. by shifting norms in academia or having an influential governmental position), RP's the best place to do this. This is just my impression, but my guess is that this is broadly shared among animal-focused EAs. The main exception I could think of is Open Phil, but they're not hiring. I also get the impression that our survey team is very good, probably the best in EA, but I have less of an inside view here than for the animal welfare research. Our longtermism and global health work are comparatively more junior and less proven, in addition to having fairly stiff competition.
5
Peter Wildeford
Research, especially EA-aligned research done based on an explicit theory of change.
2
MichaelA
I'd also note things about scaling (as mentioned elsewhere in the AMA)
4
NunoSempere
Asked differently, why are you so cool, both at the RP level and personally?
2
Nathan Young
That's very kind of you to say Nuno.
4
NunoSempere
Surprising, I know

What have you been intentional about prioritising in the workplace culture at Rethink Priorities? If you focus on making it a great place for people to work, how do you do that? 

This is a great question! Thank you so much!

At Rethink Priorities we take an employee-focused approach. We do our best to ensure that our staff have relevant tools and resources to do their best work, while also having enough flexibility to maintain their work-life balance. Staff happiness is a high priority for us and one of our strategic goals. 

Some aspects of our employee-centered approach include:

  • Competitive benefits and perks – we offer unlimited time off, flexible work schedule, professional development opportunities, stipends etc., which are available to full- and part-time staff, as well as our fellows/interns.
  • Opportunities to socialize, make decisions, and take on new projects – for example, we have monthly social meetings, we run random polls to solicit opinions/ideas from staff, and create opportunities for employees to participate in various initiatives, like leading a workshop.
  • Biannual all staff surveys – we collect feedback from our staff twice a year. The survey asks a series of questions about leadership, management, organizational culture, benefits and compensation, psychological safety, amongst others. The results are thoroughly analyzed and guide our de
... (read more)
1
Madhav Malhotra
I really appreciate your structured response :-) Would you happen to have any documents about the actionables behind each of these? Like this handbook at Valve? :D *I ask because I'd be curious to learn about the actionable tips that others can replicate from your experience :-)

We’re working right now on a values and culture setting exercise where we are figuring out intentionally what we like about our culture and what we want to specifically keep. I appreciate Dominika's comment but I want to add a bit more of what is coming out of this (though it isn't finished yet).

Four things I think are important about our culture that I like and try to intentionally cultivate:

Work-life balance and sustainability in our work. Lots of our problems are important and very pressing, and it is easy to burn yourself out working hard on them. We have deliberately tried to design our culture for sustainability. Sure, you might get some more hours of work this year if you work harder, but it isn't worth burning out just a few years later. We want our researchers here for the long haul. We're invested in their long-term productivity.

Rigor and calibration. It’s very easy to do research poorly and unfortunately easy to do bad research that misleads people because it is hard to see how the research is bad. Thus a lot of work must be done by our researchers to ensure that our work is accurate and useful.

Ownership. In a lot of organizations, managers want their employees to do exactly... (read more)

2
Madhav Malhotra
Your work-life balance and ownership points remind me of the culture at Valve! Here are some notes I took on their culture if you'd be interested in ideas to implement. The points highlighted in orange are the actionables to implement :-) 

What kinds of research questions do you think are better answered in an organisation like RP vs. in academia, and vice versa? 

One major factor that makes some research questions more suited to academia is requiring technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don't think it makes sense for RP to be trying to run large-scale ecological field studies.

Another major factor is if you want to promote wider field-building or you want the research to be persuasive as advocacy to certain audiences in the way that sometimes only academic research can. This also applies to much WAW research.

Personally, I think in most other cases academia is typically not the best venue for EA research, although the considerations above about field-building and the prestige/persuasiveness of academic research come up often enough that the question of whether a given project is worth publishing academically arises fairly regularly even within RP.

1
Anon-biosec-account
Thanks a lot for the response - can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it... And how often does RP decide to go ahead with publishing in academia?
6
David_Moss
  "WAW" = Wild Animal Welfare (previously often referred to as "WAS" for Wild Animal Suffering). I'd say a small minority of our projects (<10%).

Are there any ways that the EA community can help RP that we might not be aware of? Or any that we do already that you would like more of?  

Commenting on our public output, particularly if you have specialized technical expertise, can often be anywhere from mildly to really helpful. RP has a lot of knowledge, but so does the rest of the EA community and extended EA network, so if you can route our reports to the relevant connections, this can be really valuable in improving the quality of our reasoning and epistemics.

One thing the EA community can help us with is by encouraging suitable candidates to apply to our jobs. (New ones will be posted here and announced in our newsletter.) Some of our most recent hires have transitioned from fields which, at first sight, would seem unlikely to produce typical applicants. But we're open to anyone proving to us during the application process that they can do the job (we do blinded skills assessments). I think we're really not credentialist (i.e. we don't care much about formal degrees if people have gained the skills that we're looking for). So whenever you read a job ad and think "Oh, this friend could actually do that job!", do tell them to apply if they're interested.

More importantly, I think EA community builders in all geographies and fields can greatly help us by training people to become good at the type of reasoning that's important in EA jobs. I particularly think of reasoning transparency, expressing degrees of (un)certainty, and clarifying the epistemic status of what you write. Also important: probabilistic thinking and Bayesian updating, as well as learning to build models and getting familiar with tools like Guesstimate and Causal. Forecast... (read more)
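[Editor's illustration] To make the "probabilistic thinking" advice above concrete, here is a minimal Monte Carlo sketch in Python of the kind of model one might otherwise build in Guesstimate. Every quantity, distribution, and parameter below is invented purely for illustration; none are RP figures.

```python
import random

random.seed(0)

def cost_effectiveness_samples(n=100_000):
    """Monte Carlo sketch of a toy cost-effectiveness estimate:
    (people reached * probability of behavior change) / cost.
    All parameter ranges are made up for illustration."""
    samples = []
    for _ in range(n):
        people_reached = random.lognormvariate(9, 0.5)  # median ~8,100 people
        p_change = random.betavariate(2, 18)            # mean ~10% behavior change
        cost = random.uniform(50_000, 150_000)          # program cost in dollars
        samples.append(people_reached * p_change / cost)
    return samples

samples = sorted(cost_effectiveness_samples())
n = len(samples)
# Report a 90% interval rather than a point estimate, mirroring the
# "express degrees of (un)certainty" advice above.
print(f"5th percentile:  {samples[int(0.05 * n)]:.5f} changes/$")
print(f"Median:          {samples[n // 2]:.5f} changes/$")
print(f"95th percentile: {samples[int(0.95 * n)]:.5f} changes/$")
```

Guesstimate does essentially this kind of sampling under the hood; writing one out by hand is a good way to internalize what the resulting intervals mean.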

8
MichaelA
I like this answer.  Some additional possible ideas: * Letting us know about or connecting us to stakeholders who could use our work to make better decisions * E.g., philanthropists, policy makers, policy advisers, or think tanks who could make better funding, policy, or research decisions if guided by our published work, by conversations with our researchers, or by future work we might do (partly in light of learning that it could have this additional path to impact) * Letting us know if you have areas of expertise that are relevant to our work and you'd be willing to review draft reports and/or have conversations with us * Letting us know about or connecting us to actors who could likewise provide us with feedback, advice, etc.  * Letting us know if there are projects you think it might be very valuable for us to do * We (at least the longtermism department) are already drowning in good project ideas and lacking capacity to do them all, but I think it costs little to hear an additional idea, and it's plausible some would be better than our existing ideas or could be nicely merged with one of our existing ideas.  * Testing & building fit for research management * See also Collection of collections of resources relevant to (research) management, mentorship, training, etc. * Testing & building fit for ops roles * Donating (In all cases, I mean either doing this thing yourself or encouraging other people to do so.)

To any staff brave enough to answer :D 

You're fired tomorrow and replaced by someone more effective than you. What do they do that you're not doing?

I recently spent ~2 hours reflecting on RP's longtermism department's wins, mistakes, and lessons learned from our first year[1] and possible visions for 2022. I'll lightly adapt the "lessons learned for Michael specifically" part of that into a comment here, since it seems relevant to what you're trying to get at here; I guess a more effective person in my role would match my current strengths but also already be nailing all the following things. (I guess hopefully within a year I'll ~match that description myself.)

(Bear in mind that this wasn't originally written for public consumption,  skips over my "wins", etc.)

  • "Focus more
    • Concrete implications:
      • Probably leave FHI (or effectively scale down to 0-0.1 FTE) and turn down EA Infrastructure Fund guest manager extension (if offered it)
      • Say no to side things more often
      • Start fewer posts, or abandon more posts faster so I can get other ones done
      • Do 80/20 versions of stuff more often
      • Work on getting more efficient at e.g. reviewing docs
    • Reasons:
      • To more consistently finish things and to higher standards (rather than having a higher number of unfinished or lower quality things)
      • And to mitigate possible stress on my part, [personal thing], a
... (read more)
1
Madhav Malhotra
Thank you for being vulnerable enough to share this!  It sounds like you're focusing a lot on working on the right things (and by extension, fewer things)? And then becoming more efficient at the underlying skills (ex: explaining, writing, etc.)  involved?
3
MichaelA
Yeah, though I’m also aiming to work on fewer things as “a goal in itself”, not just as a byproduct of slicing off the things that are less important or less my comparative advantage. This is because more focus seems useful in order to become really excellent at a set of things, ensure I more regularly actually finish things, and reduce the inefficiencies caused by frequent task/context-switching.

Some ways someone can be more effective than me:

  • I'm not as aggressive at problem/question/cause prioritization as I could be. I can see improvements of 50-500% for someone who's (humanly) better at this than me.
  • I'm not great at day-to-day time management either. I can see ~100% improvement in that regard if somebody is very good at this.
  • I find it psychologically very hard to do real work for >30h/week, so somebody with my exact skillset but who could productively work for >40h/week without diminishing returns would be >33% more valuable.
  • I pride myself on the speed and quantity of my writing, but I'm slower than e.g. MichaelA, and I think it's very plausible that a lot of my outputs are still bottlenecked by writing speed. 10-50% effectiveness improvement seems about right.
  • I don't have perfect mental health and I'm sometimes emotional. (I do think I'm above average at both). I can see improvements of 5-25% for people who don't have these issues.
  • I'm good at math* but not stellar at it. I can imagine someone who's e.g. a Putnam Fellow being 3-25% more effective than me if they chose to work on the same problems I work on (though plausibly they'd be more effective because they'd gravitat
... (read more)
1
Madhav Malhotra
Thank you for the specific estimates and the wide variety of factors you considered :-) It may be that @MichaelA is also working primarily on improving cause prioritisation. I guess maybe you've both discussed that :D

The person who replaces me has all my same skills but in addition has many connections to policymakers, more management experience, and stronger quantitative abilities than I do.

I've adjusted imperfectly to working from home, so anyone  who has that strength in addition to my strengths would be better. I wish I knew more forecasting and modeling, too. 

7
Linch
(less helpful answer, will think of a better one later) Hmm, Rethink follows pretty reasonable management practices, and is maybe on the conservative side for things like firing unproductive employees. So I can't really imagine being fired for ineffectiveness without warning on a Saturday. The only way this really happens is if I'm credibly accused of committing a pretty large crime, or sexually harassing an RP colleague, or maybe faking data, or something like that. To the best of my knowledge I have not done these things. Hmm, since I haven't done these things, I must be set up to be falsely accused of a crime in a credible way. So the most likely way someone can replace me and be more effective on this dimension is by not making any enemies who are motivated enough to want to set them up for murder or something.
2
Linch
Quick clarifying question:  Is the most important part of your question the "fired" part or the "more effective" part? Like would you rather I a) answer by generating stories of how I might be fired and how somebody can avoid that, or b) answer what can people do to be more effective than me?
1
Madhav Malhotra
Part b) is more important. Part a) is just to make the question more real to the person answering.

Are there any skills and/or content expertise that you expect to particularly want from future hires? Put differently, is there anything that you think aspiring hires might want to start working on to be better suited to join/support RP over the next few years?

5
Linch
I'll let my colleagues answer the object-level question/might answer it myself if I get better ideas later, but broadly I would somewhat caution against having a multi-year plan to be employed at Rethink Priorities specifically (or at any specific organization). RP hiring is pretty competitive now and has gotten more competitive over time[1], and our hiring processes are also far from perfect, so even very good researchers (by our lights) may well be missed by our hiring process. That said, some of the answers to James Ozden's question might be relevant here as well. [1] We're also scaling pretty quickly to hire more people, but EA community building/recruitment at top universities has also really scaled up since 2020, and it's unclear how these things shake out in terms of how competitive our applications will be in a few years.

I agree, but would want to clarify that many people should still apply and very many people should at least consider applying. It's just that people shouldn't optimise very strongly for getting hired by one specific institution that's smaller than, say, "the US government" (which, for now, we are 😭).

5
Linch
Thanks for the clarification! Definitely encourage people to apply. We've also moved paid work trials to earlier and earlier on in the process, so hopefully applying is not a financial hardship for people.

What percentage of your work/funding comes from non-EA aligned sources? 

I once told people in a programmer group chat what I was doing when I got my new job at RP. One of them looked into the website and gave like a $10 donation. 

To the best of my limited knowledge, this might well be our largest non-EA aligned donation in longtermism. 

It's a little hard to say because we don't necessarily know the background / interests of all donors, but my current guess is around 2%-5% in 2021 so far. It's varied by year (we've received big grants from non-EA sources in the past). So far, it is almost always to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).

One tricky part of separating this out - there are a lot of people in the animal welfare community who are interested in impact (in an EA sense), but maybe not interested in non-animal EA things.

Minor nit:

You can see all of our work to date here.

should be 

You can see all of our completed public work to date here.

As discussed in this comment thread (by you :P), an increasingly high percentage of our work is targeted towards specific decision-makers, and whether we choose to publish depends on a combination of researcher interest, decision-maker priorities, and the object-level content of the research.

I'm particularly glad you  note this since the survey team's research in particular is almost exclusively non-public research (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.

4
JamesÖz
If you can share, what are some other projects or research that the survey team works on? If you can't give specifics, it would be useful to know broadly what they were related to.  I'm intrigued by the mystery!

Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing between 1 and 7 surveys), on average, every 6 weeks.

Most of the projects aren't exactly top secret, but I err on the side of not  mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly, but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them to estimate how many US college students have heard of EA.

An illustrative example of the kind of project a lot of these are would be an org approaching us saying they are considering doing some outreach (this could be for any cause area) and wanting us to run a study (or studies) to assess what kind of message would be most appropriate. Another common type of project is just polling support for different policies of interest and  testing the robustness of these results with different approaches. Both these kinds of projects are the most common but generally take up proportionately less time.

There are definitely a lot of other ... (read more)

4
Peter Wildeford
Thanks! We'll make sure to get this changed going forward.

In your past experiences, what are the biggest barriers to getting your research in front of governmental  organisations? (ex: official development aid grantmakers or policy-makers)

Biggest barriers in getting them to act on it?

I would break this down into a) the methods for getting research in front of government orgs and b) the types of research that gets put in front of them.

In general I think we (me for sure) haven’t been optimising for this enough to even know the barriers (unknown unknowns). I think historically we’ve been mostly focused on foundations and direct work groups, and less on government and academia. This is changing so I expect us to learn a lot more going forward.

As for known unknowns in the methods, I still don’t know who to actually send my research to in various government agencies, what contact method they respond best to (email, personal contact, public consultations, cold calling, constituency office hours?), or what format they respond best to (a 1 page PDF with graphs, a video, bullet points, an in person meeting? - though this public guide Emily Grundy made on UK submissions while at RP has helped me). Anecdotally it seems remarkably easy to get in front of some: I know of one small animal advocacy organization that managed to get a meeting with the Prime Minister of their country, and I myself have had 1-1 meetings with more than two dozen members of the UK and Irish parliam... (read more)

3
Richenda
@Neil_Dullaghan we should chat.
1
Madhav Malhotra
Thank you for the well-researched response :-) Excited to maybe ask again in a year and see any changes in your practical lessons!

In your yearly review you mention that Rethink may significantly expand its Longtermism research group in the future, including potentially into new focus areas and topics. Do you have any ideas of what these might be (beyond the mentioned AI governance), and how you might choose (i.e. looking for a niche where Rethink can play a major role, following demand of stakeholders, etc.)?

If in 5 and/or 10 years' time you look back on RP and feel it's been a major success, what would that look like? What kind(s) of impact would you consider important, and by what bar would you measure your attainment/progress towards that?

5
Peter Wildeford
The first part I answered here. I think a major success for us would look like having built a large and sustainably productive research organization tackling research in a variety of disciplines and cause areas. I think we will have made a major contribution to unlocking funding in effective altruism by figuring out what to fund with more confidence, as well as increasing our influence across a larger variety of stakeholders, including important stakeholders outside of the effective altruism movement.

How have you or would you like to experiment with your organisational structure or internal decision making to improve your outputs?

7
Peter Wildeford
One recent experiment has been trying to get better at project management, especially at a larger scale. We’ve rolled out Asana for the entire organization and have hired a project manager. Another recent experiment has been whether we can directly hire for “Senior Research Managers” (SRMs), instead of having to develop all our senior research talent in-house. We’ve hired two external SRMs and it has been going well so far, but it is too early to tell. We may try to hire another external SRM in our current hiring process. If both of these experiments go well, it will unlock a lot of future scalability for our organization and for other organizations that can follow suit. Our next experiment will likely involve hiring research and/or executive assistants to see if they can help our existing researchers achieve more productivity in a more sustainable way.

Any advice for researchers who want to conduct research similar to Rethink Priorities? or useful resources that you point your researchers towards when they join?

It has been said before elsewhere by Peter, but it's worth stating again: read and practice Reasoning Transparency. Michael Aird compiled some great resources recently here.

I'd also refer people to Michael and Saulius' replies to arushigupta's similar subquestion in last year's RP AMA.

8
MichaelA
One thing I'd add is that I think several people at RP and elsewhere would be very excited if someone could: 1. find existing resources that work as good training for improving one's reasoning transparency, and/or 2. create such a resource. As far as I'm aware, currently the state of the art is "Suggest people read the post Reasoning Transparency, maybe point them to a couple of somewhat related other things (e.g., the compilation I made that Neil links to, or this other compilation I made), hope they absorb it, give them a bunch of feedback when they don't really (since it's hard!), hope they absorb that, repeat." I.e., the state of the art is kinda crappy. (I think Luke's post is excellent, but just reading it is not generally sufficient for going from not doing the skill well to doing the skill well.) I don't know exactly what sort of resources would be best, but I imagine we could do better than what we have now.
7
MichaelA
Oh, and some other resources I'd often point people towards after they join are: * Giving and receiving feedback (including the top comments) * Countering imposter syndrome and anxiety about work * My collections on how to do high-impact research and get useful input from busy people

For longtermist work, I often point people to Holden Karnofsky's impressions on career choice, particularly the section on building aptitudes for conceptual and empirical research on core longtermist topics.

I've also personally gained a lot from arguing with People Wrong on the Internet, but poor application of this principle may be generally bad for epistemic rigor. In particular, I think it probably helps to have a research blog and to be able to spot potential holes in things like EA social media posts, EA Forum posts, research blogs, and papers. That said, I think most EA researchers (including my colleagues) are much less Online than I am, so you definitely don't need to develop an internet argument habit to be a good researcher.

Making lots of falsifiable forecasts about short-term conclusions of your beliefs may be helpful. Calibration training is probably less helpful, but lower cost.

Trying to identify important and tractable (sub)questions is often even more important than the ability to answer them well. In particular, very early on in a research project, try to track "what if I answered this question perfectly? Does it even matter? Will this meaningfully impact anyone's decisions... (read more)

Let's say your research directly determined the allocation of $X of funding in 2021. 

Let's say you have to grow that amount by 10 times in 2022, but keep the same number of staff, funding, and other resources.  

What would you change first in your current campaigns, internal operations, etc.?

7
Peter Wildeford
I don’t think it is actually possible to 10x our impact with the same staff, funding, and other resources - hence our desire to hire and fundraise more. If it were possible, we’d certainly try to do it! The best answer I can think of is Goodharting - we certainly could influence more total dollars if we cared less about the quality of our influence and the quality of those dollars. We could also exaggerate our claims about what “influence” means, taking credit for decisions that would likely have been made the same way anyway.

What are the bottlenecks to using forecasting better in your research?

Lazy semi-tangential reply: I recently gave a presentation that was partly about how I've used forecasting in my nuclear risk research and how I think forecasting could be better used in research. Here are the slides and here's the video. Slides 12-15 / minutes 20-30 are most relevant. 

I also plan to, in ~1 or 2 months, write and publish a post with meta-level takeaways from the sprawling series of projects I ended up doing in collaboration with Metaculus, which will have further thoughts relevant to your question.

(Also keen to see answers from other people at RP.)

7
Peter Wildeford
We at Rethink Priorities have definitely made an increasingly large effort to include forecasting in our work. In particular, we have recently been running a large Nuclear Risks Tournament on Metaculus. My guess is that the reason we don’t do even more forecasting is that not all of our researchers are experienced forecasters, and that it hasn’t been a sufficient priority to generate useful, decision-relevant forecasting questions for every research piece.
[anonymous]

Will you have some kind of internship/fellowship opportunities next summer?

7
Peter Wildeford
We have not yet decided whether we will have internships/fellowships this summer - assuming you are referring to the Northern Hemisphere here. If we launch these internships, I imagine applications will open in March 2022. We are continuing to consider launching internships/fellowships for summer in each hemisphere (as we launched an AI Governance and Strategy Fellowship for January-March 2022, summer in the Southern Hemisphere). Another thing we are considering, in addition to or in place of internships this year, is Research/Executive Assistant positions that focus more on supporting and learning the work of a particular researcher on the RP team. These roles would likely be permanent/indefinite in length rather than lasting a few months, as our internships have been.
5
OscarD🔸
I am also interested in future internship plans. Specifically, how flexible are the dates and time commitments? As someone based in Australia, seasonal descriptors (presumably from the Northern Hemisphere) aren't ideal, though I can convert them - specific months would be preferable :) Also, our university holiday periods are different, so I will need to work around that too.

What are some key research directions/topics that are not currently being looked into enough by the EA movement (either at all or in sufficient depth)?

8
Holly_Elmore
Longtermism in its nascent form relies on a lot of guesstimates and abstractions that I think could be made more empirical and solid. Personally, I am very interested in asking whether people at a given time in the past had the information they needed to avoid later disasters that occurred. What kinds of catastrophes have humans been able to foresee, and when we were able to but didn't, what obstacles were in the way? History is the only evidence available in a lot of longtermist domains, and I don't see EA exploiting it enough.
8
MichaelA
As is probably the case with many researchers, I have a bunch of thoughts on this, most of which aren't written up in nice, clear, detailed ways. But I do have a draft post with nuclear risk research project ideas, and a doc of rough notes on AI governance survey ideas, so if someone is interested in executing projects like that, please message me and I can probably send you links. (I'm not saying those are the two areas I think are most impactful to research at the current margin; I just happen to have docs on those things. I also have other ideas that are less easily shareable right now.) People might also find my central directory for open research questions useful, but that's not filtered for my own beliefs about how important-on-the-margin these questions are.

Interesting that you've got climate change in your global health and development work rather than with longtermism. What are the research plans for the climate change work at RP?

9
Peter Wildeford
A note on why climate change currently sits in our global health and development work rather than longtermism: the main reason is that, while we could consider longtermist work on climate change, we do not think marginal longtermist climate change work makes sense for us relative to the importance and tractability of other longtermist work we could do. However, global health and development funders and actors are also interested in climate change in a way that does not funge much against longtermist money or talent, and the burden of climate change falls heavily on low- and middle-income countries. Therefore, we think climate change work makes sense to explore relative to other global health and development opportunities.
9
Jason Schukraft
Hi James, thanks for your question. The climate change work currently on our research calendar includes: 1. A look at how climate damages are accounted for in various integrated assessment models 2. A cost-effectiveness analysis of anti-deforestation interventions 3. A review of the landscape of climate change philanthropy 4. An analysis of how scalable different carbon offsetting programs are

I'm interested in your current and future work on longtermism. 

One of your plans for 2022 is to:

  • Build a larger longtermist research team to explore longtermist work and interventions more broadly

Have you decided on the possible additional research directions you are hoping to explore? When you're figuring this out, are you more interested in spotting gaps, or do you feel the field is young enough that investigating areas others are working on/have touched is still likely to be beneficial? Perhaps both!

7
Peter Wildeford
One thing we know for certain is that we are definitely doing AI Governance and Strategy work. We have not decided on the other avenues yet - I think we will decide them in large part based on who we hire for our roles, and in consultation with the people we hire once they join, coming to agreement as a team. I definitely think that there is a lot to contribute in every field, but we will weigh neglectedness and our comparative advantage in figuring out what to work on.
4
MichaelA
I expect we'll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us and/or who just have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers) to get their thoughts on what it'd be most useful to do and the pros and cons of various avenues we might pursue.  (We sort-of passively do this in an ongoing way, and I've been doing a bit more recently regarding nuclear risk and AI governance & strategy, but I think we'd probably ramp it up when choosing directions for next year. I'm saying "I think" because the longtermism department haven't yet done our major end-of-year reflection and next-year planning.)

What should one do now if one wants to be hired by Rethink Priorities in the next couple years? Especially in entry-level or more junior roles.

I realize this is a general question; you can answer in general terms, or specify per role.

2
MichaelA
James Ozden's question above might be sufficiently similar to yours that the answers there address your question?

From a talk at EAG in 2019, I remembered that your approach could be summarized as empirical research in neglected areas (please correct me if I'm wrong here). Is this still the case? Do you still have a focus on empirical research (Over, say, philosophy)?

7
Peter Wildeford
Yes, it is still our approach, broadly speaking, to focus on empirical research, though certainly not to the exclusion of philosophy research. And we’ve now done a lot of research that combines both, such as our published work on invertebrate sentience and our forthcoming work on the relative moral weight of different animals.

About funding overhang:

Peter wrote a comment on a recent post:

I'm optimistic we will unlock new sources of needed funding (Rethink Priorities is working a ton on this) so we should expect the current funding overhang to be temporary, thus making it important to still have future donors ready / have large amounts of money saved up ready to deploy.

You also wrote in your plans for 2022:

Help solve the funding overhang in EA and unlock tons of impact by identifying interventions across cause areas that can take lots of money while still meeting a high bar

... (read more)

We'd expect to find new funding opportunities in each cause area we work in. Our work is aspirational and inherently about exploring the unknown, though, so it's very difficult to know in advance how large the funding gaps we uncover will be. But hopefully our work will contribute to a body of work that overall shifts EA from having a funding overhang to instead having substantial room for more funding in all cause areas. This will be a multi-year journey.

Sorry if the answer for this is readily available elsewhere, but are there recommended times of the year to donate if you are based in the UK, e.g. to make use of matching opportunities? My understanding is that the Giving Tuesday facebook matching is only for US donors.

Thanks!

3
Janique
Thanks for considering supporting us! Basically anyone can donate to the Giving Tuesday fundraiser and participate, but it's only tax-deductible for US donors. From the EA Giving Tuesday FAQ: >Donors from a large number of countries are eligible to donate through Facebook and get matched. However, in both 2019 and 2020 most non-U.S. donors faced significantly lower donation limits. We expect the same to be true in 2021. [This year, the donation limit for US donors is USD 20,000.] Additionally, please be aware that donors outside the United States will likely lose out on any tax benefits they’d receive from donating to a nonprofit registered in their own country. International donors can give to RP through the EA Funds. As a UK donor, your gift is eligible for Gift Aid and would typically be tax-deductible. We explain all of this on our donation page: https://rethinkpriorities.org/donate Regarding other matching opportunities: check out https://www.every.org/rethink-priorities. They still seem to have some funds available from their FallGivingChallenge for a 100% match! We don't regularly run matching campaigns ourselves, but we may set one up in the course of the next year. The best way to stay informed about upcoming opportunities is our newsletter. Your gift is welcome at any time of the year!