Hi, all.

We're the staff at Rethink Priorities and we would like you to Ask Us Anything! We'll be answering all questions starting Tuesday, 15 December.

About the Org

Rethink Priorities is an EA research organization focused on influencing funders and key decision-makers to improve decisions within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all our work to date here and some of our ongoing projects here.

Over the next few years we plan to expand our work in animal welfare, relaunch our work in longtermism, continue our work in movement building, and much more.

About the Team


Marcus A. Davis - Co-Executive Director

Marcus is a co-founder and co-Executive Director at Rethink Priorities, where he leads research and strategy. He's also a co-founder of Charity Entrepreneurship and Charity Science Health, where he previously systematically analyzed global poverty interventions, helped manage partnerships, and implemented the technical aspects of the project.

Peter Hurford - Co-Executive Director

Peter is the other co-founder and co-Executive Director of Rethink Priorities. Prior to running Rethink Priorities, he was a data scientist in industry for five years at DataRobot, Avant, Clearcover, and other companies. He also has a Triple Master Rank on Kaggle (an international data science competition platform) and has achieved top 1% performance in five different Kaggle competitions. He previously served as a long-time board member at Animal Charity Evaluators and he continues to serve on the board at Charity Science.


David Moss - Principal Research Manager

David Moss is the Principal Research Manager at Rethink Priorities. He previously worked for Charity Science and has worked on the EA Survey for several years. David studied Philosophy at Cambridge and is an academic researcher of moral psychology.

Kim Cuddington - Distinguished Researcher

Kim Cuddington is a Distinguished Researcher at Rethink Priorities and is an Associate Professor at the University of Waterloo. She has a PhD in Zoology, a Master's in Biology, and a Master's in Philosophy. She also has a background in ecology and mathematical modeling.

David Reinstein - Distinguished Researcher

David is a Senior Lecturer in Economics at the University of Exeter. His research has covered a number of topics, including charitable giving and social influences on giving. He received his PhD from the University of California, Berkeley under Emmanuel Saez.

Jason Schukraft - Senior Research Manager

Jason is a Senior Research Manager at Rethink Priorities. Before joining the RP team, Jason earned his doctorate in philosophy from the University of Texas at Austin. Jason specializes in questions at the intersection of epistemology and applied ethics.

David Rhys Bernard - Senior Staff Researcher

David is a PhD candidate at the Paris School of Economics and has a Master's in Public Policy and Development. He has a background in causal inference and econometrics and has previously worked at Giving What We Can and the United Nations Development Programme.

Saulius Šimčikas - Senior Staff Researcher

Saulius is a Senior Staff Researcher at Rethink Priorities. Previously, he was a research intern at Animal Charity Evaluators, organized Effective Altruism events in the UK and Lithuania, and worked as a programmer.

Neil Dullaghan - Staff Researcher

Neil is a Staff Researcher at Rethink Priorities. He also volunteers for Charity Entrepreneurship and Animal Charity Evaluators. Before joining RP, Neil worked as a data manager for an online voter platform and has an academic background in Political Science.

Holly Elmore - Staff Researcher

Holly Elmore is a Staff Researcher at Rethink Priorities and has a background in evolutionary biology and ecology. Before working at RP, she earned a PhD from Harvard University in the Department of Organismic and Evolutionary Biology. While at Harvard, she organized the Harvard University Effective Altruism student group, serving as its president for two years.

Derek Foster - Staff Researcher

Derek is a Staff Researcher at Rethink Priorities. He studied philosophy and politics as an undergraduate, followed by public health and health economics at master’s level. Before joining RP, Derek worked on the Global Happiness Policy Report and various other projects related to global health, education, and subjective well-being.

Daniela R. Waldhorn - Staff Researcher

Daniela is a Staff Researcher at Rethink Priorities. She is a PhD candidate in Social Psychology, and has a background in management and operations.

Before joining RP, Daniela worked for Animal Ethics and for Animal Equality.

Linchuan Zhang - Staff Researcher

Linchuan (Linch) Zhang is a Staff Researcher at Rethink Priorities working on forecasting and longtermist research. Before joining RP, he did forecasting projects around Covid-19, including with superforecasters and University of Oxford researchers. Previously, he programmed for Impossible Foods and Google, and has led several EA local groups.

Michael Aird - Associate Researcher

Michael Aird is an Associate Researcher at Rethink Priorities. He has a background in political and cognitive psychology and in teaching. Before joining RP, he conducted longtermist macrostrategy research for Convergence Analysis and the Center on Long-Term Risk.


Abraham Rowe - Director of Operations

Abraham is the Director of Operations at Rethink Priorities. He previously co-founded and served as the Executive Director of Wild Animal Initiative, and served as the Corporate Campaigns Manager at Mercy For Animals.

Janique Behman - Director of Development

Janique is the Director of Development at Rethink Priorities. She cultivates relationships with major donors and institutional grantmakers and helps us find funders for our new research projects. She previously was in charge of strategy and community-building at Effective Altruism Zurich and interned at EA Geneva. She holds an MBA with a focus on philanthropy advisory services.

Ask Us Anything

Please ask us anything - about the org and how we operate, about the staff, about our research… anything!

You can read more about us in our 2020 Impact and 2021 Strategy EA Forum update or visit our website rethinkpriorities.org.

If you're interested in hearing more, please consider subscribing to our newsletter.

Also, we'd be remiss if we didn't mention that we're currently fundraising! We are funding-constrained and have the management capacity and hiring talent pool to grow if given more money. We accept and track restricted funds by cause area if that is of interest.

If you'd like to support our work, you can find donation instructions at https://www.rethinkpriorities.org/donate or you can email Janique at janique@rethinkpriorities.org.



How funding-constrained is your longtermist work, i.e., how much funding have you raised for your 2021 longtermist budget so far, and how much do you expect to be able to deploy usefully, and how much are you short?

Hi Jonas,

Since we last posted our longtermism budget, we've raised ~$89,500 restricted to longtermism for 2021 (with the largest being the grant recommendation from the Survival and Flourishing Fund). This means we will enter 2021 with ~$121K restricted to longtermism not yet spent. Overall, we'd like to raise an additional $403K-$414K for longtermist work by early 2021.

For full transparency - note that, if necessary, we may also choose to use unrestricted funds on longtermism and that this is not factored into these numbers. We currently have ~$273K in unrestricted funds, though we will likely have non-longtermism things we will need to spend this money on.

Given that we are currently just raising money to cover the salaries of our existing longtermist staff (including operations support) as well as start a longtermism intern program, we expect we will be able to deploy longtermist money quickly. We also have a large talent pool of longtermist researchers we likely could hire this year if we ended up with even more longtermism money.

I did internal modeling/forecasting for our fundraising figures, and at least on the first pass it looked like our longtermist work was more likely to be funding constrained than our other priority cause areas, at least if "funding constrained" is narrowly defined as "what's the probability that we do not raise all the money that we'd like for all planned operations to run smoothly."

My main reasoning was somewhat outside-viewy and focused on general uncertainty: our longtermist team is new and, relative to Rethink's other cause areas, less well-established, with less of a track record of a) prior funding, b) public work other than Luisa's nuclear risk work, or c) a well-vetted research plan. So I'm just generally unsure of these things.

Three major caveats:

1. I did those forecasts in late October and now I think my original figures were too pessimistic. 

2. Another caveat is that my predictions were more a reflection of my own uncertainty than a lack of inside view confidence in the team. For context, my 5th-95th percentile credible interval spanned ~an order of magnitude across all cause areas.

3. When making the original numbers, I incorporated but plausibly substantially underrated the degree to which the forecasts would change reality and not just reflect it. For example, Peter and Marcus may have prioritized differently because of my numbers, or this comment may affect other people's decisions.
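For intuition on what a 5th-95th percentile credible interval spanning an order of magnitude looks like, here is a minimal sketch, assuming a lognormal forecast and a purely hypothetical median of $250K (not one of our actual figures):

```python
import math

# Illustrative sketch only: a lognormal forecast whose 5th-95th percentile
# credible interval spans one order of magnitude (a 10x ratio).
Z95 = 1.6448536269514722  # standard normal 95th-percentile z-score

# Spread (sigma) needed so that q95 / q05 = exp(2 * Z95 * sigma) = 10
sigma = math.log(10) / (2 * Z95)

median = 250_000  # hypothetical median forecast in dollars (not a real RP figure)
q05 = median * math.exp(-Z95 * sigma)  # 5th percentile
q95 = median * math.exp(Z95 * sigma)   # 95th percentile

print(round(q05), round(q95), round(q95 / q05, 1))  # → 79057 790569 10.0
```

A lognormal is a common choice for uncertain positive quantities like fundraising totals; the point is just that an order-of-magnitude interval corresponds to roughly a 3x swing in either direction from the median.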

Huge fan of the work your team has done, so thank you all for everything!  A couple questions :)

1. For potential donors who are particularly interested in wild animal welfare research, how would you describe any key differentiating factors between the approaches of Rethink Priorities and Wild Animal Initiative?

2. For donors who might want to earmark donations to go specifically towards wild animal welfare research within your organization, would this in turn affect the allocation of priority-agnostic donations otherwise made to Rethink? Or is there a way in which such earmarked donations indeed counterfactually support this specific area as opposed to the general areas you cover? (This question applies to most multi-focused orgs.)

3. With respect to invertebrate research, and specifically 'invertebrate sentience', it seems that the sheer number of invertebrates existing would be the driving factor in calculating any expected benefit of pursuing interventions. Are there 'sentience probabilities' low enough to put such an expected value of intervention in question? (I have not thoroughly looked through your publicly available work, so feel free to point to relevant resources if this question has been addressed!)

Thanks in advance for all your thoughts!

Hi Dan,

Thanks for your questions. I'll let Marcus and Peter answer the first two, but I feel qualified to answer the third.

Certainly, the large number of invertebrate animals is an important factor in why we think invertebrate welfare is an area that deserves attention. But I would advise against relying too heavily on numbers alone when assessing the value of promoting invertebrate welfare. There are at least two important considerations worth bearing in mind:

(1) First, among sentient animals, there may be significant differences in capacity for welfare or moral status. If these differences are large enough, they might matter more than the differences in the numbers of different types of animals.

(2) Second, at some point, Pascal's Mugging will rear its ugly head. There may be some point below which we are rationally required to ignore probabilities. It's not clear to me where that point lies. (And it's also not clear that this is the best way to address Pascal's Mugging.) There are about 440 quintillion nematodes alive at any given time, which sounds like a pretty good reason to work on nematode welfare, even if one's credence in their sentience is really low. But nematodes are n... (read more)

Thanks for the questions!

On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how "weird" the public finds WAW interventions).

We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It's hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.

On (2), it's hard to predict exactly what additional restricted donations do, but in general, we expect them to increase in the long run how much we spend in a cause by an amount similar to how much is donated. Reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space, and then raise that much funding. If ... (read more)

I’ve been very impressed with your work, and I’m looking forward to you hopefully making similarly impressive contributions to probing longtermism!

But when it comes to questions: You did say “anything,” so may I ask some questions about productivity when it comes to research in particular? Please pick and choose from these to answer any that seem interesting to you.

  1. Thinking vs. reading. If you want to research a particular topic, how do you balance reading the relevant literature against thinking yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. Would you agree or do you have a different approach?
  2. Self-consciousness. I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly in
... (read more)

I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brian Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!

I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.

Hi Denis,

Lots of really good questions here. I’ll do my best to answer.

  1. Thinking vs reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.

  2. Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.

  3. Is there something interesting here?: Ye

... (read more)
Dawn Drescher · 2y
Your advice to talk to people is probably most important to me! I haven’t tried that a lot, but when I did, it was very successful. One hurdle is not wanting to come off as too stupid to the other person (but there are also people who make me feel sufficiently at ease that I don’t mind coming off as stupid) and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both our interests to chat. (Maybe they have the same reluctance, and both of us would be happier if they didn’t have it. Can we have a Reciprocity.io [http://Reciprocity.io] for talking about research, please? ^^)

Typing speed: Haha! You can test it here for example: https://10fastfingers.com/typing-test/english [https://10fastfingers.com/typing-test/english]. I’ve been stagnating at ~60 WPM for years. Maybe there’s some sort of distinction where some brains are more optimized toward (e.g., worse memory) or incentivized to optimize toward (e.g., through positive feedback) fewer low-level concepts and others toward more high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding from first principles, which might make them less innovative. Just random speculation.

Obvious questions: Yeah, I’ve been wondering how it can be that now a lot of people come up independently with cases for nonhuman rights and altruism regardless of distance, but a century ago seemingly almost no one did. Maybe it’s just that I don’t know, because most of those are lost in history and those that are not, I just don’t know about (though I can think of some examples). Or maybe culture w
(Sorry for barging in on this thread :D) Regarding talking to people to get early feedback, get up to speed in a field, etc., you might find this post [https://forum.effectivealtruism.org/posts/N3zd4FtGmRnMF7pfM/asking-for-advice] useful (if you haven't already seen it). I find this relatable. Relatedly, in the above-linked post, Michelle Hutchinson (the author) wrote: I commented [https://forum.effectivealtruism.org/posts/N3zd4FtGmRnMF7pfM/asking-for-advice?commentId=r4kaDJ63MtQcFEkgK] that I'd slightly push back on that passage, saying:
Dawn Drescher · 2y
Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?
[comment deleted] · 2y
  1. Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture at RP seems to fight that tendency, which I think is very productive!
Dawn Drescher · 2y
Thanks! This is something I sometimes struggle with I think. Is the culture just all about sharing early and often and helping each other, or are there also other aspects to the culture that I may not anticipate that help you overcome this self-consciousness? :-)

1. Thinking vs. reading. 

Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great: you’ve gotten experience solving some problem and you’ve shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.

This said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds and it seems pretty difficult to do that by just thinking to yourself!

3. Is there something interesting here?

I mostly try to work out how excited I am by this idea and whether I could see myself still being excited in 6 months, since for me having internal motivation to w... (read more)

Dawn Drescher · 2y
Thank you! Using the thinking vs. reading balance as a feedback mechanism is an interesting take, and in my experience it’s also most fruitful in philosophy, though I can’t compare with those branches of economics.

Survival mindset: I suppose it serves its purpose when you’re in a very low-trust environment, but it’s probably not necessary most of the time for most aspiring EA researchers.

Thanks for linking that list of textbooks! It’s also been helpful for me in the past. :-D Planning the next day the evening before also seems like a good thing to try for me.

Thanks! I wonder whether you all have such fairly high typing speeds simply because you all type a lot or whether 80+ WPM is a speed threshold that is necessary to achieve before one ceases to perceive typing speed as a limiting factor. (Mine is around 60 WPM.) I hope you can get your work hours down to a manageable level!
It was interesting to read, thanks for the answers :) A small remark, which may be of use since you said you used Anki and are now using Roam: the Roam Toolkit [https://chrome.google.com/webstore/detail/roam-toolkit/ebckolanhdjilblnkcgcgifaikppnhba] add-on allows you to use spaced repetition in Roam.

#9 Typing speed: My own belief is that typing speed is probably less important than you appear to believe, but I care enough about it that I logged 53 minutes of typing practice on keybr this year (usually during moments where I'm otherwise not productive and just want to get "in flow" doing something repetitive), and I suspect I could still productively use another 3-5 hours of typing practice next year even if it trades off against deep work time (and presumably many more hours than that if it does not).

#10 Obvious questions. I suspect that while ignoring/not noticing "obvious questions/advice" is sometimes a matter of coincidental unforced error, more often than not there is some form of motivated reasoning going on behind the scenes (e.g., because this story will invalidate a hypothesis I'm wedded to, because it involves unpleasant tradeoffs, because some beliefs are lower prestige, because it makes the work I do seem less important, etc.). I think training myself carefully to notice these things has been helpful, though I suspect I still miss a lot of obvious stuff.

#11 Tiredness, focus, etc. I haven't figured this out yet and am keen to learn from my coworkers and others! Right now I take a lot of caffeine, and I suspect if I were more careful about optimization I would cycle drugs on a weekly basis rather than taking the same one every day (especially a drug like caffeine that has tolerance and withdrawal symptoms).

Dawn Drescher · 2y
Typing speed: Interesting! What is your typing speed?

Obvious questions: Thanks, I’ll keep that in mind. It seems unlikely to be the case for me, but I haven’t tried to observe such a connection either. I observed the opposite tendency in myself, in the sense that I’m worried about being wrong and so probe all the ways in which I may be wrong a lot, which has had the unintended negative effect that I’m too likely to abandon old approaches in favor of ones I’ve only recently heard of, because for the latter I haven’t yet come up with as many counterarguments. I also find rehearsing stuff that I already believe to be yucky and boring in ways that rehearsing counterarguments is not. But of course I might be falling for both traps in different contexts.
Only 57.9 according to keybr. I suspect a) typing practice will be less helpful for me if my typing speed is higher (like David's), and b) my current typing speed is below average for programmers (not sure about researchers). (It's probably relevant/bad that my default typing system on those typing-test layouts (26 characters + space) only uses about 5 fingers. I think I go up to 8 on a more normal paragraph like this one that also uses shift/return/slash/number pad. I think if I were focused on systematic rather than incremental changes to my typing speed, I'd try to figure out how to force myself to use all 10 fingers.)

Obvious questions: Hmm, I think a lot of people have motivated reasoning of the form I describe, but I don't know you well enough, and I definitely don't think all people are like this. There is certainly a danger as well of being too contrarian or self-critical. Have you tried calibration practice? Maybe also make an explicit effort to write down key beliefs and numerical probabilities (or even just words for felt senses) to record and eventually correct for overupdating on new arguments/evidence (if this is indeed your issue).
Dawn Drescher · 2y
Do you use the guided lessons of Keybr or a custom text? I think the guided lessons are geared toward your weaknesses, which probably leads to a lower speed than what you’d achieve with the average text. That’s something where I’ve never felt bottlenecked by my typing speed. Learning to type blindly was very useful, though, because it gave me a lot more freedom with screen configurations. (And switching to a keyboard layout other than German, where most brackets are super hard to reach. I use a customized Colemak.)

Yeah, it’s on my list of things I want to practice more, but the few times I did some tests I was mostly well-calibrated already (with the exception of one probability level, or whatever they’re called). There’s surely room for improvement, though. Maybe I’ll do worse if the questions are from an area that I think I know something about. ^^

Maybe I’m also too impressionable by people who speak with an air of confidence. I might be falling for some sort of typical mind fallacy and assume that when someone doesn’t use a lot of hedges, they must be so sure that they’re almost certain to be right, and then update strongly on that. But I’m not quite convinced by that theory either. That probably happens sometimes, but at other times I also overupdate on my own new ideas. I’m pretty sure I overupdate whenever people use guilt-inducing language, though. I filled in Brian Tomasik’s list of beliefs and values on big questions [https://reducing-suffering.org/summary-beliefs-values-big-questions/] at one point. :-D
Hi Denis,

Thanks again for these questions. I'll share my answers in a few comments. This context and disclaimer [https://forum.effectivealtruism.org/posts/WJ3rBBkawoTJcY642/ask-rethink-priorities-anything-ama?commentId=uQx7HNJdx76x4aRPR] - including that I only started with Rethink a month ago - should be borne in mind.

1. Thinking vs reading

I don't think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should. I'm somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas. I feel like EA might have a bit too much of a tendency towards "think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it". It might be that, often, people could get to similar ideas faster and in a way that connects to existing work better (making it easier for others to find, build on, etc.) by doing some extra reading first.

Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I'm tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc. (On this general topic, I liked the post The Neglected Virtue of Scholarship [https://www.lesswrong.com/posts/64FdKLwmea8MCLWkE/the-neglected-virtue-of-scholarship].)
Less important personal ramble: I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that. But then I've repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it's such an easily checkable thing!) And I've also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output. So maybe that feeling that I'm spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I'd (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking "Is this how I'd treat a friend?" in response to negative self-talk [source with related ideas [https://self-compassion.org/exercise-1-treat-friend/]].)
10. “Obvious questions”

(Just my personal, current, non-expert [https://forum.effectivealtruism.org/posts/WJ3rBBkawoTJcY642/ask-rethink-priorities-anything-ama?commentId=uQx7HNJdx76x4aRPR] thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)

A summary of my recommendations in this vicinity:

1. If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions [https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions], and/or an overlapping 80,000 Hours post [https://80000hours.org/articles/research-questions-by-discipline/].
2. If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
   * One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven’t stated/emphasised.
   * One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning”, or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
   * I think this is a big part of what I’ve done this year.
   * Here’s [https://docs.google.com/document/d/10uudOQx19NCLrnrySfozRbOIDVpEQxeI-d8zpZi1qhw/edit] one example of a piece of my own work which came from roughly that sort of process.

I’ll add
I don't work at Rethink Priorities, but I couldn't resist jumping in with some thoughts, as I've been doing a lot of thinking on some of these questions recently.

Thinking vs. reading. I’ve been playing around with spending 15–60 min sketching out a quick model of what I think of something before starting in on the literature (by no means a consistent thing I do, though). I find it can be quite nice and help me ask the right questions early on.

Self-consciousness. Idk if this fits exactly, but when I started my research position I tried to have the mindset of, ‘I’ll be pretty bad at this for quite a while’. Then when I made mistakes I could just think, ‘right, as expected. Now let’s figure out how to not do that again’. Not sure how sustainable this is, but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way, and even at the highest levels academics still make mistakes (most papers have at least some flaws).

Optimal hours of work per day. I tend to work about 4–7 hours per day, including meetings and everything. Counting only mentally intensive tasks, I probably get around 4–5 a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (Rescuetime [https://blog.rescuetime.com/work-life-balance-study-2019/] says just ~3 hours per day average of productive work), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and being a hard worker. But still, it's hard to shake the feeling.

Learning a new field. I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what the answers pr
Dawn Drescher · 2y
Whee! Thank you too! Yeah, I think that perspective on self-consciousness is helpful! Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference. “Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: When talking to a lot of people, always also ask what those big questions and proposed answers are. More nonobvious obvious advice! :-D I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, but that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful! I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga [https://www.vaga.org/health/performance-lab-whole-food-multi-review/] and are a bit cheaper than the Athletic Greens. “Default mode network”: Interesting! I didn’t know about that.
Hi Denis, thanks for these questions. I'll give my answers to a bunch of them tomorrow. Just jumping in early with a clarifying question: Could you explain what you mean by "Survival vs. exploratory mindset", and/or provide a link that explains that distinction? I haven't heard those terms before, and Google didn't immediately show me anything that looked relevant. (Is it perhaps related to exploring vs exploiting?)
Dawn Drescher · 2y
Hi Michael! Huh, true, those terms seem to be vastly less commonly used than I had thought. By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc. By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc. Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.
This distinction reminds me of the "survival values vs self-expression values" dimension of the World Values Survey. I'm a bit rusty on those terms, but from skimming a Wikipedia page [https://en.wikipedia.org/wiki/Inglehart%E2%80%93Welzel_cultural_map_of_the_world], I think the "survival" part lines up decently with what you describe as "survival mindset", but the self-expression part might not line up well with "exploratory mindset".

As for your question: I haven't thought in terms of survival vs exploratory mindset before, so I don't think I have a strong view on which is more useful for research (or the situations in which this differs), how often I adopt each mindset, or how I cultivate them. I'd tentatively guess the exploratory mindset tends to be more useful and tends to be what I have, but I'm not sure.

I think parts of Rationality: From AI to Zombies (aka "the sequences") and Harry Potter and the Methods of Rationality have quite useful advice - and a way of making it stick psychologically - that feels somewhat relevant here. E.g., the repeated emphasis and elaboration on "that which can be destroyed by the truth should be". I have a sense that someone who's struggling to adopt useful facets of the exploratory mindset might benefit from reading (or re-skimming) one or both of those things.
Dawn Drescher · 2y
Yeah, I agree about how well or not well those concepts line up. But I think insofar as I still struggle with probably disproportionate survival mindset, it’s about questions of being accepted socially and surviving financially rather than anything linked to beliefs (maybe indirectly in a few edge cases, but that feels almost irrelevant). If this is not just my problem, it could mean that a universal basic income could unlock more genius researchers. :-)
11. Tiredness, focus, etc.

I find that being tired makes my mind wander a lot when reading longform things (e.g., papers, posts, not things like Slack messages or emails), so when I'm tired I usually try to do things other than reading. If I'm just a bit or moderately tired, I usually find I'm still about as able to write as normal. If I'm very tired, I'll still often be able to write quickly, but then when I later read what I wrote I'll feel that it was unclear, poorly structured, and more typo-strewn than usual. So when very tired, I try to avoid writing longform things (e.g., actual research outputs).

Things I find I'm still pretty able to do when tired include commenting on documents people want input on (I think I'm more able to focus on this than on regular reading because it's more "interactive" or something), writing things like EA Forum comments, replying to emails and Slack messages and the like, doing miscellaneous admin-y tasks, and reflecting on the last week/month and planning the next. So I often do a disproportionate amount of such tasks during evenings or during days when I'm more tired than normal, and at other times do a disproportionate amount of reading and "substantive" writing.

Also, I'm fortunate enough to have flexible hours. So sometimes I just work less on days when I'm tired (perhaps spending more time with my wife), and then make up for it on other days.
2 and 3. Self-consciousness and Is there something interesting here?

These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers. I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info. Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others.

So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself. See also When to focus and when to re-evaluate [https://forum.effectivealtruism.org/posts/e2heLEnbeqTHaJqBf/when-to-focus-and-when-to-re-evaluate].

Things that help me with this, and/or some scattered related thoughts, include:
* Talking to others and getting feedback, including on early-stage ideas
  * I liked David and Jason’s remarks on this in their comments
* A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me - something like:
  * First getting verbal feedback from a couple people on a messy, verbal description of an idea
  * Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
  * Then polishing and
Examples to somewhat illustrate the last two points: This year, in some so-far-unpublished work, I wrote about some ideas that:
* I initially wasn’t confident about the importance of
* Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn't important or (b) has been in essence discussed in some other form or place that I just am not familiar with.

So when I had the initial forms of these ideas and wasn't sure how much time (if any) to spend on them, I took roughly the following approach: I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc. In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This - combined with my independent impression that these ideas might be somewhat important and novel - seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa). So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.

Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
* Explain to others why they shouldn’t bother exploring the same thi
Dawn Drescher · 2y
Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post [https://impartial-priorities.org/the-bulk-of-the-impact-iceberg.html] a while back. :-D It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA). To sort-of restate your points: * I think it's common for people to not publish explorations that turned out to seem to "not reveal anything important" (except of course that this direction of exploration might be worth skipping). * Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate. * I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency [https://www.openphilanthropy.org/reasoning-transparency], which could make people rule this out too much/too early. * Again, there can be valid reasons for this (if you're sufficiently confident that it's worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Dawn Drescher · 2y
Part of this reminds me a lot of CFAR’s approach [https://www.rationality.org/resources/updates/2015/qa-isnt-self-deception-sometimes-productive] here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both).

Your approach to gathering feedback and iterating on the output, making it more and more refined with every iteration but also deciding whether it’s worth another iteration - that process sounds great! I think a lot of people aim for such a process, or want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers because they worry the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or think badly of them for a particular wording, or think badly of them because they think you should’ve anticipated a negative effect of writing about the topic and not done so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to you), or just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
I like that "Worker-me versus CEO-me" framing, and hadn't heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment. I share the view that it'll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate! I imagine it'd be hard (though not impossible) to generate advice on this that's quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
Regarding your Typing speed question, Tom Chivers (a journalist) was asked in a recent EA Forum AMA "How one should go about learning how to write high-quality material? And what is the way to get it published?" [https://forum.effectivealtruism.org/posts/Q6LY8Tmw6BFuat2BX/ama-tom-chivers-science-writer-science-editor-at-unherd?commentId=8HZreC8BgQ28F2cCq] His reply:
Dawn Drescher · 2y
Heh, great find! :-D
7. Hard problems

I’m not actually sure if the precise problem you’re describing resonates with me. I definitely often feel very uncertain about:
* whether the goal I’m striving towards really matters at all
* even if so, whether it’s a goal worth prioritising
* whether I should prioritise it (is it my comparative advantage?)
* whether anything I produce in pursuing this goal will be of any use to anyone

But I’m not sure there have been cases where, for a week or more, I didn’t feel like I was at least progressing towards:
* having the sort of output I had planned or now planned to produce (setting aside the question of whether that output will be useful to anyone), and/or
* deciding (for good reason) to not bother trying to create that sort of output

Note that I’d count as “progress” cases where I explored some solutions/options that I thought might work/be useful for X, and all turned out to be miserable wastes of time, so I can at least rule those out and try something else next week. I'd also count cases where I learned other potentially useful things in the process of pursuing dead ends, and that knowledge seems likely to somehow benefit this or other projects.

It is often the case that my estimate of how many remaining days something will take is longer at the end of the week than it was at the beginning of the week. But this is usually coupled with me thinking that I have made some sort of progress - I just also realised that some parts will be harder than I thought, or that I should do a more thorough job than I’d planned, or something like that. (But I feel like maybe I'm just interpreting your question differently to what you intended.)
Dawn Drescher · 2y
In a private conversation we figured out that I may tend too much toward setting specific goals and then only counting achievement of these goals as success, ignoring all the little things that I learn along the way. If the goal is hard to achieve, I have to learn a lot of little things on the way, and that takes time, but if I don’t count these little things as little successes, my feedback gets too sparse and I lose motivation. So noticing little successes seems valuable.
8. Emotional motivators

(Disclaimer: I'm just reporting on my own experience, and think people will vary a lot in this sort of area, so none of the following is even slightly a recommendation.)

In general:
* Personally, I seem to just find it pretty natural to spend a lot of hours per week doing work-ish things
* I tend to be naturally driven to “work hard” (without it necessarily feeling much like working) by intellectual curiosity, by a desire to produce things I’m proud of, and by a desire for positive attention (especially but not only from people whose judgement I particularly respect)
  * That third desire in particular can definitely become a problem, but I try to keep a close eye on it and ensure that I’m channeling that desire towards actions I actually endorse on reflection
* I do get run down sometimes, and sometimes this has to do with too many hours per week for too many weeks in a row. But the things that seem more liable to run me down are feeling that I lack sufficient autonomy in what I do, how, and when; or feeling that what I’m doing isn’t valuable; or feeling that I’m not developing skills and knowledge I’ll use in future
  * That last point means that one type of case in which I do struggle to be motivated is cases where I know I’m going to switch away from a broad area after finishing some project, and that I’m unlikely to use the skills involved in that project again.
  * In these cases, even if I know that finishing that project to a high standard would still be valuable and is worth spending time on, it can be hard for me to be internally motivated to do so, because it no longer feels like doing so would “level me up” in ways I care about.
* I seem to often become intensely focused on a general area in an ongoing way (until something switches my focus to another area), and just continually think about it
Dawn Drescher · 2y
Awesome! For me, the size of an area plays a role in how long I have a high level of motivation for it. When you’re studying a board game, there are a few activities, they are quite similar, and if you try out all of them it might be that you run out of motivation within a year. This happened to me with Othello. But computer science or EA are so wide that if you lose motivation for some subfield of decision theory, you move on to another subfield of decision theory, or to something else entirely, like history. And there are probably a lot of such subareas where there are potentially impactful investigations waiting to be done. So it makes sense to me to be optimistic about having long sustained motivation for such a big field. My motivation did shift a few times, though. I think before 2012 it was more a “This is probably hopeless, but I have to at least try on the off-chance that I’m in a world where it’s not hopeless.” In 2012–2014 it was more “Someone has to do it and no one else will.” After March 28, 2014, it was carried a lot by the sudden enormous amount of hope I got from EA. On October 28, 2015, I suddenly lost an overpowering feeling of urgency and became able to consider more long-term strategies than a decade or two. Even later, I became increasingly concerned with coordination and risk from regression to the (lower) mean.
9. Typing speed

I'd be surprised if typing speed was a big factor explaining differences in how much different researchers produce, or in their ability to produce certain types of output. (But of course, that claim is pretty vague - how surprised would I be? What do I mean by "big factor"?) But I just did a typing test [https://www.typingtest.com/], and got 92 wpm (with "medium" words, and 1 typo), which is apparently high. So perhaps I'm just taking that for granted and not recognising how a slower typing speed could've limited me. Hard to say.
6. Learning a new field

I don’t know if I have a great, well-chosen, or transferable method here, so I think people should pay more attention to my colleagues’ answers than mine. But FWIW, I tend to do a mixture of:
* reading Wikipedia articles
* reading journal article abstracts
* reading a small set of journal articles more thoroughly
* listening to podcasts
* listening to audiobooks
* watching videos (e.g., a Yale lecture series on game theory)
* talking to people who are already at least sort-of in my network (usually more to get a sounding board or “generalist feedback”, rather than to leverage specific expertise of theirs)

I’ve also occasionally used free online courses, e.g. the Udacity Intro to AI course. (See also What are some good online courses relevant to EA? [https://forum.effectivealtruism.org/posts/u2DM4Xfbj2CwhWcC5/what-are-some-good-online-courses-relevant-to-ea])

Whether I take many notes depends on whether I'm just learning about a field because I think it might be useful in some way in future for me to know about that field, or because I have at least a vague idea of a project I might work on within that field (e.g., "how bad would various possible types of nuclear wars be, from a longtermist perspective?"). In the latter case, I'll take a lot of notes as I go in Roam, beginning to structure things into relevant sub-questions, things to learn more about, etc.

Since leaving university, I haven’t really made much use of textbooks, flashcards, or reaching out to experts who aren’t already in my network. It's not that I actively chose not to make much use of these things (it’s just that I never actively chose to make much use of them), and I think it’s plausible that I should make more use of them. I’ll very likely talk to a bunch of experts for my current or upcoming research projects.
Adam Binks · 2y
These are fascinating, I would love to see answers to all of these questions!
Dawn Drescher · 2y
Wow! Thanks for all the insightful answers, everyone! Would anyone mind if I transfer these into a post on my blog (or a separate post in the EA Forum) that is linear in the sense that there is one question and then all answers to it, then the next question and all answers to it, and so on? That may also generate more attention for these answers. :-)
Yeah, this would be nice to have! It's a lot of text to digest as it is now, and I guess most people won't see it here going forward.
Sure, in general feel free to assume that anything I write that's open to the public internet is fair game.
Yeah, same for me.
Jason Schukraft · 2y
That's fine by me!
I think it would be valuable to publish these as a sequence of questions on the forum and let others chime in and have a more thorough discussion. Perhaps even separated through time, say one or two per week.
  • If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
  • What new charities do you want to be created by EAs?
  • What are the biggest mistakes Rethink Priorities made?

Thank you!

What are the biggest mistakes Rethink Priorities made?

I can’t speak for the entire organization, but I can talk about what I see as my biggest mistakes since I started working at Rethink Priorities:

  1. Writing articles about interventions I think are promising and thinking that my work is done once the article is published. Examples are baitfish (see the comment above), fish stocking, and rodents farmed for pet snake food. The way I see things now, if I think that something should be done, I should express that opinion very clearly and with fewer caveats, find funders who want to fund it, find activists who want to do it, and connect them. Or something like that. And that is the kind of work I am doing at the moment, even though I think I am much better at writing articles than at doing this.
  2. Avoiding expressing opinions too much. It’s related to the point above. I think that in the past I was too afraid of writing something that could later turn out to be wrong. Hence, I wrote articles in such a way that sometimes the reader could not even know what I think about a problem I am writing about, how important I think it is in the context of other things, etc. I wanted decision makers to
…

Saulius, just wanted to comment that while I haven't devoted the time to read in detail most of your research, I have noticed and greatly appreciated that you have contributed a LOT of useful knowledge to EAA over the past several years. Yours is a name I've recognized in EAA since its early days. I am glad that you're shifting to express your opinions more strongly so that more action can be taken on all of the wonderful research you've contributed. I've gotten the sense that you take these issues very seriously, are super motivated to address them, and don't get pulled into more trivial things, and I greatly admire and am inspired by you for that.

Re (6), I hope that you can be proud of what you've done and decrease your negative self-talk. Take care of yourself. I'd be curious to hear if meditation ends up helping out with this.

I found this response insightful and feel like it echoes mistakes I've made as well; I really appreciate you writing it.

What new charities do you want to be created by EAs?

For me it's a lobbying organization against baitfish farming in the U.S. I wrote about the topic two years ago here. Many people complimented me on it but no one did anything. I talked with some funders who said they would be interested in funding someone suitable pursuing this, but I haven’t found who that could be. The main argument against it used to be that the industry is declining. But the recently released aquaculture census suggests that it is no longer declining (see my more recent thoughts on the numbers here).

Using fish as live bait is already prohibited in some U.S. states (see the map in Kerr (2012)). Many other states have import and movement restrictions (see this table). It seems that all of this happened due to environmental concerns. And the practice is banned in multiple other countries. To me this shows that it is plausible to make progress on this.

Take a look at this graph I made of the number of animals farmed in the U.S. at any time.


I used yellow and black colours to represent ranges. So for example, I think that there are between 1 billion and (5+1=) 6 billion baitfish farmed in the U.S. at any time. It’s mor…

Maybe Aquatic Life Institute or Fish Welfare Initiative would work on this. I'm not sure if they're already aware. I think it would be closer to ALI's work.
Thanks for the suggestions, Michael. Haven from FWI is actually helping me do research on this in his free time. He said that FWI would be open to putting someone who would work on this under their organization if given funding, but not to redirecting the time of current staff towards the project. This makes sense because they want to continue the work that they have started, they are not experts on lobbying, and I think few if any of them are located in the U.S. I haven’t talked about this with ALI yet (you are right, I should), but from what I hear, I think that they also don’t have expertise in U.S. lobbying, are mostly not located in the U.S., and would probably not want to redirect current staff time to new projects. I don’t know how important previous lobbying experience is here, but my sense is that it is. I feel that what is needed is a person (or two) who would be suitable to lead this, and then we could figure out all the organizational and funding stuff.

Hi Saulius, thank you for your comment! To add some more context, ALI is based in New York, but we indeed have a global team. I'm very glad you're bringing up baitfish. Our focus for 2020 was the creation of the Aquatic Animal Alliance, the drafting of our coalition welfare standards, and the launch of our certifier campaign. We've made great progress on all of them, and actually already had our first victory with GlobalGAP (which certifies more than 1% of the global aquaculture market). For next year, we plan on continuing our certifier campaign but also want to pursue 2 additional campaigns through the Alliance: lobbying and a fish restocking campaign. On the lobbying front, we've already been active in France and plan to do more work there and at the EU level. Regarding fish restocking, we plan on starting to work with US state departments of Fish and Wildlife to get them to adopt some or all of our welfare standards. We have already contacted vets who work at these agencies; and through the producer sentiment roundtables we organized in the fall, we have already found fish restocking producers who are also open to working with us. …

Thank you very much, William, for your comment! I will follow up with you in private, but there are a few things that I thought would be suitable to say/ask here as well. It was very recently brought to my attention that baitfish seems to also be farmed in France and that there is an animal advocacy organization that has a petition on it (see here [https://translate.google.com/translate?sl=auto&tl=en&u=https://zoopolis.fr/petition-pour-que-decathlon-cesse-de-vendre-des-poissons/?fbclid%3DIwAR0hxaOGiW_3sqJirn9gpUyKDm1e3aV-nzMNRtjW_vGEo8hMo5DHJAE_rVs] and here [https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fwww.fondation-droit-animal.org%2F105-peche-au-vif-vivement-la-fin%2F]). I don’t know the scale of baitfish farming in France or in any country other than the U.S., so I don’t yet know if it is an issue I would recommend tackling in France. I just thought I should mention it in case you or someone else could be interested in doing some lobbying on this issue there. Also, at Rethink Priorities we try to track any possible impact we had on the projects of animal welfare organizations. So I wanted to ask: do you think you would have worked on fish restocking if this article had never been written? And please don’t hesitate to say that you knew about the industry and its size independently of that article and it had nothing to do with it, if that is the case :)
Thanks Saulius, it actually so happens that the organization running the baitfish petition in France, Paris Animaux Zoopolis, was founded by Amandine Sanvisens... who is also the director of ALI in France! But, and that goes to your next point, we were not aware of the relative scale of baitfish farming; so if we do end up prioritizing it over another intervention, the credit for the additional impact of doing that campaign over the one we would have done otherwise would go to you and RP! Would love to chat more and we'll keep you updated.
Good to know. I've talked to Gautier who wrote the French article I linked to, and he said he had already tried to figure out the scale of the industry in France, but didn't manage to find stats on it. However, he said that there are indications that it is a small industry compared to the U.S. He said there was work on it mostly due to legal precedent reasons rather than direct impact.

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

Unlike organizations like OPIS, the Center for Reducing Suffering, and the Center on Long-Term Risk, we don't have reducing extreme suffering as our only priority. We sometimes work on reducing suffering that may not be classified as extreme (arguably, our work on cage-free hen campaigns falls into this category). And perhaps some other work is not directly about reducing suffering at all. Since preventing extreme suffering is not our only priority, I think that we are unlikely to be the best donation opportunity for this specific goal. That said, when I look at the list of our publications, I think that almost all the articles we write contribute to the goal of preventing needless and extreme suffering in some way, although in many cases quite indirectly. In the end, we are not able to compare in an unbiased way whether Rethink Priorities is a better donation opportunity for this purpose than other organizations.

Thanks for the questions!

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

I think this depends on many of the factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farmed and wild animals are plausible candidates for enduring some of the worst suffering.

Specifically, I'd say some of the worst persistent current suffering is plausibly in farmed chickens and fish, and thus work to reduce the worst aspects of those conditions is a decent bet to prevent extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently, because of their sheer numbers and the nature of life largely without interventions to prevent, say, the suffering of starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals could plausibly be a good investment.

Still restricted to the present, and outside ... (read more)

I don't have any strong opinions about this and it would likely take months of work to develop them. In general, I don't know enough to suggest that it is more desirable for new charities to work in areas I think could use more work than for existing organizations to scale up their work in those domains.

Not doing enough early enough to figure out how to achieve impact from our work and to communicate with other organizations and funders about how we can work together.
I like the answers Marcus and Saulius gave to this question. I'll just add two things those answers didn't explicitly mention.

EA movement-building

* Rethink has done and plans to do work aimed at improving efforts to build the EA movement and promote EA ideas
  * E.g., Rethink's work on the EA Survey, or its plans related to [https://forum.effectivealtruism.org/posts/33AnPajNYmNrdXQbj/rethink-priorities-2020-impact-and-2021-strategy#Project_Plans]:
    * "Further refining messaging for the EA movement, exploring different ways of talking about EA to improve EA recruitment and increase diversity.
    * Further work to explore better ways to talk about longtermism to the general public, to help EAs communicate longtermism more persuasively and to increase support for desired longtermist policies in the US and the UK."
* And building the EA movement and promoting EA ideas seems like plausibly one of the best interventions for reducing needless/extreme/all suffering
  * E.g., building the EA movement could increase the flows of talent and funds to existing suffering-focused [https://longtermrisk.org/the-case-for-suffering-focused-ethics/] EA organisations (such as CLR), lead to the creation of new ones, or lead to talented people using their careers to effectively reduce suffering in other ways (e.g., through specific roles in government or AI labs)
  * E.g., promoting EA ideas (even without "building the EA movement") could lead to a general shift in voting, policies, and behaviours towards reducing suffering

Forecasting

* Rethink plans to [https://forum.effectivealtruism.org/posts/33AnPajNYmNrdXQbj/rethink-priorities-2020-impact-and-2021-strategy#Project_Plans] "Use novel econometric methods to better understand our ability to reliably impact the long-term future", as well as to "Improve our ability to forecast the

What are the things you look for when hiring? What are some skills/experiences that you wish more EA applicants had? What separates the "top 5-10%" of EA applicants from the median applicant?

Thanks for the question!

We hire for fairly specific roles, and the difference between those we do and don't hire isn't necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).

That said, we generally prioritize writing ability, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic; to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in some roles than others); and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.

For these reasons, it's difficult to say with precision which skills I'd hope for more of among EA researchers. With those caveats, I'd still say a demonstration of these skills through producing high-quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.

What would you do if Rethink Priorities had significantly more money? (Eg, 2x or 10x your current budget)

Hi Neel,

We'd obviously be very excited to take 10x our budget if you're offering ;)

Right now, 10x our budget would be ~$14M, which would still be 8x smaller than large think tanks like the Brookings Institution. I think if we had 10x the budget, the main thing we would do is expand our research staff as rapidly as non-financial constraints (e.g., management, operations, and team culture) allow.

There are definitely many more areas of research we could be working in, both within our existing cause areas (currently farmed animal welfare, wild animal welfare, invertebrate welfare, longtermism, and EA movement building) and other cause areas we aren't working in yet. We'd also need more operations staff and management to facilitate this.

As for specific research questions, I think we have a much clearer vision of what we would do with 2x the money than 10x the money. I personally (speaking for myself not the rest of the org) would love to see us hire staff to work more directly on farmed animal welfare policy and to investigate meat alternatives, do much more to understand EA community health and movement building, do more fundamental research (e.g., like our work on moral weight and inv... (read more)

You mentioned in your 2021 update that you're starting a research internship program next year (contingent on more funding) in order to identify and train talented researchers, and therefore contribute to EA-aligned research efforts (including your own). 

Besides offering similar internships, what do you think other EA orgs could do to contribute to these goals? What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?

Hi Arushi,

I am very hopeful the internship program will let us identify, take on, and train many more staff than we could otherwise and then either hire them directly or be able to recommend them to other organizations.

While I am wary of recommending unpaid labor (that's why our internship is paid), I otherwise think one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I've seen a lot of great hires distinguish themselves like this.

Other than opening more researcher jobs and internships, I think other EA orgs could perhaps contribute by writing advice and guides about research processes or by offering more "behind the scenes" content on how different research is done.

Lastly, in my personal opinion, I think we should also do more to create an EA culture where people don't feel like the only way they can contribute is as a researcher. I think the role gets a lot more glamor than it deserves and many people can contribute a lot from earning to give, working in academia, working in politics, working in a non-EA think tank, etc.

I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I'm very excited to see how our internship program develops as I really enjoy mentoring.

I think I was competitive for the RP job because of my T-shaped skills: broad knowledge of lots of EA-related things, but also specialised knowledge in a specific useful area (economics, in my case). Michael Aird probably has the most to say about developing broad knowledge given how much EA content he has consumed in the last couple of years, but in general, reading things on the Forum and actively discussing them with other people (perhaps in a reading group) seems to be the way to develop in this area. Developing specialised skills obviously depends a lot on the skill, but graduate education and relevant internships are the most obvious routes here.

I already strongly agreed with your first paragraph in a separate answer, so I'll just jump in here to strongly agree with the second one too! I can confirm that I've been gobbling up EA content rather obsessively for the last 2 years. If anyone's interested in what this involved and how many hours I spent on it, I describe that here [https://forum.effectivealtruism.org/posts/2eEn8BHW69StPgtCi/how-have-you-become-more-or-less-engaged-with-ea-in-the-last?commentId=v5jwtvF29XoQyHzTw] .

What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?

There are some relevant answers in here and here.

I think this is a relatively minor thing, but trying to become close to perfectly calibrated (i.e., being able to put precise numbers on your uncertainty) in some domains seems like a moderate-sized win at very low cost. I mainly believe this because I think the costs are relatively low. My best guess is that the majority of EAs can become close to perfectly calibrated on numerical trivia questions in much less than 10 hours of deliberate practice, and my median guess for the amount of time needed is around 2 hours (e.g., practice here [https://www.openphilanthropy.org/blog/new-web-app-calibration-training]).

I want to be careful with my claims here. I think sometimes people have the impression that getting calibrated is synonymous with rationality, or intelligence, or judgement. I think this is wrong:

1. Concretely, I just don't think being perfectly calibrated is that big a deal. My guess is that going from median-EA levels of general calibration to perfect calibration on trivia questions improves good research/thinking by 0.2%-1%. I will be surprised if somebody becomes a better researcher by 5% via these exercises, and very surprised if they improve by 30%.
2. In forecasting/modeling, the main quantifiable metrics include both [https://www.researchgate.net/post/Calibration-vs-discrimination-which-one-more-important-in-a-prediction-model] a) calibration (roughly speaking, being able to quantify your uncertainty) and b) discrimination (roughly speaking, how often you're right). In the vast majority of cases, calibration is just much less important than discrimination.
3. There are issues with generalizing from good calibration on trivia questions to good calibration overall. The latter is likely to be much harder to train precisely, or even to quantify precisely (though I'm reasonably confident that going from poor calibration on trivia to perfect calibration should generalize somewhat)...
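To make the calibration-vs-discrimination distinction concrete, here's a minimal sketch of the standard Murphy decomposition of the Brier score, which splits forecast quality into a reliability term (calibration) and a resolution term (discrimination). The function name and binning choices are my own illustration, not something from the linked discussion:

```python
# Murphy decomposition of the Brier score:
#   Brier = reliability - resolution + uncertainty
# reliability (lower = better calibrated), resolution (higher = more discriminating).

def brier_decomposition(probs, outcomes, n_bins=10):
    """probs: forecast probabilities in [0, 1]; outcomes: 0/1 results."""
    n = len(probs)
    base_rate = sum(outcomes) / n
    reliability = 0.0
    resolution = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Group forecasts into probability bins (top bin includes 1.0).
        idx = [i for i in range(n)
               if lo <= probs[i] < hi or (b == n_bins - 1 and probs[i] == 1.0)]
        if not idx:
            continue
        avg_p = sum(probs[i] for i in idx) / len(idx)   # mean forecast in bin
        freq = sum(outcomes[i] for i in idx) / len(idx)  # observed frequency
        weight = len(idx) / n
        reliability += weight * (avg_p - freq) ** 2
        resolution += weight * (freq - base_rate) ** 2
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

# A forecaster who always predicts the base rate is perfectly calibrated
# (reliability = 0) but has zero discrimination (resolution = 0).
rel, res, unc = brier_decomposition([0.5] * 8, [1, 0, 1, 0, 1, 0, 1, 0])
```

The point of the decomposition is that you can drive reliability to zero (perfect calibration) while contributing nothing to resolution, which is why calibration practice alone only gets you so far as a forecaster.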
Misc thoughts on "What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?"

There was some relevant discussion here [https://forum.effectivealtruism.org/posts/bG9ZNvSmveNwryx8b/ama-owen-cotton-barratt-rsp-director?commentId=bXkfWkMdzm9yeuowE]. Ideas mentioned there include:

* getting mentorship outside of EA orgs (either before switching into EA orgs after a few years, or as part of a career that remains outside of explicitly EA orgs longer-term)
* working as a research assistant for a senior researcher

I think the post SHOW: A framework for shaping your talent for direct work [https://forum.effectivealtruism.org/posts/EP6X362Q3ziibA99e/show-a-framework-for-shaping-your-talent-for-direct-work] is also relevant.
Hi Arushi,

Good questions! I'll split some thoughts into a few separate comments for readability.

Writing on the Forum

I second Peter's statement that (Though in some cases it might make sense to publish the post to LessWrong instead or in addition.)

This statement definitely seems true in my own case (though I imagine for some people other approaches would be more effective): I got an offer for an EA research job before I began writing for the EA Forum. But I was very much lacking the actual background/credentials the org said they were looking for, so I'm almost certain I wouldn't have gotten that offer if the application process hadn't included a work test that let me show I was a good fit despite that lack of relevant background/credentials. (I was also lucky that the org let me do the work test rather than screening me out before that.) And the work test was basically "Write an EA Forum post on [specific topic]", and what I wrote for it did indeed end up as one of my first EA Forum/LessWrong posts.

And then this year I've gotten offers from ~35% of what I've applied to, as compared to ~7% last year, and I'd guess that the biggest factors in the difference were:

1. I now had an EA research role on my CV, signalling I might be a fit for other such roles
2. Going from 1 FTE non-EA stuff (teaching) in 2019 to only ~0.3 FTE non-EA stuff (a grantwriting role I did for a climate change company on the side of my ~0.7 FTE EA work till around August) allowed me a lot of time to build relevant skills and knowledge
3. In 2020 I wrote a bunch of (mostly decently/well received [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=TwGkLdy9wcuGgWnGh]) EA Forum or LessWrong posts, helping to signal my skills and knowledge, and also just "get my name out there"
   * "Getting my name out there" was not part of my original goal, but did end up happening, and to quite a surprising degree.
My own story & a disclaimer

(This is more of a tangent than an answer, but might help provide some context for my other responses here and elsewhere in this AMA. Feel free to ignore it, though!)

I learned about EA in late 2018, and didn't have much relevant expertise, experience, or credentials. I'd done a research-focused Honours year and published a paper, but that was in an area of psychology that's not especially relevant to the sort of work that, after learning about EA, I figured I should aim towards. (More on my psych background here [https://forum.effectivealtruism.org/posts/pYHZ8dhZWPCSZ66dX/i-knew-a-bit-about-misinformation-and-fact-checking-in-2017].) I was also in the midst of the 2 year Teach For Australia program, which involves teaching at a high school, and also wasn't relevant to my new EA-aligned plans.

Starting then and continuing through to mid 2020 ish, I made an active effort to "get up to speed" on EA ideas, as described here [https://forum.effectivealtruism.org/posts/2eEn8BHW69StPgtCi/how-have-you-become-more-or-less-engaged-with-ea-in-the-last?commentId=v5jwtvF29XoQyHzTw].

In 2019, I applied for ~30 EA-aligned roles, mostly research-ish roles at EA orgs (though also some non-research roles or roles at non-EA orgs). I ultimately got two offers, one for an operations role at an EA org and one for a research role. I think I had relevant skills but didn't have clear signals of this (e.g., more relevant work experience or academic credentials), so I was often rejected at the CV screening stage but often did ok if I was allowed through to work tests and interviews. And both of the offers I got were preceded by work tests.

Then in 2020, I wrote a lot of posts on the EA Forum and a decent number on LessWrong, partly for my research job and partly "independently". I also applied for ~11 roles this year (mostly research roles, and I think all at EA orgs), and ultimately received 4 offers (all research roles at EA orgs). So that success rate was
Hey Michael, This is a tangent to your tangent, but are you still based in Australia? If so, how do you find Rethink's remote by default set up with the time difference? For context, I considered applying for the same role, but ultimately didn't because at the time I was stuck working from Australia with all my colleagues in GMT+0 timezone (thanks covid), and the combination of daytime isolation/late night meetings were making me pretty miserable. Is Rethink better at managing these issues? Cheers!
Peter Wildeford:
Just want to say that Rethink Priorities is committed to being able to successfully integrate remote Australians and we'd be excited to have more APAC applicants in our future hiring rounds!
Hey Harriet,

Good question. And sorry to hear you had that miserable situation - hope things are better for you now!

First, I should note that I’m in Western Australia, so things would presumably be somewhat different for people in the Eastern states. Also, of course, different people’s needs, work styles, etc. differ.

I’ve been meeting with US people in my mornings, which is working well because I wake up around 7am and start working around 8, while the people I’m meeting with are more night-owl-ish. And I’ve been meeting with people in the UK/Europe in my evenings (around 5-9pm), which I’m also fine with.

Though it is tricky to get all 3 sets of time zones in the same meeting. Usually one of us has to be up early or late. But so far those sorts of group meetings have just been something like once a fortnight, so it’s been tolerable.

And other than meetings, time zones aren’t seeming to really matter for my job; most of my work and most of my communication with colleagues (via Slack, Google Doc comments, email, etc.) doesn’t require being up at the same time as someone else. (I imagine that, in general, this is true for many research roles and less true for e.g. operations roles.)

Though again, I’ve only been at Rethink for a month so far. And I’m planning to move to Oxford in March. If I was in Australia permanently, perhaps time zone issues for team meetings would become more annoying.

Btw, I also worked for Convergence Analysis (based in UK/Europe) from March to ~August from Australia. That was even easier, because there were never three quite different time zones to deal with (no US employees).
Thanks for the detailed answer - this actually sounds pretty doable!
Research training programs, and similar things

(You said "Besides offering similar internships". But I'm pretty excited about other orgs running similar internships, and/or running programs that are vaguely similar and address basically the same issues but aren't "internships". So I'll say a bit about that cluster of stuff, with apologies for sort-of ignoring instructions!)

David wrote: I second all of that, except swapping GPI's Early Career Conference Programme (which I haven't taken part in) for the Center on Long-Term Risk's Summer Research Fellowship. I did that fellowship with CLR from mid August to mid November, and found it very enjoyable and useful.

I recently made a tag [https://forum.effectivealtruism.org/tag/research-training-programs] for posts relevant to what I called "research training programs". By this I mean things like FHI and CLR's Summer Research Fellowships, Rethink Priorities' planned internship program, CEA's former Summer Research Fellowship, probably GPI's Early Career Conference Programme, probably FHI's Research Scholars Program, maybe the Open Phil AI Fellowship, and maybe ALLFED's volunteer program [https://forum.effectivealtruism.org/posts/KS3eA4SK45vcir7Gg/case-study-volunteer-research-and-management-at-allfed]. Readers interested in such programs might want to have a look at the posts with that tag.

I think that these programs might be one of the best ways to address some of the main bottlenecks in EA, or at least in longtermism (I've thought less about areas of EA other than longtermism). What I mean is related to the claim that EA is vetting-constrained [https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained], and to Ben Todd's claim [https://80000hours.org/podcast/episodes/ben-todd-on-what-effective-altruism-most-needs/] that some of EA's main bottlenecks at the moment are "organizational capacity, infrastructure, and management to help train people up". There was also some related discussion

Conditional on invertebrates being sentient, I would upgrade my probability of other things being sentient. So maybe bivalves are sentient, some existing robots, maybe even plants. I would take the case for hidden qualia in humans seriously as well. Do you agree, and if so, would this have any impact on good policies to pursue?

Hi Roger,

There are different possible scenarios in which invertebrates turn out to be sentient. It might be the case, for instance, that panpsychism is true. So if one comes to believe that invertebrates are sentient because panpsychism is true, one should also come to believe that robots and plants are sentient. Or it could be that some form of integrated information theory is true, and invertebrates instantiate enough integration for sentience. In that case, the probability that you assign to the sentience of plants and robots will depend on your assessment of their relevant level of integration.

For what it's worth, here's how I think about the issue: sentience, like other biological properties, has an evolutionary function. I take it as a datum that mammals are sentient. If we can discern the role that sentience is playing in mammals, and it appears there is analogous behavior in other taxa, then, in the absence of defeaters, we are licensed to infer that individuals of those taxa are sentient. In the past few years I've updated toward thinking that arthropods and (coleoid) cephalopods are sentient, but the majority of these updates have been based on learning new empirical inf... (read more)

Thank you. That is rather different from my view of sentience in some ways, I appreciate the clarification.

Regarding the following research areas for 2021:

Further refining messaging for the EA movement, exploring different ways of talking about EA to improve EA recruitment and increase diversity.

Further work to explore better ways to talk about longtermism to the general public, to help EAs communicate longtermism more persuasively and to increase support for desired longtermist policies in the US and the UK.

  • What kind of research do you plan on doing to answer these questions?
  • Did you consider other areas of EA movement building apart from messaging before choosing this one, and if so how did you narrow down your options?
  • Do you see general EA messaging as part of your longtermist focus, or is this a separate category? Either way, how do you figure out how to allocate resources to these movement building-related efforts?

Hi Vaidehi,

What kind of research do you plan on doing to answer these questions?

I will be working on both of these projects with David Moss. Our plan is to run surveys of the general public that describe EA (or longtermism) and ask questions to gauge how people view the message. We'd then experimentally change the message to explore how different framings change support, with the idea that messages that engender more support on the survey are likely to be more successful overall. For EA messaging, we'd furthermore look at support broken down by demographics to see if there are more inclusive messages out there. We did a similar project for animal welfare messaging on live-shackle slaughter, which you can look at to get a sense of what we do. We also have a lot of unpublished animal welfare messaging work we're eager to get out there as soon as we can.


Did you consider other areas of EA movement building apart from messaging before choosing this one, and if so how did you narrow down your options?

As you know, we do run the EA Survey and Local Groups Survey. Right now, our main goal is to stay within analysis of EA movement building rather than work to directly build... (read more)

If you had to choose just three long-termist efforts as the highest expected value, which would you pick and why?

(Speaking for myself and not others on the team, etc)

At a very high level, I think I have mostly "mainstream longtermist EA" views here, and my current best guess would be that AI safety, existential biosecurity, and cause prioritization (broadly construed) are the highest-EV efforts to work on overall, object-level.

This does not necessarily mean that marginal progress on these things is the best use of additional resources, or that they are the most cost-effective efforts to work on, of course.

Peter Wildeford:
This is not a satisfying answer but right now I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize. I also think we should spend a lot more resources on figuring out if and how much we can expect to reliably influence the long-term future, as this could have a lot of impact on our strategy (such as becoming less longtermist or more focused on broad longtermism or more focused on patient longtermism, etc.). I don't have a third thing yet, but both of these projects we are aiming to do within Rethink Priorities.
(Just my personal views, as always)

Roughly in line with Peter's statement that "I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize", I recently argued [https://forum.effectivealtruism.org/posts/ham9GEsgvuFcHtSBB/should-marginal-longtermist-donations-support-fundamental-or] (with some caveats and uncertainties) that marginal longtermist donations will tend to be better used to support "fundamental" rather than "intervention" research. On what those terms mean, I wrote: See that post's "Key takeaways" for the main arguments for and against that overall position of mine.

I think I'd also argue that marginal longtermist research hours (not just donations) will tend to be better used to support fundamental rather than intervention research. (But here personal fit becomes quite important.) And I think I'd also currently tend to prioritise "fundamental" research over non-research interventions, but I haven't thought about that as much and didn't discuss it in the post. So the highest-EV-on-the-current-margin efforts I'd pick would probably be in the "fundamental research" category.

Of course, these are all just general rules, and the value of different fundamental research efforts, intervention research efforts, and non-research efforts will vary greatly.

In terms of specific fundamental research efforts I'm currently personally excited about, these include analyses, from a longtermist perspective, of:

* totalitarianism/dystopias [https://forum.effectivealtruism.org/tag/dystopias-and-totalitarianism],
* world government (see also [https://forum.effectivealtruism.org/tag/global-governance]),
* civilizational collapse and recovery [https://forum.effectivealtruism.org/tag/civilizational-collapse-and-recovery],
* "the long reflection" [https://forum.effectivealtruism.org/tag/long-reflection], and/or
* long-term risks from malevolent actors [https:/

How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?

Hey Josh, thanks for the question! From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered. At the operational level, we set targets for the percentage of time we want to spend on each cause area based on these factors, and we re-evaluate as our existing commitments, the data, and changes in our opinions warrant.

In this report on bottlenecks in the X-risk research community, the main suggestion was to improve the senior researcher pipeline. What do you think about the senior researcher pipeline in prioritization research?

I think it would always be good to have more senior researchers, but they seem rather hard to find. Right now, my personal view is that the best way to build senior researchers is to hire and train mid-level or junior-level researchers. We hope to keep doing this with our past hires, existing hires, and our upcoming intern program.

If you're interested in funding researcher talent development, I think funding our intern program is a very competitive opportunity.

I haven’t read that report in full, but I imagine it's such a big issue in X-risk research because the field grew very quickly from an obscure one into a field with a lot of funding available and a lot of people wanting to work in it. I think that's a rare situation, and I don't feel that it's a significant problem in the kind of research that I do (farmed animal welfare). I remember hearing that it is a problem in cultured meat R&D though, and that makes sense; the situation is similar.
That makes sense, thanks. I've checked with people in cultured meat, and they seem to agree with you - e.g. startup companies are looking for hires with broadly relevant PhD experience (less than what I'd count as senior), and some major companies have a single scientific advisor who, while an accomplished academic, is not very familiar with the field.

Do you think that you have received valuable feedback on your work by posting it on the forum? If you did, did most of it come from people in your existing network?

Hey Edo,

I definitely receive valuable feedback on my work by posting it on the Forum, and the feedback is often most valuable when it comes from people outside my current network. For me, the best example of this dynamic was when Gavin Taylor left extensive comments on our series of posts about features relevant to invertebrate sentience (here, here, and here) back in June 2019. I had never interacted with Gavin before, but because of his comments, we set up a meeting, and he has become an invaluable collaborator across many different projects. My work is much improved due to his insights. I'm not sure Gavin and I would ever have met (much less collaborated) if not for his comments on the Forum.

Hi Edo, I definitely think I’ve received valuable feedback on my work on the EA Forum, as well as on LessWrong. This feedback came in the form of upvotes/downvotes, comments on my posts, and private messages/discussions that people had with me as a result of having posted things. It’s harder to say:

1. In what ways was that feedback valuable?
2. Precisely how valuable was it? How does that compare to other sources of feedback?
3. How did the value vary by various factors (e.g., EA Forum vs LessWrong, posts that are more like summaries vs posts that are more like “original research”)?
4. What proportion of it came from people in my existing network?

Some thoughts on those points follow. (But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback, including to share potentially useful ideas, to signal one’s skills/knowledge to aid in later job applications, and to make connections with other EAs; more on this in this other comment of mine [https://forum.effectivealtruism.org/posts/WJ3rBBkawoTJcY642/ask-rethink-priorities-anything-ama?commentId=sbiaCkZebDS6ahNvf].) Note that all of the following relates to posts I made before joining Rethink, as I haven't yet posted anything related to my work with Rethink.

Q1: Valuable in what ways?

* Maybe the main way the feedback was useful was in helping me get an overall sense of how I was doing as an EA researcher, how I was doing as a macrostrategy researcher, and how valuable the kinds of work I was doing were, to inform whether to carry on with those things.
* That said, I think votes and comments provided less useful feedback on these points than I’d have expected. That feedback basically just seemed to indicate “You’re probably neither a terrible fit for this nor an amazing wunderkind, but rather somewhere in the vast chasm in between.” Which I guess did narrow my uncertainty slightly, but not very
Thanks! I found it very interesting that one of the most important pieces of feedback was on how you were doing as a researcher, and that the most important feedback came from the survey. I think that this probably applies widely and is a good reminder to interact well, especially with posts and people I appreciate (I think I'll try to send more PMs to people who I think are consistently writing well on the forum and may be under-appreciated). Also, thinking about Q4, I might be worried that as people's personal networks get larger and more skilled, they might post less publicly, or post only material that is heavily polished. Generally, though, it seems like you didn't find engagement with the content itself very useful, which is about what I'd have guessed but unfortunate to hear. (btw, reminding you to link to this comment from here [https://forum.effectivealtruism.org/posts/WJ3rBBkawoTJcY642/ask-rethink-priorities-anything-ama?commentId=sbiaCkZebDS6ahNvf])
Yeah. I think it's great that people can build networks of people with relevant interests and expertise and get thoughtful feedback from those networks, but it's also a shame if that means people don't take the little bit of extra time to post work that's already been done and written up. This sort of thing is why I wanted to say "But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback...". I plan to indefinitely continue posting publicly except in cases (which do exist) where there are specific reasons not to do so,[1] such as:

* potential infohazards [https://www.lesswrong.com/posts/R7szBR5H487XutfKy/what-are-information-hazards]
* the piece of writing is likely to be more polished and useful in future, so I'm deferring posting it till then
  * In cases where the work isn't fully polished but the writer has no plans to ever polish it, I'd say it's often worth posting anyway with some disclaimers, and letting others just decide for themselves whether to bother reading it
* there are reasons to believe the work will confuse or mislead people more than it informs them (see also [https://forum.effectivealtruism.org/posts/pYHZ8dhZWPCSZ66dX/i-knew-a-bit-about-misinformation-and-fact-checking-in-2017?commentId=tLokkQXf7M6WpNW6f#comments])

(Tangentially, I also feel like it's a shame when people do post EA-relevant work publicly, but just post it on their personal blog or their organisation's website or something, without also crossposting it to the Forum. It seems to me that that unnecessarily increases how hard it can be for people to find relevant info.)

[1] This sentence used to say "I plan to indefinitely continue posting publicly unless there are specific reasons not to do so, such as:" (emphasis added). That was more ambiguous, so I edited it.
I think the reasons people don't post stuff publicly isn't out of laziness, but because there's lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.
(Just speaking for myself, as always) I definitely agree that there are many cases where it does make sense not to post stuff publicly. I myself have a decent amount of work which I haven't posted publicly. (I also wrote a small series of posts [https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE] earlier this year on handling downside risks and information hazards, which I mention as an indication of my stance on this sort of thing.)

I also agree that laziness will probably rarely be a major reason why people don't post things publicly (at least in cases where the thing is mostly written up already). I definitely didn't mean to imply that laziness is the main reason people don't post things publicly, or that there are no good reasons not to post things publicly. But I can see how parts of my comment were ambiguous and could've been interpreted that way. I've now made one edit to slightly reduce the ambiguity. So you and I might actually have pretty similar stances here.

But I also think that decent portions of cases in which a person doesn't post publicly may fit one of the following descriptions:

* The person sincerely believes there are good reasons not to post publicly, but they're mistaken.
  * But I also think there are times when people sincerely believe they should post something publicly, and then do, even though really they shouldn't have (e.g., for reasons related to infohazards or the unilateralist's curse).
  * I'm not sure if people err in one direction more often than the other, and it's probably more useful to think about things case by case.
* The person overestimates the risks posting publicly poses to their own reputation, or (considered from a purely altruistic perspective) overweights risks to their own reputation relative to potential benefits to others/the world (basically because the benefits are mostly externalities while the risks aren't).
* That
I didn't mean to imply that laziness was the main part of your reply, I was more pointing to "high personal costs of public posting" as an important dynamic that was left out of your list. I'd guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don't think we disagree on the overall list of considerations.
Yeah, that sounds to me like it could be handy! It also would've been useful (or at least comforting) if I'd known that, if I was doing badly and seemed to be a bad fit, I'd get a clear indication of that. (It'd obviously suck to hear it, but then I could move on to other pursuits.) Otherwise it felt hard to update in either direction. But I think it's much easier and less risky to just make it more likely that people get clear indications when they are doing well than when they aren't, for a wide range of reasons (including that even people who are capable of being great at something might not clearly display that capability right away).

I think I agree with what you mean, but that this phrasing might give someone the wrong impression. I definitely appreciated the engagement that did occur, and often found it useful. The problems were more that:

* Often there just wasn't much engagement. Maybe some upvotes, 0-1 downvotes, 0-4 short comments.
* It's very hard to distinguish "These 3 positive comments are from the 3 out of (let's say) 25 readers who had an unusually positive opinion about this or want to be welcoming, and the others thought this sort-of sucked but couldn't be bothered saying so or didn't want to be mean" from "These 3 positive comments are totally sincere, and the other (let's say) 22 readers also thought this was great but didn't bother commenting or felt it'd be weird to just comment 'this is great!' without saying more"
  * And that's not the fault of those 3 commenters. And it would feel harsh to say it's the fault of the (perhaps imagined) other 22 readers either.

Thanks! Done.
It seems worth emphasising here that, before 2020:

* I'd only done ~0.5 FTE-years of research, and it was in an area and methodology that's not very relevant to what I'm doing now
* I hadn't started my "EA-aligned career"
* (More on this in this comment [https://forum.effectivealtruism.org/posts/WJ3rBBkawoTJcY642/?commentId=uQx7HNJdx76x4aRPR])

Therefore, for most of this year I've seen myself as more in "explore" than "exploit" mode. As I gradually move more towards the "exploit" end of that continuum, I'd guess that:

* I'll have less need of feedback that just gives me an overall sense of whether I'm a good fit for X
* The value of feedback that improves a given piece of work (e.g., points out mistakes or angles that should be explored more or clarified) will rise, because the direct value of the individual pieces of work I'm doing is higher

This reminds me of some education researchers emphasising that the purpose of feedback in the context of high schools is to improve the student, not the piece of work. This makes sense, because the essay that 15-year-old wrote isn't going to affect any important decisions, but the 15-year-old may later do useful things, and has a lot to learn in order to do so. But in other contexts, a given piece of writing may be likely to influence important decisions, and the writer may already be more experienced. In those cases, it might make sense for feedback to focus on improving the piece of writing rather than the writer.

Have you had experience with using volunteers or outsourcing questions to the broad EA community? How was it? 

I did try it on some occasions with people who wanted to do research similar to mine. I think it took me more time to think of good questions to outsource, explain everything, and so on, than it saved me. This might be partly because there is a skill to outsourcing that I haven't mastered yet. I don't know if it helped anyone decide whether they should pursue this type of career. If it did, then it was very much worth it.

One way I used volunteers (and friends whom I forced to volunteer) productively was by having them read texts that I wrote and asking them to comment aloud (not in writing) on everything that was even slightly unclear. Then, rather than explaining, I rewrote that part, asked them to read it again, and asked whether they understood it now. I found that this is important for texts that contain complicated ideas/reasoning. E.g., it was very useful for the explanation of the optimizer's curse and other things in this article. It's not important for simple texts.

I also tried organizing some brainstorming sessions with members of the EA community. It was a bit useful, though I'm not sure it was worth it (despite great participants), mostly because I get stressed about running events and then overprepare, and also because it would have taken too much time to explain all the relevant context in which I needed ideas. I think that in the right hands and the right situation, though, this is a tool that could be used productively.
Hey Edo, thanks for the question! We've had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers, but we think it's something we're better placed to handle now, so we may explore it again in the coming years, though we are generally hesitant about depending on free labor. We've not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I'm not sure this is what you meant, but we've also partnered with Metaculus on some forecasting questions.
  • What is your stance regarding aiming your output at an EA audience vs. a wider audience? (Academic & governmental audiences, etc.?)
  • It seems that a large portion of output begins on your blog and in EA Forum posts. What other venues do you aim at, if any?
  • To what extent do you regard tailoring your work to academic journals with "peer-review" as counterfactually worthwhile?

How do you manage research questions? Do you have some sort of an internal list of relevant questions? I'd also love to hear about specific examples of decisions to pursue or discard a research question.

Thanks for the question, Edo! We keep a large list of project ideas, and regularly add to it by asking others for project ideas, including staff, funders, advisors, and organizations in the spaces we work in.
Thank you! I have some follow-up questions if that's ok :) Is it reasonable to publicly publish the list, or some of it? How do you prioritize and select projects? Do the suggestions to pursue a project come from the managers or the researchers? If they sometimes come from the researchers, do you have any mechanisms in place to motivate the researchers to explore the list, or does that happen naturally?

What are some possible efforts within prioritization research that are outside your scope and that you'd like to see more of?

I'm not confident that this is fully outside the scope of RP, but I think backchaining [http://www.aaronsw.com/weblog/theoryofchange]-in-practice is plausibly underrated by EA/longtermism, despite a lot of chatter about it in theory. By backchaining in practice I mean tracing backwards fully from the world we want [https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/linch-s-shortform?commentId=n4udx7stWB4udtRCp] (eg a just, kind, safe world capable of long reflection) to specific efforts and actions individuals and small groups can take, in AI safety, biosecurity, animal welfare, movement building, etc. Specific things that I think would be difficult to fit under RP's purview include things that require specific AI safety or biosecurity stories, though those plausibly carry information hazards, so I'd encourage people who are doing these extensive diagrams to a) be somewhat careful about information security and b) talk to the relevant people within EA (eg FHI) before creating, and certainly before publishing, them. An obvious caveat here is that it's possible many such backchaining documents exist and I am unaware of them. Another caveat is that maybe backchaining is just dumb, for various epistemic reasons.
Peter Wildeford:
I'm not really sure what is included in the scope of "prioritization research". One thing we definitely do not do and very likely will never do, and that I am glad others do, is technical AI safety research. Other than that, I think pretty much anything in longtermism could be fair game for Rethink Priorities at some point.
I am surprised that you mention technical AI safety as something you don't do under what I consider "prioritization research" (which, I realized only after posting my question, was apparently a concept I used mostly internally 😊). Linch's mention of it below was in the context of understanding its importance rather than trying to solve it, which I guess is how I'd carve up "prioritization research". I guess that for similar reasons I'd expect RP to focus less on solving (longtermist or other) problems. Just to make sure, could examples like the following be in RP's scope if you had the right people/situation? 1. Suggesting safe ways to use certain geoengineering mechanisms. 2. Developing methods for increasing empathy toward future people. 3. Proposing and defining a governmental institute for future generations. 4. Developing economic models for incentives of great-power war under futuristic scenarios like space expansion, and proposing mechanisms to manage the risk of war.
I think what counts as prioritization vs object-level research of the form "trying to solve X" does not obviously have clean boundaries, for example a scoping paper like Concrete Problems in AI Safety [https://arxiv.org/abs/1606.06565] is something that a) should arguably be considered prioritization research and b) is arguably better done by somebody who's familiar with (and connected in) AI.
Peter Wildeford:
Yes, I think all the things you mentioned are projects that are "within the scope" of RP (not that we would necessarily do them). We see our scope as being very broad so that we can always do the highest impact projects.
Thanks, that's interesting to hear. I guess that the mission statement is broad enough to allow it :) I have some concerns about this approach, mostly as it relates to developing research and organizational expertise, and to possibly discouraging the creation of new research organizations. However, I'm sure that these kinds of considerations go into your case-by-case decision-making process, and I imagine that these problems would only become crucial as EA and RP scale up and mature more.
Hi Edo, Could you expand a bit on what you mean by prioritization research? Do you mean something like [https://forum.effectivealtruism.org/tag/cause-prioritization] "efforts to find the most important causes to work on and compare interventions across different areas, so that we can do as much good as possible with the resources available to us"? If so, how narrowly do you intend "causes" to be interpreted? E.g., would you count research that informs how much to prioritise technical AI safety work vs AI governance work? Or only research that informs decisions like how much to prioritise AI risk vs biorisk? Or only research that informs decisions like how much to prioritise longtermism vs near-termist animal welfare? (I think this is a good question, btw! I just feel like it could go in a few different directions depending on how it's intended/operationalised.)
Thanks for asking for clarification. I intended something wide that includes everything from, say, ranking interventions through cause prioritization, to global priorities research, to basic research that aims at improving practical prioritization decision-making.

What does your cooperation with other prioritization research groups look like? What do you think are the biggest bottlenecks in prioritization research as a field?

Peter Wildeford:
I'm not sure what other groups you have in mind, but I'll answer this with regard to longtermism-oriented EA-affiliated research groups. We've collaborated a lot with the Future of Humanity Institute and the Forethought Foundation and have even shared staff and research projects with them on occasion. We have also talked some with people at Global Priorities Institute and other organizations. I'd guess right now the biggest bottleneck is just finding ways to get more researchers working on these most important questions. There's a lot more talent out there than there are spots open. More funding would help, but we also need more management and mentorship capacity. I'm optimistic that our internship program will be a help for this, but it is still funding constrained.

How's having two executive directors going?

I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our thinking. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization. Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).
Peter Wildeford:
I also think having a co-Executive Director is great. As Marcus said, we complement each other very well -- Marcus is more meticulous and detail-oriented than me, whereas I tend to be more "visionary". I definitely think we need both. We also share responsibilities and handle disagreements very well, and we have a trusted tie-breaking system. We've thought a few times about whether this merits splitting into CEO / COO or something similar and it hasn't ever made as much sense as our current system.

In the following comment, Marcus wrote:

One very simplistic model you can use to think about possible research projects in this area is:

  1. Big considerations (classically "crucial considerations", i.e. moral weight, invertebrate sentience)
  2. New charities/interventions (presenting new ideas or possibilities that can be taken up)
  3. Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)

It's far easier to tie work in categories (2) or (3) to behavior change. By contrast, projects or possible research that falls into (1)

... (read more)
Peter Wildeford:
Hey EdoArad, it looks like you posted a lot of these questions twice, and the duplicates have been answered elsewhere. Here are some answers to the questions I don't think were posted twice:

We do not currently have a model for that.

In the case of invertebrate sentience, our audience would be the existing EA-aligned animal welfare movement and big funders, such as Open Philanthropy and the EA Animal Welfare Fund. I hope that if we can demonstrate the cause area is viable and tractable, we might be able to find new funding opportunities and start moving money to them. The EA Animal Welfare Fund has already started giving money to some invertebrate welfare projects this year, and I think our research was a part of those decisions.
Thanks for the answer! I find it interesting that the intended audience is internal to EA. (And sorry about the duplicates - fixed now.)
Peter Wildeford:
Yeah, our broader theory of change is mostly (but not entirely) based on improving the output of the EA movement, and having the EA movement push out from there.