Definitely mostly using it to mean focused on x-risk, but mostly because that seems like the largest portion / biggest focus area for the community.
I interpret that Will MacAskill quote as saying that even the most hardcore longtermists care about nearterm outcomes (which seems true), not that lead reduction is supported from a longtermist perspective. I think it's definitely right that most longtermists I meet are excited about neartermist work. But I also think that the social pressures in the community currently still push toward longtermism.
To be clear, ...
I think something you raise here that's really important is that there are probably significant tensions to explore between the worlds that a neartermist view and a longtermist view suggest we ought to be trying to build, and that tension seems underexplored in EA. E.g. an inherent tension between progress studies and x-risk reduction.
I mean, my personal opinion is that if there had been a concerted effort of maybe 30-50 people over ~2015-2020, the industry could have been set back fairly significantly. Especially strong levers here seem to be around convincing venture capital not to invest in the space, because VC money is going to fund the R&D necessary to get insectmeal cost-competitive with fishmeal for the industry to succeed. But the VC firms seemed to be totally shooting in the dark during that period on whether or not this would work, so I think plausibly a pretty small effort ...
Yeah that's fair - there are definitely people who take them seriously in the community. To clarify, I meant that person-affecting views seem pretty widely dismissed in the EA funding community (though the word "universally" is probably too strong there too).
That doesn't seem quite right - negative utilitarians would still prefer marginal improvements even if all suffering didn't end (or in this case, a utilitarian might prefer that many become free even if all didn't become free). The sentiment is interesting because it doesn't acknowledge the marginal states that utilitarians are happy to compare against ideal states, or against worse marginal states.
Yeah, I think that some percentage of this problem is fixable, but I think one issue is that there are lots of important critiques that might be made from a place of privileged information, and filling in a form will be deanonymizing to some extent. I think this is especially true when an actor's actions diverge from stated values/goals — I think many of the most important critiques of EA that need to be made come from actions diverging from stated values/goals, so this seems hard to navigate. E.g. I think your recent criminal justice reform post is a pret...
Thanks for the response!
RE 5d chess - I think I've experienced this a few times at organizations I've worked with (e.g. multiple funders saying, "we think it's likely someone else will fund this, so we are not/only partially funding it, though we want the entire thing funded," and then the project ends up not fully funded, and the org has to go back with a new ask / figure things out). This is the sort of interaction I'm thinking of here. It seems costly for organizations and funders. But I've got like an n=2 here, so it might just be chance (though one person at...
Yeah that makes sense to me. To be clear, the fact that two smart people have told me that they disagree with my sense that moral realism pushes against consistency seems like good evidence that my intuitions shouldn't be taken too strongly here.
I definitely agree with this. Here are a bunch of ideas that are vaguely in line with this that I imagine a good critique could be generated from (not endorsing any of the ideas, but I think they could be interesting to explore):
Yeah those are fair - I guess it is slightly less clear to me that adopting a person-affecting view would impact intra-longtermist questions (though I suspect it would), but it seems more clear that person-affecting views impact prioritization between longtermist approaches and other approaches.
Some quick things I imagine this could impact on the intra-longtermist side:
That's interesting and makes sense — for reference I work in EA research, and I'd guess ~90%+ of the people I regularly engage with in the EA community are really interested / excited about EA ideas. But that percentage is heavily influenced by the fact that I work at an EA organization.
Thanks for sharing these! It looks like this list ends at H (with some Ls at the beginning). I was wondering if it got cut off, or if that's coincidental?
My spouse shared this view when reading a draft of this post, which I found interesting because my intuitions went somewhat strongly the other way.
I don't really have strong views here, but it seems like there are three possible scenarios for realists:
And in 2/3 of those, this problem might exist, so I leaned toward saying that this was an issue for reali...
I'd be interested in a survey on this.
My impression is that realism isn't a majority view among EAs, but it's way more common than in the general non-religious public or the broader tech and policy communities that lots of EAs come out of.
Though I think this is something I want to see critiqued regardless of realist-ness.
I think I agree with everything here, though I don't think the line is exactly people who spend lots of time on EA Twitter (I can think of several people who are pretty deep into EA research and don't use Twitter/aren't avid readers of the Forum). Maybe something like, people whose primary interest is research into EA topics? But it definitely isn't everyone, or the majority of people into EA.
It probably depends on the area, but non-welfare impact is likely to vary significantly by industry. E.g. I imagine that insecticide use has fairly substantial environmental impacts, but that residential insecticides do not. I haven't looked into this at all, but I'd guess there are many ways in which these industries are bad and also good (they all exist because they provide some useful benefit) besides the welfare implications.
I think I agree with many aspects of the spirit of this, but it is fairly unclear to me that organizations just trying to pay market rates, to the extent that is possible, would result in this - I don't think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people are weighted more highly, etc.), and I think different areas in the movement have different philosophies around compensation, so it seems like there are...
Thanks for sharing this! I think that it is tough that the experiences you list are shared by many other people with ops experience. I also think that something I've witnessed at a lot of organizations is that growth can be somewhat stumbling - e.g. new non-ops staff are added until ops is overwhelmed, and only then are ops staff added.
To mildly shamelessly plug my own employer, Rethink Priorities has been really focusing on offsetting some of these challenges, including doing things like:
Hey Charles!
Sure thing! I am really excited about this position. I think the main motivation is that there are a lot of things where it seems like there ought to be summaries of the evidence for what the best practice is on an operational question, but there just isn't good information out there. So, we're hoping that some combination of literature review and self-experimentation can help us ensure we are operating efficiently and intelligently as we grow.
In response to your specific thoughts:
Hey!
We set the title level for the Special Projects Associate roles for a few reasons:
I think it is likely that if someone came in who had a fairly deep background in operations relevant to these roles, we'd basically evaluate them for a different title level on an individual basis.
I think we'd al...
Thanks! We are happy to be a good place to work and will keep that idea in mind for the future.
Sorry to callously steal your thunder Peter!
I know this question wasn't directed at me, but my impression was that we had a lot of people do the training and many also read the book, and most came away thinking that the training was not worth the time / covered a lot of the material in the book but in a less useful format.
That being said, I think it's possible that having all managers just be in a situation where they sit and think about good management practices for 3 days can be really helpful, even if the feeling of being there is negative / the training itself is bad, and I wouldn't be surprised if having a large number of people go through the training improved management at RP overall.
Yeah that makes sense to me - RP definitely is at an advantage in being able to recruit people interested in tons of different topics, and they might still be value aligned? I'd say that we've gotten some very good longtermism-focused ops candidates, but maybe not proportional to the number of jobs in EA? Not sure though. I think remote work really factors in heavily - most of the organizations mentioned in this thread as having open positions they're struggling to fill aren't hiring remotely, and it looks like they're just hiring in the Bay Area.
Looking at other comments here, it seems like more people share your thought. I think maybe the remote/non-remote line is still important. But given that other ops people perceive a bottleneck, I added a note to my answer that I don't think it's really accurate.
Yeah, I think it sounds like people are saying that there is a lack of executive-level talent, which makes sense and seems reasonable - if EA is growing, there are going to be more Executive-y jobs than people with that experience in EA already, so if value-alignment is critical, this will be an issue.
But, I guess to me, it seems odd to use "ops" to mostly refer to high-level roles at organizations / entrepreneurial opportunities, which aren't the vast majority of jobs that might traditionally be called ops jobs. I definitely don't think founding an organi...
Edit: Given the other answers here, it seems like there is probably higher unmet demand for ops roles than I suggest, so I don't think this comment should be the top answer. I think my comments below might still be helpful for indicating why we and some other organizations have had less trouble hiring for ops, but it seems like a bunch of groups are struggling to fill these roles.
I've hired operations people for EA-aligned organizations both during the period that 80,000 Hours had ops as a priority area and after.
Some ...
Maybe it's easier in effective animal advocacy, because there's a broader animal advocacy movement to draw from and some large animal advocacy orgs building talent? Also, EAs seem to disproportionately have STEM backgrounds and want to do research, but this is probably not the case for animal advocates in general, so the proportion of animal advocates with ops skills may be higher than for EAs.
These are great thoughts, thanks! We definitely have different perceptions, but I really appreciate this perspective.
One crux may be what CarolineJ points to in her comment: "ops" captures a continuum of skillsets, some of which seem much rarer and more urgently needed than others. I am not sure what roles you were hiring for at your orgs, but I agree with CarolineJ that we especially need those with "chief of staff"-type skills. Examples that come to mind are Zach Robinson (Chief of Staff at Open Philanthropy) and Bill Zito (co-founder and COO at Redwood ...
I don’t know if I buy any specific theory of change as being particularly useful, but my impression is most people in the animal welfare world are working under something like scenarios 1, 3, or 4 on your list, but not in any deeper detail than you have here. It also doesn’t seem like you have to have a Theory of Victory if you think corporate campaigning is highly cost-effective and otherwise making progress on animal welfare issues is hard.
The closest thing I’ve seen to something explicit and detailed is DxE’s Roadmap to Animal Liberation - https://docs.google.com/document/d/1YN7KpuShiZItqVuQtWv6ykrjrNv6rAnmjVOcsofRj0I/
Here are roles Rethink Priorities has hired for since 2020. There hasn't been any real trend as far as I can see, except that my subjective impression is that the number of highly qualified applicants for research roles and operations roles is up, suggesting that it is getting harder to get a job at RP.
Our most competitive hiring round was for an Operations Associate a few months ago. Our researcher roles are in specific cause areas, so it's hard to compare directly to when we hired general researchers, but my impression is that they are up. We consistentl...
This sounds potentially useful, and I can definitely write about it at some point (though no promises on when, just due to time constraints right now).
If you're donating on our website (https://rethinkpriorities.org/donate), on the second part of the donate form, you can add a comment. Just add a note there if you'd like us to restrict your gift to a specific pool - our finance team sees these notes.
If you're giving via another platform (EA Funds, a DAF, etc.) feel free to just email us at info@rethinkpriorities.org and let us know!
Thanks for supporting us!
This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.
But in terms of just pure contracted work, in 2021 so far we've received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.
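(For reference, and assuming those two figures make up total revenue, that's roughly $1.06M / ($1.06M + $4.667M) ≈ 19% of 2021 revenue so far.)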
In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we'd see growth in donations and grants too.
I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that's probably helped us maintain a high quality of output as we grow.
I also think a huge part of RP's success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.
So there are a bunch of questions in this, but I can answer some of the ops-related ones:
I have private information (e.g. from senior people at Rethink Priorities and former colleagues) that suggests operations ability at RP is unusually high. They say that Abraham Rowe, COO, is unusually good.
The reason why this comment is useful is that:
Here's some parts of my personal take (which overlaps with what Abraham said):
I think we ourselves feel a bit unsure "why we're special", i.e. why it seems there aren't very many other EA-aligned orgs scaling this rapidly & gracefully.
But my guess is that some of the main factors are:
It's a little hard to say because we don't necessarily know the background / interests of all donors, but my current guess is around 2%-5% in 2021 so far. It's varied by year (we've received big grants from non-EA sources in the past). So far, it is almost always to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).
One tricky part of separating this out - there are a lot of people in the animal welfare community who are interested in impact (in an EA sense), but maybe not interested in non-animal EA things.
This is correct - the RFMF is how much we think we'd like to raise between now and the end of 2022 to spend in 2022 and 2023, according to the budgets above.
Edit: This looks like it may be wrong - the oldest reference I found on the EA Forum to it is explicitly the biology one: https://forum.effectivealtruism.org/posts/WAhFnueRgHkAf8KHc/making-ea-groups-more-welcoming.
My guess would be that people have accidentally swapped "founder's syndrome" with "founder effects." Founder's syndrome is widely used outside EA to refer to the things people are talking about: https://en.wikipedia.org/wiki/Founder's_syndrome. EA seems to use it to refer to a wider range of things, but this seems more likely than people int...
It seems pretty bizarre to me to say that these historical examples are not at all relevant for evaluating present-day social movements. I think it's incredibly important that socialists, for example, reflect on why various historical figures and states acting in the name of socialism caused mass death and suffering, and likewise important for any social movement to look at its past mistakes, harms, etc., and try to reevaluate its goals in light of that.
To me, the examples you give just emphasize the post's point — I think it would be hard to find someone who di...
It’s definitely the case that we can hire people in most countries (though some countries have additional considerations we have to account for, like whether the person has working hours that will overlap with their manager, some financial / logistical constraints, etc), and we are happy to review any candidate’s specific questions about their particular location on a case by case basis if folks want to reach out to info@rethinkpriorities.org. For reference, we currently have staff in the US, Canada, Mexico, Spain, UK, Switzerland, Germany, and New Zealand.
I think this is likely true for animal welfare too. For example, looking at animal welfare organizations funded by Open Phil, and thinking about my own experience working at/with groups funded by them, I'd guess that under 10% of employees at a lot of the bigger orgs (THL, GFI) engage with non-animal EA content at all, and a lot fewer than that fill out the EA survey.
Here are some ideas that I think would be useful (or at least, I would definitely read), from first to last in order of how excited I would be to read them:
For what it's worth, I think there is a good case to be made that WAI is somewhere between a neartermist and longtermist organization (mediumtermist?) — e.g. this research and similar seem to be from a relatively longtermist perspective. Though I'm biased because I know that I am sympathetic to some aspects of a longtermist worldview (though I obviously no longer work there), and that several of the staff there are also somewhat sympathetic to longtermism. These views might be separated from the work of the organization. And they received around 25% of the t...
Hi, most of the annual production information came from a combination of market research, industry publications, and estimates I built myself - the first part of the Methods section details this and links to sources when available: https://forum.effectivealtruism.org/posts/ruFmR5oBgqLgTcp2b/insects-raised-for-food-and-feed-global-scale-practices-and#Methods
If you had to make some predictions about what the animal advocacy space will look like in 20 years, what would be different from today?
How do you go about evaluating a grant for research vs. a grant that supports direct work?
We grade all applications with the same scoring system. For the prior round, after the reviews by the primary and secondary investigators, and once we'd all read their conclusions, each grant manager gave a score (excluding cases of conflicts of interest) from +5 to -5, with +5 being the strongest possible endorsement of positive impact, and -5 being a grant with an anti-endorsement that's actively harmful to a significant degree. We then averaged across scores, approving those at the very top and dismissing those at the bottom, largely discussing only those grant...
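To illustrate just the aggregation step, here's a minimal sketch in Python (the grant names, scores, and thresholds are all hypothetical, and in practice the middle band gets discussed rather than decided by a fixed cutoff):

```python
from statistics import mean

# Hypothetical scores: each grant manager rates a grant from -5
# (actively harmful) to +5 (strongest endorsement); managers with a
# conflict of interest abstain, so list lengths can differ.
scores = {
    "grant_a": [4, 5, 3],
    "grant_b": [1, 0, 2, -1],
    "grant_c": [-3, -4, -2],
}

# Average each grant's scores and sort from highest to lowest.
ranked = sorted(
    ((name, mean(vals)) for name, vals in scores.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

# Grants near the top are approved, those near the bottom dismissed,
# and the middle band is flagged for discussion (thresholds invented
# purely for illustration).
for name, avg in ranked:
    if avg >= 3:
        decision = "approve"
    elif avg <= -1:
        decision = "dismiss"
    else:
        decision = "discuss"
    print(f"{name}: {avg:+.2f} -> {decision}")
```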
This is more invertebrate welfare work than has ever been supported in the EA space before (as far as I can tell).
It looks like most of these grants fall into a few categories:
This seems good, since many groups recommended in the EA space seem to be in the US and Europe (GFI, Albert Schweitzer, Anima, etc.), so I imagine these other opportunities are especially neglected. The exceptions to this are the grants you made to THL UK and OBRAZ. I'd be interested in what makes these two groups such good opportunities compared to the typically recommended charities working in the US / Europe?
Right now it seems like there are some really promising but risky opportunities for the EA AWF (e.g. all of the insect and invertebrate stuff this grant cycle). How do you evaluate some of these more speculative or high-risk / high-return grants vs. something like corporate chicken campaigns in a neglected region, or an ACE top charity in a neglected space (e.g. Wild Animal Initiative)?
That makes sense to me.
Yeah, in my experience many people from left-leaning spaces who come to EA also become sympathetic to suffering-focused work, which seems consistent with this.