Edited to remove my comment since it is off topic. I'm happy to talk about this though if people want to in other contexts! I definitely think this is a pretty important question, and looking into how fiscal sponsorship arrangements are working in reality is important, as I imagine there is high variance in how effective oversight mechanisms are (though I think RP has done this well).
Hi,
(writing as the COO of Rethink Priorities).
Nonlinear is not, and has never been fiscally sponsored by Rethink Priorities. RP has never had a legal or financial connection to Nonlinear.
In the grant round you cite, it looks like the receiving charity is listed as Rethink Charity. RP was fiscally sponsored by RC until 2020, but is no longer legally connected to RC. RC is a separate legal entity with a separate board. RP and RC do not have a legal connection anymore, and have not since 2020.
Not Peter, but looking at the last ~20 roles I've hired for, I'd guess that during hiring, maybe 15 or so had an alternative candidate who seemed worth hiring (though who perhaps did worse in some scoring system). These were all operations roles within an EA organization. For the 2 more senior roles I hired for during that time, there appeared to be suitable alternatives; for other, less senior roles there weren't (though I think the opposite generally tends to be more true).
I do think one consideration here is we are talking about who looked best during hiring. Th...
The main out of context bit is that Elizabeth's comment seemed to interpret Marcus as only referring to salary, when the full comment makes it very clear that it wasn't just about that, which seemed like a strong misreading to me, even if the 10x factor was incorrect.
I suspect the actual "theoretically maximally frugal core EA organization with the same number of staff" is something like 2x-3x cheaper than current costs, if salaries moved to the $50k-$70k range.
That doesn't seem quite right to me - Longview and EG don't strike me as being earning-to-give outreach, though they definitely bring funds into the community. And Founders Pledge is clearly only targeting very large donors. To be more specific: nothing like massive, multi-million dollar E2G outreach has been tried for mid-sized / everyday earning to give, though you're definitely right that effort has gone into bringing in large donors.
I think that one thing I reflect on is how much money has been spent on EA community building over the last 5 years or so. I'm guessing it is several 10s of millions of dollars. My impression (which might not be totally right) is that little of that went to promoting earning to give. So it seems possible that in a different world, where a much larger fraction was used on E2G outreach, we could have seen a substantial increase. The GWWC numbers are hard to interpret, because I don't think anything like massive, multi-million dollar E2G outreach has been tri...
Yeah, I think there is an open question of whether or not this would cause a decline in the impact of what's funded, and this reason is one of the better cases why it would.
I think one potential middle-ground solution to this is having like, 5x as many EA Fund type vehicles, with more grant makers representing more perspectives / approaches, etc., and those funds funded by a more diverse donor base, so that you still have high quality vetting of opportunities, but also grantmaking bodies who are responsive to the community, and some level of donor diversity possible for organizations.
Minor downvoted because this comment seems to take Marcus's comment out of context / misread it:
Catered lunches, generous expense policies, large benefits packages and ample + flexible + paid time off become a pot luck once a week, basic healthcare coverage and 2 weeks of vacation. All of a sudden, running a 10 person organization takes $1M instead of $10M and it becomes much more feasible to get 30 x $10-30k with a couple of 50-100k donations to cover the cost of the organization.
I don't think the numbers are likely exactly right, but I think the broad po...
I think it depends a lot on whether you think the difference between 10x ($10M vs $1M) and 1.4x (30% savings) is a big deal? (I think it is)
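To make the comparison concrete, here is a quick sketch of where the two multipliers come from, using the figures already in the thread (the $10M vs $1M pair, and a 30% savings read as costs falling to 70% of the original):

```python
# Quick check of the two cost multipliers being compared (illustrative numbers
# from the thread, not a real budget).
frugal_ratio = 10_000_000 / 1_000_000   # $10M vs $1M: a 10x difference
savings_ratio = 1 / (1 - 0.30)          # a 30% savings: costs drop to 70%, so ~1.43x
# frugal_ratio == 10.0; savings_ratio is roughly 1.43
```

The point of the arithmetic is just that a 30% savings and "run the org for a tenth of the cost" are claims of very different magnitudes.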
FWIW, my experience (hiring mostly operations roles) is often the opposite - I find for non-senior roles that I usually reach the end of a hiring process, and am making a pretty arbitrary choice between multiple candidates who both seem quite good on the (relatively weak) evidence from the hiring round. But, I also think RP filters a lot less heavily on culture fit / value alignment for ops roles than CEA does, which might be the relevant factor making this difference.
FWIW, I mildly disagree with this, because a major part of the appeal of donation elections stuff (if done well) is that the results more closely model a community consensus than other giving mechanisms, and being able to donate votes would distort that in some sense. I think I don't see the appeal of being able to donate votes in this context over just telling Jenifer + Alan that they can control where one donates to some extent, or donating to a fund. Or, if not donating to the election fund, just asking Jenifer + Alan for their opinion and changing your own mind accordingly.
I think since there can be multiple winners, letting people vote on their ideal distribution and then averaging those distributions would be better than direct voting, since it most directly represents "how voters think the funds should be split on average" or similar, which seems like what you want to capture. And it is also still very understandable, I hope.
E.g. if I think 75% of the pool should go to LTFF and 20% to GiveWell, and 5% to the EA AWF, 0% to all the rest, I vote 75%/20%/5%/0%/0%/0% etc. Then, you take the average of those distributions across all vote...
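As a minimal sketch of the mechanism described above (the fund names and vote values here are illustrative, not from any real election):

```python
# Distribution-averaging voting: each voter submits a percentage split across
# the candidate funds, and the final allocation is the element-wise mean of
# those splits.

def average_distributions(votes):
    """votes: list of dicts mapping fund name -> fraction, each summing to 1.0."""
    funds = {fund for vote in votes for fund in vote}
    n = len(votes)
    # Funds a voter omitted are treated as receiving 0% from that voter.
    return {fund: sum(vote.get(fund, 0.0) for vote in votes) / n for fund in funds}

votes = [
    {"LTFF": 0.75, "GiveWell": 0.20, "EA AWF": 0.05},
    {"LTFF": 0.25, "GiveWell": 0.50, "EA AWF": 0.25},
]
result = average_distributions(votes)
# result is roughly {"LTFF": 0.5, "GiveWell": 0.35, "EA AWF": 0.15}
```

One nice property is that the averaged result still sums to 100% whenever every individual vote does, so it reads directly as a split of the pool.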
Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF i...
Thanks for adding this feature!
I am also interested in how this is structured from a licensing perspective - this is relevant for content posted with permission, but not owned by the original poster (which is relevant in some cases I'm looking into), and also for people's old content generally. Would the Forum team be able to clarify who owns the audio versions, and what the licensing on pieces posted prior to the current terms of use was? My impression was that they were owned by the authors, but I can't find any records of the terms of use prior to this ...
I'm curious why people downvoted this comment! (when I posted this, it was at 0 with four votes, and I strongly upvoted it to 7). I think it is an important question and is currently unanswered. For reference on its importance — it's directly relevant to me in a context related to doing my work for an EA organization, and in particular trying to catalogue historic IP.
I'm not sure about the academic literature, but will add anecdotally that my impression is that the PTC hypothesis is extremely widespread within the advocacy space - people talk about it a ton.
I'll also add that the "necessary but not sufficient" line feels hard to interpret without more clarification (and a bit meaningless on its own because of this). It would be helpful if people pushing this position could clarify how much of the effort PTC is doing to reach sufficiency. E.g. if one thinks that if we reach PTC parity, and it's done like 90% of the work ...
Thanks for the question!
Across its lifetime, RP has spent around $13,976,000.
In terms of FTE-years, RP staff have completed around 95 to 100, and we've funded external collaborators for another 55 to 60, so I'd estimate that in total, the input was something like 150 to 160 FTE-years of work.
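As an illustrative back-of-envelope (not a figure from our reporting), dividing total spend by the midpoint of the combined FTE-year range gives a rough implied cost per FTE-year:

```python
# Rough cost per FTE-year implied by the figures above, using the midpoint
# of the combined internal + external range (150-160 FTE-years).
total_spend = 13_976_000
fte_years_mid = (150 + 160) / 2       # midpoint estimate: 155
cost_per_fte_year = total_spend / fte_years_mid
# roughly $90,000 per FTE-year
```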
Yeah, I definitely agree with that - I think a pretty common issue is people entering into people management on the basis of their skills at research, and those two skill sets don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management too, and that seems like an issue when you have ambitious people who want to grow professionally, but no options except people management.
I agree with several of your points here, especially the reinventing the wheel one, but I think the first and last miss something. But, I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.
...The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest
If you're still interested in joining Rethink Priorities' board, we've extended the deadline to submit an application to January 20th. We'd love to hear from you by then! Apply today.
Not sure if it is active anymore, but there is a longstanding hub for EAs to do this: https://donationswap.eahub.org/
I've noticed that it takes new orgs up to a year to show up in that search, so it might also be that they've applied for or gotten the status recently (given that FTX stuff was so new). Delaware corporation search suggests they are registered as a nonprofit corporation in Delaware - https://icis.corp.delaware.gov/ecorp/entitysearch/NameSearch.aspx, (have to search them by name).
Unfortunately not! We use Greater Wrong because we can do an RSS feed for a specific tag for the forum. E.g., we have a communications Slack channel where any post made and tagged "Rethink Priorities" is automatically posted using an RSS feed.
This isn't really that big a deal for us - I just thought I'd mention it here :)
This is minor, and probably not relevant to most people, but my work (Rethink Priorities) would definitely use an RSS feed version of the Forum so we can get notifications in Slack when things with certain tags are posted. I think we could do this now with an account / notifications to email / email to Slack, but instead are using Greater Wrong for now for simplicity (e.g. this feed goes to our comms Slack channel: https://ea.greaterwrong.com/topics/rethink-priorities?format=rss). Thanks for all you do!
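For anyone curious what consuming such a tag feed looks like, here is a minimal sketch using only the standard library. In practice you'd fetch the feed URL with something like urllib; the hypothetical feed snippet below stands in for that response:

```python
# Minimal sketch of parsing a tag RSS feed (RSS 2.0 shape) with the standard
# library. The sample_feed string is a made-up stand-in for a fetched feed.
import xml.etree.ElementTree as ET

sample_feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Rethink Priorities tag</title>
    <item><title>Example post A</title><link>https://example.org/a</link></item>
    <item><title>Example post B</title><link>https://example.org/b</link></item>
  </channel>
</rss>"""

root = ET.fromstring(sample_feed)
# Collect (title, link) pairs for each post in the feed.
posts = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
# posts == [("Example post A", "https://example.org/a"),
#           ("Example post B", "https://example.org/b")]
```

A small script like this, run on a schedule, is roughly what an RSS-to-Slack notification hookup does under the hood.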
Yeah, I agree with this entirely. I think that probably most good critiques should result in a change, so just talking about doing that change seems promising.
That makes sense to me.
Yeah, I definitely think that many people from left-leaning spaces who come to EA also become sympathetic to suffering-focused work in my experience, which also seems consistent with this.
Definitely mostly using it to mean focused on x-risk, but mostly because that seems like the largest portion / biggest focus area for the community.
I interpret that Will MacAskill quote as saying that even the most hardcore longtermists care about nearterm outcomes (which seems true), not that lead reduction is supported from a longtermist perspective. I think it's definitely right that most longtermists I meet are excited about neartermist work. But I also think that the social pressures in the community currently still push toward longtermism.
To be clear, ...
I think something you raise here that's really important is that there are probably fairly important tensions to explore between the worlds that having a neartermist view and longtermist view suggest we ought to be trying to build, and that tension seems underexplored in EA. E.g. an inherent tension between progress studies and x-risk reduction.
Yeah that's fair - there are definitely people who take them seriously in the community. To clarify, I meant that person-affecting views seem pretty widely dismissed in the EA funding community (though probably the word "universally" is too strong there too).
That doesn't seem quite right - negative utilitarians would still prefer marginal improvements even if all suffering didn't end (or in this case, a utilitarian might prefer many become free even if all didn't become free). The sentiment is interesting because it doesn't acknowledge marginal states that utilitarians are happy to compare against ideal states, or worse marginal states.
Yeah, I think that some percentage of this problem is fixable, but I think one issue is that there are lots of important critiques that might be made from a place of privileged information, and filling in a form will be deanonymizing to some extent. I think this is especially true when an actor's actions diverge from stated values/goals — I think many of the most important critiques of EA that need to be made come from actions diverging from stated values/goals, so this seems hard to navigate. E.g. I think your recent criminal justice reform post is a pret...
Thanks for the response!
RE 5d chess - I think I've experienced this a few times at organizations I've worked with (e.g. multiple funders saying, "we think it's likely someone else will fund this, so are not/only partially funding it, though we want the entire thing funded," and then the project ends up not fully funded, and the org has to go back with a new ask/figure things out). This is the sort of interaction I'm thinking of here. It seems costly for organizations and funders. But I've got like an n=2 here, so it might just be chance (though one person at...
Yeah that makes sense to me. To be clear, the fact that two smart people have told me that they disagree with my sense that moral realism pushes against consistency seems like good evidence that my intuitions shouldn't be taken too strongly here.
I definitely agree with this. Here are a bunch of ideas that are vaguely in line with this that I imagine a good critique could be generated from (not endorsing any of the ideas, but I think they could be interesting to explore):
Yeah those are fair - I guess it is slightly less clear to me that adopting a person-affecting view would impact intra-longtermist questions (though I suspect it would), but it seems more clear that person-affecting views impact prioritization between longtermist approaches and other approaches.
Some quick things I imagine this could impact on the intra-longtermist side:
That's interesting and makes sense — for reference I work in EA research, and I'd guess ~90%+ of the people I regularly engage with in the EA community are really interested / excited about EA ideas. But that percentage is heavily influenced by the fact that I work at an EA organization.
Thanks for sharing these! It looks like this list ends at H (with some Ls at the beginning). I was wondering if it got cut off, or if that's coincidental?
My spouse shared this view when reading a draft of this post, which I found interesting because my intuitions went somewhat strongly the other way.
I don't really have strong views here, but it seems like there are three possible scenarios for realists:
And in 2/3 of those, this problem might exist, so I leaned toward saying that this was an issue for reali...
I'd be interested in a survey on this.
My impression is that realism isn't a majority view among EAs, but is way higher than the general non-religious public / greater tech and policy communities that lots of EAs come out of.
Though I think this is something I want to see critiqued regardless of realist-ness.
I think I agree with everything here, though I don't think the line is exactly people who spend lots of time on EA Twitter (I can think of several people who are pretty deep into EA research and don't use Twitter/aren't avid readers of the Forum). Maybe something like, people whose primary interest is research into EA topics? But it definitely isn't everyone, or the majority of people into EA.
It probably depends on the area, but non-welfare-related impact is going to vary significantly by industry. E.g. I imagine that agricultural insecticide use has fairly substantial environmental impacts, but that residential insecticides do not. I haven't looked into this at all, but I'd guess there are many ways in which these industries are bad and also good (they all exist because they provide some useful benefit) besides the welfare implications.
I think that I agree with many aspects of the spirit of this, but it is fairly unclear to me that, if organizations just tried to pay market rates for people to the extent that is possible, it would result in this. I don't think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people were weighted more highly or something, etc.), and I think different areas in the movement have different philosophies around compensation, so it seems like there are...
Thanks for sharing this! I think that it is tough that the experiences you list are shared by many other people with ops experience. I also think that something I've witnessed at a lot of organizations is that growth can be somewhat stumbling - e.g. new non-ops staff are added until ops is overwhelmed, and only then are ops staff added.
To mildly shamelessly plug my own employer, Rethink Priorities has been really focusing on offsetting some of these challenges, including doing things like:
Hey Charles!
Sure thing! I am really excited about this position. I think the main motivation is that there are a lot of things where it seems like there ought to be summaries of the evidence for what the best practice is on an operational question, but there just isn't good information out there. So, we're hoping that some combination of literature review and self-experimentation can help us ensure we are operating efficiently and intelligently as we grow.
In response to your specific thoughts:
Hey!
We set the title level for the Special Projects Associate roles for a few reasons:
I think it is likely that if someone came in who had a fairly deep background in operations relevant to these roles, we'd basically evaluate them for a different title level on an individual basis.
I think we'd al...
The paper wasn't trying to assess insect sentience, but was evaluating welfare considerations for crickets due to the potential risk of cricket sentience from a precautionary principle perspective. So it doesn't go into detail on cricket sentience, and primarily refers to this paper as a primer on why we might take insect pain as a potential reality.
For a more thorough background on insect sentience, I recommend Rethink Priorities Invertebrate Sentience series, and Moral Weight Project (though neither looked at crickets specifically).