All of abrahamrowe's Comments + Replies

The paper wasn't trying to assess insect sentience; rather, it was evaluating welfare considerations for crickets, given the potential risk of cricket sentience, from a precautionary principle perspective. So it doesn't go into detail on cricket sentience, and primarily refers to this paper as a primer on why we might take insect pain seriously as a potential reality.

For a more thorough background on insect sentience, I recommend Rethink Priorities' Invertebrate Sentience series and Moral Weight Project (though neither looked at crickets specifically).

1
David_R
13d
Thanks for the recommendations!
4
Jackson Wagner
17d
Yeah, I wondered what threshold to set things at -- $10m is a pretty easy bar for some of these areas, since of course some of my listed cause areas are more niche / fringe than others. I figure that for the highest-probability markets, where $10m is considered all but certain, maybe I can follow up with a market asking about a $50m or $100m threshold. I agree that $10m isn't "mainstream" in the sense of joining the pantheon alongside biosecurity, AI safety, farmed animal welfare, etc. But it would still be a big deal to me if, say, OpenPhil doubled their grantmaking to "land use" and split the money equally between YIMBYism and Georgism. Or if mitigating stable totalitarianism risk got as much support as "progress studies"-type stuff. $10m of grants towards studying grabby aliens or the simulation hypothesis, etc., would definitely be surprising!

Edited to remove my comment since it is off topic. I'm happy to talk about this though if people want to in other contexts! I definitely think this is a pretty important question, and looking into how fiscal sponsorship arrangements are working in reality is important, as I imagine there is high variance in how effective oversight mechanisms are (though I think RP has done this well).

Hi,

(writing as the COO of Rethink Priorities).

Nonlinear is not, and has never been, fiscally sponsored by Rethink Priorities. RP has never had a legal or financial connection to Nonlinear.

In the grant round you cite, it looks like the receiving charity is listed as Rethink Charity. RP was fiscally sponsored by RC until 2020, but RC is a separate legal entity with a separate board, and RP has had no legal connection to it since then.

4
Rockwell
3mo
@abrahamrowe, I'm curious if you have insights on the larger point about good governance across the EA ecosystem. As evidenced by EV's planned disbanding, sponsorship arrangements have a higher potential to become fraught. The opacity of the relationship between Rethink Charity and Nonlinear might be another example. (I.e. This is further indication Nonlinear employees wouldn't have had the same protection and recourse mechanisms as employees of more conventionally governed 501c3s, especially those of established 501c3s sizeable enough to hire 21 staff members.) Given RP is growing into one of the larger fiscal sponsors through your Special Projects Team, it might be worth further commentary from the RP team on how you're navigating risk and responsibility in sponsorship arrangements. Given RP's track record of proactive risk mitigation, I imagine you all have given this ample thought and it might serve as a template for others.
8
Habryka
3mo
Oops, sorry! I did indeed think that you were still part of the same legal structure! Correcting myself: I guess the following is Nonlinear's board (the board of Rethink Charity)? Will also update my other comment to correct for this.

Not Peter, but looking at the last ~20 roles I've hired for, I'd guess that during hiring, maybe 15 or so had an alternative candidate who seemed worth hiring (though who perhaps did worse in some scoring system). These were all operations roles within an EA organization. For the 2 more senior roles I hired for during that time, there appeared to be suitable alternatives; for other less senior roles there weren't (though I think the opposite generally tends to be more true).

I do think one consideration here is that we are talking about who looked best during hiring. Th... (read more)

The main out of context bit is that Elizabeth's comment seemed to interpret Marcus as only referring to salary, when the full comment makes it very clear that it wasn't just about that, which seemed like a strong misreading to me, even if the 10x factor was incorrect.

I suspect the actual "theoretically maximally frugal core EA organization with the same number of staff" is something like 2x-3x cheaper than current costs, if salaries moved to the $50k-$70k range.

9
Jeff Kaufman
4mo
I see Elizabeth as saying "all expenses are staff expenses", which is broader than salary and includes things like office and food? I think Elizabeth was pointing out that the calculation implies only staff costs contribute to the overall budget, because if you have expenses for things other than staff (ex: in my current org we spend money on lab reagents and genetic sequencing, which don't go down if we decide to compensate more frugally), then your overall costs will drop less than your staff costs.

That doesn't seem quite right to me - Longview and EG don't strike me as earning-to-give outreach, though they definitely bring funds into the community. And Founders Pledge is clearly only targeting very large donors. To be more specific: nothing like massive, multi-million dollar E2G outreach has been tried for mid-sized / everyday earning to give, though you're definitely right that effort has gone into bringing in large donors.

3
calebp
4mo
I guess I don't really know what you have in mind. I'm particularly confused as to what definition of outreach you're using if it excludes Longview. GWWC almost certainly has an annual budget of over $1m and is also pretty clearly investing in e2g. I don't think there's been much "e2g as a career" stuff since 80k pivoted away from that message - but that's obviously much narrower than e2g outreach.

I think that one thing I reflect on is how much money has been spent on EA community building over the last 5 years or so. I'm guessing it is several tens of millions of dollars. My impression (which might not be totally right) is that little of that went to promoting earning to give. So it seems possible that in a different world, where a much larger fraction was used on E2G outreach, we could have seen a substantial increase. The GWWC numbers are hard to interpret, because I don't think anything like massive, multi-million dollar E2G outreach has been tri... (read more)

3
tobytrem
4mo
Thanks, this makes sense. I also have the impression that E2G was deprioritised. In some cases, I've seen it actively spoken against (in the vein of "I guess that could be good, but it is almost no one's most impactful option"), mostly as pushback to the media impression that EA=E2G. I also see the point that more diversified funding seems good on the margin, to give organisations more autonomy and security. One thing that isn't mentioned in this piece is the risks that come from relying on a broader base of donors. If the EA community would need to be much, much larger in order to support the current organisations with more diversified funding, how would this change the way the organisations acted? In order to keep a wider range of funders happy, would there be pressures to appeal to a common denominator? Would it in fact become harder to service and maintain relations with a wider range of donors? I presume that the ideal is closer to the portfolio you sketch for an organisation like RP in this comment.
3
calebp
4mo
I’m pretty sure that GWWC, Founders Pledge, Longview and Effective Giving all have operating costs of over $1M per year. It seems like quite a lot of effort has gone into testing the e2g growth hypothesis.

Yeah, I think there is an open question of whether or not this would cause a decline in the impact of what's funded, and this is one of the better arguments for why it would.

I think one potential middle-ground solution to this is having, say, 5x as many EA Funds-type vehicles, with more grantmakers representing more perspectives / approaches, etc., and those funds funded by a more diverse donor base - so that you still have high-quality vetting of opportunities, but also grantmaking bodies that are responsive to the community, and some level of donor diversity possible for organizations.

3
Neel Nanda
4mo
Yeah, that intermediate world sounds great to me! (though a lot of effort, alas)

Minor downvote because this comment seems to take Marcus's comment out of context / misread it:

Catered lunches, generous expense policies, large benefits packages and ample + flexible + paid time off become a pot luck once a week, basic healthcare coverage and 2 weeks of vacation. All of a sudden, running a 10 person organization takes $1M instead of $10M and it becomes much more feasible to get 30 x $10-30k with a couple of 50-100k donations to cover the cost of the organization.

I don't think the numbers are likely exactly right, but I think the broad po... (read more)

Downvoted because this comment seems to take Marcus's comment out of context / misread it ... I don't think the numbers are likely exactly right, but I think the broad point is correct.

I think it depends a lot on whether you think the difference between 10x ($10M vs $1M) and 1.4x (a 30% savings leaves ~70% of costs, i.e. ~1.4x) is a big deal? (I think it is)

FWIW, my experience (hiring mostly operations roles) is often the opposite - I find that for non-senior roles I usually reach the end of a hiring process making a pretty arbitrary choice between multiple candidates who all seem quite good on the (relatively weak) evidence from the hiring round. But I also think RP filters a lot less heavily on culture fit / value alignment for ops roles than CEA does, which might be the relevant factor making this difference.

Yeah definitely - that's a more elegant way.

FWIW, I mildly disagree with this, because a major part of the appeal of donation elections stuff (if done well) is that the results more closely model a community consensus than other giving mechanisms, and being able to donate votes would distort that in some sense. I think I don't see the appeal of being able to donate votes in this context over just telling Jenifer + Alan that they can control where one donates to some extent, or donating to a fund. Or, if not donating to the election fund, just asking Jenifer + Alan for their opinion and changing your own mind accordingly.

I think since there can be multiple winners, letting people vote on their ideal distribution and then averaging those distributions would be better than direct voting, since it most directly represents "how voters think the funds should be split on average" or similar, which seems like what you want to capture? And it's also still very understandable, I hope.

E.g. if I think 75% of the pool should go to LTFF, 20% to GiveWell, 5% to the EA AWF, and 0% to all the rest, I vote 75%/20%/5%/0%/0%/0%, etc. Then you take the average of those distributions across all vote... (read more)
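For concreteness, here is a minimal sketch in Python of the averaging rule I have in mind (the fund names and ballots are just the hypothetical example above, not any actual Forum implementation):

    # Sketch of "average the proposed distributions" voting. Assumes each
    # ballot maps fund -> fraction of the pool, with fractions summing to 1.
    def average_distribution(ballots):
        funds = {fund for ballot in ballots for fund in ballot}
        n = len(ballots)
        return {f: sum(b.get(f, 0.0) for b in ballots) / n for f in funds}

    # Example with two voters; funds omitted from a ballot count as 0%.
    ballots = [
        {"LTFF": 0.75, "GiveWell": 0.20, "EA AWF": 0.05},
        {"LTFF": 0.50, "GiveWell": 0.50},
    ]
    print(average_distribution(ballots))
    # -> {'LTFF': 0.625, 'GiveWell': 0.35, 'EA AWF': 0.025} (key order may vary)

Since each ballot sums to 1, the averaged shares also sum to 1, so they can be applied directly to the pool.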

2
Kirsten
5mo
If we're thinking of it as "ideally I'd like 75% of the money to go here, 20% here, etc" we could just give people 100 votes each and give money to the top 3?

Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF i... (read more)

Thanks for adding this feature!

I am also interested in how this is structured from a licensing perspective - this is relevant for content posted with permission but not owned by the original poster (which matters in some cases I'm looking into), and also for people's old content generally. Would the Forum team be able to clarify who owns the audio versions, and what the licensing on pieces posted prior to the current terms of use was? My impression was that they were owned by the authors, but I can't find any records of the terms of use prior to this ... (read more)

I'm curious why people downvoted this comment! (when I posted this, it was at 0 with four votes, and I strongly upvoted it to 7). I think it is an important question and is currently unanswered. For reference on its importance — it's directly relevant to me in a context related to doing my work for an EA organization, and in particular trying to catalogue historic IP.

I'm not sure about the academic literature, but will add anecdotally that my impression is that the PTC (price, taste, convenience) hypothesis is extremely widespread within the advocacy space - people talk about it a ton.

I'll also add that the "necessary but not sufficient" line feels hard to interpret without more clarification (and a bit meaningless on its own because of this). It would be helpful if people pushing this position could clarify how much of the work PTC is doing to reach sufficiency. E.g. if one thinks that if we reach PTC parity, and it's done like 90% of the work ... (read more)

2
Jacob_Peacock
7mo
(Abraham and I both work for Rethink Priorities.) I agree, especially with your points on "necessary but not sufficient." In my view, this represents mostly a pivot from the PTC hypothesis. I'm not sure whether to view this as post hoc hypothesizing (generally bad) or merely updating-on-evidence (generally good). I do think the question of "what percent of the 'work' is PTC?" is probably not well-defined, but is likely a worthwhile starting point for disagreement.

Thanks for the question!

Across its lifetime, RP has spent around $13,976,000.

In terms of FTE-years, RP staff have completed around 95 to 100, and we've funded external collaborators for another 55 to 60, so I'd estimate that in total, the input was something like 150 to 160 FTE-years of work.

Yeah, I definitely agree with that - I think a pretty common issue is people entering people management on the basis of their skills at research, and the two don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management, and that seems like an issue when you have ambitious people who want to grow professionally but no options except people management.

I agree with several of your points here, especially the one about reinventing the wheel, but I think the first and last miss something. I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.

The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done”. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest

... (read more)
5
Grayden
1y
Thanks for this! You might be right about the non-profit vs. for-profit distinction in 'operations', and your point about the COO being 'Operating' rather than 'Operations' is a good one. Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset that involves people management, financial planning, etc. - skills that are often pushed onto operations teams by people who shouldn't be managers.

If you're still interested in joining Rethink Priorities' board, we've extended the deadline to submit an application to January 20th. We'd love to hear from you by then! Apply today.

Not sure if it is active anymore, but there is a longstanding hub for EAs to do this: https://donationswap.eahub.org/

4
Jason
1y
I couldn't get the contact form to work, so tried the e-mail address. Some of the information looked dated, so I asked if they could use some assistance updating if this is still an active project. To the extent the maintainers need support, this seems like it could be pretty high-impact for the modest amount of work that should be needed to keep it up to date.

I've noticed that it can take new orgs up to a year to show up in that search, so it might also be that they've applied for or gotten the status recently (given that the FTX stuff was so new). A Delaware corporation search suggests they are registered as a nonprofit corporation in Delaware: https://icis.corp.delaware.gov/ecorp/entitysearch/NameSearch.aspx (you have to search by name).

Unfortunately not! We use Greater Wrong because it lets us make an RSS feed for a specific Forum tag. E.g., we have a communications Slack channel where any post tagged "Rethink Priorities" is automatically posted via an RSS feed.

This isn't really that big a deal for us - I just thought I'd mention it here :)

This is minor, and probably not relevant to most people, but my org (Rethink Priorities) would definitely use an RSS feed version of the Forum so we can get Slack notifications when things with certain tags are posted. I think we could do this now with an account / notifications to email / email to Slack, but for simplicity we are instead using Greater Wrong (e.g. this feed goes to our comms Slack channel: https://ea.greaterwrong.com/topics/rethink-priorities?format=rss). Thanks for all you do!
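For anyone wanting to set up something similar, here is a rough sketch in Python of the polling workflow described above (the Slack webhook URL is a placeholder and the polling loop is illustrative; this is not our actual tooling):

    # Poll the Greater Wrong per-tag RSS feed and post new items to Slack.
    import time
    import xml.etree.ElementTree as ET
    import requests  # third-party: pip install requests

    FEED_URL = "https://ea.greaterwrong.com/topics/rethink-priorities?format=rss"
    SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

    seen = set()  # links already announced (in-memory; first pass posts the whole feed)

    def check_feed():
        feed = requests.get(FEED_URL, timeout=30)
        for item in ET.fromstring(feed.content).iter("item"):
            link, title = item.findtext("link"), item.findtext("title")
            if link and link not in seen:
                seen.add(link)
                requests.post(SLACK_WEBHOOK, json={"text": f"New tagged post: {title}\n{link}"})

    while True:
        check_feed()
        time.sleep(15 * 60)  # check every 15 minutes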

1
Sharang Phadke
1y
We do have a few different default RSS feeds, which you can find in the left sidebar. Does that meet your needs?

Yeah, I agree with this entirely. I think that probably most good critiques should result in a change, so just talking about doing that change seems promising.

That makes sense to me.

Yeah, I definitely think that many people from left-leaning spaces who come to EA also become sympathetic to suffering-focused work in my experience, which also seems consistent with this.

Definitely mostly using it to mean focused on x-risk, but mainly because that seems like the largest portion / biggest focus area for the community.

I interpret that Will MacAskill quote as saying that even the most hardcore longtermists care about nearterm outcomes (which seems true), not that lead reduction is supported from a longtermist perspective. I think it's definitely right that most longtermists I meet are excited about neartermist work. But I also think that the social pressures in the community currently still push toward longtermism.

To be clear, ... (read more)

I think something you raise here that's really important is that there are probably fairly important tensions to explore between the worlds that having a neartermist view and longtermist view suggest we ought to be trying to build, and that tension seems underexplored in EA. E.g. an inherent tension between progress studies and x-risk reduction.

Yeah, that's fair - there are definitely people in the community who take them seriously. To clarify, I meant that person-affecting views seem pretty widely dismissed in the EA funding community (though the word "universally" is probably too strong there too).

That doesn't seem quite right - negative utilitarians would still prefer marginal improvements even if all suffering didn't end (or in this case, a utilitarian might prefer that many become free even if not all did). The sentiment is interesting because it doesn't acknowledge the marginal states that utilitarians are happy to compare against ideal states (or against worse marginal states).

2
Oliver Sourbut
2y
Got it, I think you're quite right on one reading. I should have been clearer about what I meant, which is something like:

  • there is a defensible reading of that claim which maps to some negative utilitarian claim (without necessarily being a central example)
  • furthermore, I expect many issuers of such sentiments are motivated by basically pretheoretic negative utilitarian insight

E.g. imagine a minor steelification (which loses the aesthetic and rhetorical strength) like "nobody's positive wellbeing (implicitly stemming from their freedom) can/should be celebrated until everyone has freedom (implicitly necessary to escape negative wellbeing)", which is consistent with some kind of lexical negative utilitarianism. You're right that if we insist that 'freedom' be interpreted identically in both places (parsimonious, granted, though I think the symmetry is better explained by aesthetic/rhetorical concerns), another reading explicitly neglects the marginal benefit of lifting merely some people out of illiberty. Which is only consistent with utilitarianism if we use an unusual aggregation theory (i.e. minimising) - though I have also seen this discussed under negative utilitarianism.

Anecdata: as someone whose (past) political background and involvement (waning!) is definitely some kind of lefty, and who, if it weren't for various x- and s-risks, would plausibly consider some form (my form, naturally!) of lefty politics to be highly important (if not highly tractable), my reading of that claim at least goes something like the first one. I might not be representative in that respect. I have no doubt that many people expressing that kind of sentiment would still celebrate marginal 'releases', while considering it wrong to celebrate further the fruits of such freedom, ignoring others' lack of freedom.

Yeah, I think that some percentage of this problem is fixable, but one issue is that there are lots of important critiques that can only be made from a place of privileged information, and filling in a form will be deanonymizing to some extent. I think this is especially true when an actor's actions diverge from stated values/goals - many of the most important critiques of EA that need to be made come from actions diverging from stated values/goals, so this seems hard to navigate. E.g. I think your recent criminal justice reform post is a pret... (read more)

3
sapphire
2y
There are multiple examples of EA orgs behaving badly I can't really discuss in public. The community really does not ask for much 'openness'.

Thanks for the response!

RE 5d chess - I think I've experienced this a few times at organizations I've worked with (e.g. multiple funders saying, "we think it's likely someone else will fund this, so we are not/only partially funding it, though we want the entire thing funded," and then the project ends up not fully funded, and the org has to go back with a new ask / figure things out). This is the sort of interaction I'm thinking of here. It seems costly for organizations and funders. But I've got like an n=2 here, so it might just be chance (though one person at... (read more)

4
calebp
2y
I found this helpful and I feel like it resolved some cruxes for me. Thank you for taking the time to respond!

Yeah that makes sense to me. To be clear, the fact that two smart people have told me that they disagree with my sense that moral realism pushes against consistency seems like good evidence that my intuitions shouldn't be taken too strongly here.

I definitely agree with this. Here are a bunch of ideas that are vaguely in line with this that I imagine a good critique could be generated from (not endorsing any of the ideas, but I think they could be interesting to explore):

  • Welfare is multi-dimensional / using some kind of multi-dimensional analysis captures important information that a pure $/lives saved approach misses.
    • Relatedly, welfare is actually really culturally dependent, so using a single metric misses important features.
  • Globalism/neoliberalism are bad in the longterm for some variety of reas
... (read more)
-5
Oliver Sourbut
2y

Yeah, those are fair - I guess it is slightly less clear to me that adopting a person-affecting view would impact intra-longtermist questions (though I suspect it would), but it seems clearer that person-affecting views impact prioritization between longtermist approaches and other approaches.

Some quick things I imagine this could impact on the intra-longtermist side:

  • Prioritization between x-risks that cause only human extinction vs extinction of all/most life on earth (e.g. wild animals).
  • EV calculations become very different in general, and probably glo
... (read more)
4
Anthony DiGiovanni
2y
Agreed—while I expect people's intuitions on which is "better" to differ, a comprehensive accounting of which bullets different views have to bite would be a really handy resource. By "comprehensive" I don't mean literally every possible thought experiment, of course, but something that gives a sense of the significant considerations people have thought of. Ideally these would be organized in such a way that it's easy to keep track of which cases that bite different views are relevantly similar, and there isn't double-counting.
3
Pablo
2y
There are also person-neutral reasons for caring more about the extinction of all terrestrial life vs. human extinction. (Though it would be very surprising if this did much to reconcile person-affecting and person-neutral cause prioritization, since the reasons for caring in each case are so different: direct harms on sentient life, versus decreased probability that intelligent life will eventually re-evolve.)

That's interesting and makes sense — for reference I work in EA research, and I'd guess ~90%+ of the people I regularly engage with in the EA community are really interested / excited about EA ideas. But that percentage is heavily influenced by the fact that I work at an EA organization.

2
ShayBenMoshe
2y
Yeah, that makes sense, and is a fairly clear selection bias. Since here in Israel we have a very strong tech hub and many people finishing their military service in elite tech units, I see the opposite selection bias: people not finding many EA (or even EA-inspired) opportunities that are of interest to them. I failed to mention that I think your post was great, and I would also love to see (most of) these critiques fleshed out.

Thanks for sharing these! It looks like this list ends at H (with some Ls at the beginning). I was wondering if it got cut off, or if that's coincidental?

2
Lizka
2y
Thanks for asking! Unless I got things wrong when I was transferring the Google Doc to the Forum post, there wasn't anything from M-Z or from I-M. (Some organizations on the list didn't have an update this month, apparently, and also the list of organizations is pretty early-alphabet-heavy.)

My spouse shared this view when reading a draft of this post, which I found interesting because my intuitions went somewhat strongly the other way.

I don't really have strong views here, but it seems like there are three possible scenarios for realists:

  • Morality follows consistent rules and behaves according to a logic we currently use
  • Morality follows consistent rules but doesn't behave according to a logic we currently use
  • Morality doesn't follow consistent rules

And in 2/3 of those, this problem might exist, so I leaned toward saying that this was an issue for reali... (read more)

1
Anthony DiGiovanni
2y
For the record I also don't find that post compelling, and I'm not sure how related it is to my point. I think you can coherently think that the moral truth is consistent (and that ethics is likely to not be consistent if there is no moral truth), but be uncertain about it. Analogously I'm pretty uncertain what the correct decision theory is, and think that whatever that decision theory is, it would have to be self-consistent.

I'd be interested in a survey on this. 

My impression is that realism isn't a majority view among EAs, but is far more common than in the general non-religious public / the broader tech and policy communities that lots of EAs come out of.

Though I think this is something I want to see critiqued regardless of realist-ness.

I think I agree with everything here, though I don't think the line is exactly people who spend lots of time on EA Twitter (I can think of several people who are pretty deep into EA research and don't use Twitter/aren't avid readers of the Forum). Maybe something like, people whose primary interest is research into EA topics? But it definitely isn't everyone, or the majority of people into EA.

It probably depends on the area, but non-welfare-related impact is going to vary significantly by industry. E.g. I imagine that insecticide use has fairly substantial environmental impacts, but that residential insecticides do not. I haven't looked into this at all, but I'd guess there are many ways in which these industries are bad and also good (they all exist because they provide some useful benefit) besides the welfare implications.

I think that I agree with many aspects of the spirit of this, but it is fairly unclear to me that, if organizations just tried to pay market rates to the extent possible, it would result in this - I don't think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people were weighted more highly, etc.), and I think different areas in the movement have different philosophies around compensation, so it seems like there are... (read more)

Thanks for sharing this! It is tough that the experiences you list are shared by so many other people with ops experience. Something I've witnessed at a lot of organizations is that growth can be somewhat stumbling - e.g. new non-ops staff are added until ops is overwhelmed, and only then are ops staff added.

To mildly shamelessly plug my own employer, Rethink Priorities has been really focusing on offsetting some of these challenges, including doing things like:

  • Having a pay system that doesn't discount ops work - ops staff are p
... (read more)

Hey Charles!

Sure thing! I am really excited about this position. I think the main motivation is that there are a lot of things where it seems like there ought to be summaries of the evidence for what the best practice is on an operational question, but there just isn't good information out there. So, we're hoping that some combination of literature review and self-experimentation can help us ensure we are operating efficiently and intelligently as we grow.

In response to your specific thoughts:

  1. I definitely think our exec teams work on these questions, but w
... (read more)
2
Charles He
2y
Thanks, this is really informative. This is a really exciting role; I hope the candidates will be fantastic and produce great work!

Hey!

We set the title level for the Special Projects Associate roles for a few reasons: 

  • We think that this could be a valuable way for people new to operations for EA organizations to gain skills.
  • We think that generally these roles would be good learning opportunities for early career EAs to explore ops careers.
  • These roles are fairly generalist

I think it is likely that if someone came in who had a fairly deep background in operations relevant to these roles, we'd basically evaluate them for a different title level on an individual basis.

I think we'd al... (read more)

1
JBPDavies
2y
Many thanks for the quick reply & clarifications Abraham! Looking forward to the information sessions.

Thanks! We are happy to be a good place to work and will keep that idea in mind for the future.

Sorry to callously steal your thunder, Peter!

1
Sim
2y
Makes sense; thanks very much both!