One source of tension within EA is the divide between those who celebrate the increasing focus of highly-engaged EAs and major orgs on long-termism and those who embrace the older, short-termist focus on high standards of evidence. While I tend to fall into the former group, I'm also concerned by some of the comments I've heard from people in the latter group wondering how much of a place there is for them in EA.

As with many questions, there's no reason to limit our answer to a binary "yes" or "no". There are different degrees and ways in which EA could be explicitly long-termist, and the main ones need to be considered separately.

The simplest question to address is whether organisations that have come to embrace long-termism in both their thought and actions should explicitly adopt that label[1]. I think they should: both because trust is essential for cooperation and because concealing where they stand would be a transparent trick that could never work over the long term. If any dissatisfied short-termists focus their frustration on such explicit declarations, I would see this as both misguided and counterproductive, since it's important for these organisations to be open about where they stand.

The next question is how much the increasing support for long-termism among highly-engaged EAs and orgs should affect the distribution of resources[2]. Perhaps there is a less contentious way of framing this issue, but I think it's important to face it openly, rather than discussing it in the evasive terms a politician might use. Again, I think the answer here is very simple: of course it should affect the distribution. I'm not suggesting that the distribution of resources should be a mere popularity contest, but insofar as we respect the opinions of people within this group, it ought to cause some kind of Bayesian update[3].

The last question I'll consider is to what degree it ought to affect the distribution of resources[4]. Firstly, there are the moral uncertainty arguments, which Will MacAskill has covered sufficiently that there's no need for me to go over them here.

Secondly, many of the short-term projects that EA has pursued have been highly effective and I would see it as a great loss if such projects were to suddenly have the rug pulled out from underneath them. Large or sudden shifts have all kinds of negative consequences, from demoralizing staff, to wasting previous investments in staff and infrastructure, to potentially bankrupting organisations that would otherwise have been sustainable.

Stepping beyond the direct consequences, I would also be worried about what this means for what has, up until now, been a highly beneficial alliance between people favoring different cause areas. Many EA groups are rather small. There appears to be a minimum critical mass required for a group to be viable, and if too many short-termists were to feel unsupported[5], many groups might not be able to reach it. This is especially concerning given how many long-termists (myself included) passed through a period of short-termism first.

There are also important economies of scale, such as having a highly-skilled movement builder promoting EA in general, rather than a specific cause area. So too with having some kind of national infrastructure for donation routing and organising a local EAGx. Long-termist organisations also benefit from being able to hire ops people who are value-aligned, but not explicitly long-termist.

Beyond this, I think there are significant benefits of EA pursuing projects which provide more concrete feedback and practical lessons than long-termist projects often provide. I see these projects as important for the epistemic health of the movement as a whole.

Perhaps it feels like I'm focusing too much upon the long-termist perspective, but my goal in the previous paragraphs was to demonstrate that, even from a purely long-termist perspective, too much of a shift towards long-termism would be counterproductive[6].

Nonetheless, the increasing prominence of long-termism suggests that EA may need to rebalance its relationship with short-termist projects in a way that respects both that prominence and the valuable contributions of short-termists.

You may be wondering, is such a thing even possible? I think it is, although it would involve shifting some resources dedicated towards short-termism[7] from supporting short-termist projects to directly supporting short-termists[8]. I think that if the amount of resources available is reduced, it is natural to adopt a strategy that could be effective with smaller amounts of money[9].

And the strategy that seems most sensible to me would be to increase the focus on incubation and seed funding, with less of a focus on providing long-term funding, though a certain level of long-term funding would still be provided by dedicated short-termist EAs[10]. I would also be keen on increased support for organisations such as Giving What We Can, Raising for Effective Giving and Founders Pledge: insofar as they can attract funding from outside the EA community to short-termist projects, they can make up some of the shortfall from shifting the focus of the EA community more towards long-termism.

Beyond this, insofar as organisations lean heavily long-termist, it makes sense to create new organisations to serve EAs more generally. The example that immediately springs to mind is how in addition to 80,000 Hours, there are now groups such as Probably Good and Animal Advocacy Careers.

Alternatively, some of these problems could be resolved by long-termists building their own infrastructure. For example, the existence of the Alignment Forum means that the EA Forum isn't overrun with AI Safety discussion. Similarly, if EAG is being overrun with AI Safety people, it might almost make sense to run two simultaneous conferences right next to each other, so that people interested in global poverty can network with each other while still being able to walk over to the next hall if they want to mix with people interested in AI Safety.

So to summarise:

  • Organisations should be open about where they stand in relation to long-termism.
  • The distribution of resources should reflect the growing prominence of long-termism to some degree, but it would be a mistake to undervalue short-termism, especially if this led the current alliance to break down[11].
  • There should be less of a focus on providing long-term funding for direct short-termist work and more focus on incubation, seed funding and providing support for short-termists within the EA community. This would increase the amount of resources available to long-termist projects, whilst also keeping the current alliance system strong.
  • Long-termists should also develop their own institutions as a way of reducing contention over resources.
  1. ^

    This isn't a binary. An organisation may say, "Our strategy is mostly focused upon long-termist aims, but we make sure to dedicate a certain amount of resources towards promising short-termist projects as well".

  2. ^

    I'm using resources in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given that the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute amount of resources dedicated towards short-termist projects.

  3. ^

    The focus of this article is not so much on arguing in favour of long-termism - other people have covered this sufficiently - but on thinking through the strategic consequences of it.

  4. ^

    One point I don't address in the main body of the text is how much of a conflict there actually is between investing resources in long-termism and short-termism. This is especially the case in terms of financial resources, given how well-funded EA is these days. I guess I'm skeptical of the notion that we wouldn't be able to find net-positive uses for additional funding, but even if there isn't really a conflict now, I suspect that there will be in the near future as AI Safety projects scale up.

  5. ^

    I'm not entirely happy with this wording, as the goal isn't merely to make short-termists feel supported, but to actually be providing useful support in terms of allowing them to have the greatest impact possible.

  6. ^

    I've intentionally avoided going too much into the PR benefits of short-termism for EA because of the potential for PR concerns to distort the epistemology of the community, as well as to corrode internal trust. Beyond this, it could easily be counterproductive, because people can often tell if you're just doing something for PR, and it would be extremely patronising towards short-termists. For these reasons I've focused on the model of EA as a mutually beneficial alliance between people who hold different views.

  7. ^

    Here I'm primarily referring to resources being donated to short-termism by funders who aren't necessarily short-termist themselves, but who dedicate some amount of funding due to factors such as moral uncertainty. These funders are the most likely to be sympathetic to the proposal I'm making here.

  8. ^

    I accept there are valid concerns about the tendency of organisations to focus on the interests of in-group members at the expense of the stated mission. On the other hand, I see it as equally dangerous to swing too much in the other direction where good people leave or become demoralised because of insufficient support. We need to chart a middle course.

  9. ^

    Large-scale poverty reduction projects cost tens of millions of dollars, so a small percentage reduction in the amount of money dedicated towards short-termist projects would enable a significant increase in the amount of money for supporting short-termists.
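    (For illustration only, with purely hypothetical numbers: if roughly $50 million per year were going to short-termist projects, redirecting just 2% of that would free up around $1 million per year for directly supporting short-termists.)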

  10. ^

    Additionally, I'm not claiming that persuadable EAs should move completely to this model; just somewhere along this axis.

  11. ^

    Stronger: In any kind of relationship, you really don't want to be anywhere near the minimal level of trust or support needed to keep things together.

Comments
Joey:

"Organisations should be open about where they stand in relation to long-termism."

Agree strongly with this. One of the most common reasons I hear for people reacting negatively to EA is feeling tricked by self-described "cause open organizations" that really just focus on a single issue (normally AI).

Strongly upvoted. Which organizations are those?

This is interesting and I'm glad you're bringing the discussion up. I think your footnote 2 demonstrates a lot of my disagreements with your overall post:

I'm using resources in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given that the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute amount of resources dedicated towards short-termist projects.

Consider this section: 

Secondly, many of the short-term projects that EA has pursued have been highly effective and I would see it as a great loss if such projects were to suddenly have the rug pulled out from underneath them. Large or sudden shifts have all kinds of negative consequences, from demoralizing staff, to wasting previous investments in staff and infrastructure, to potentially bankrupting organisations that would otherwise have been sustainable.

As a practical matter, Alexander Berger (with a neartermism focus) was promoted to a co-ED position at Open Phil, and my general impression is that Open Phil very likely intends to spend much more $s on neartermism efforts in the foreseeable future. So I think it's likely that EA efforts with cost-effectiveness comparable or higher than GiveWell top charities will continue to be funded (and likely with larger sums) going forwards, rather than "have the rug pulled out from underneath them."

Also:

You may be wondering, is such a thing even possible? I think it is, although it would involve shifting some resources dedicated towards short-termism[7] from supporting short-termist projects to directly supporting short-termists[8]. I think that if the amount of resources available is reduced, it is natural to adopt a strategy that could be effective with smaller amounts of money[9].

You mention in footnote 2 that you're using the phrase "resources" very broadly, but now you're referring to money as the primary resource. I think this is wrong because (especially in LT and meta) we're bottlenecked more by human capital and vetting capacity. 

This confusion seems importantly wrong to me (and not just nitpicking), as longtermism efforts are relatively more bottlenecked by human capital and vetting capacity, while neartermism efforts are more bottlenecked by money. So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (and relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc. capacity on longtermism projects. Getting more vetting capacity from LT people in return for $s from NT people seems like a bad trade on both fronts.

So I think it's likely that EA efforts with cost-effectiveness comparable or higher than GiveWell top charities will continue to be funded going forwards, rather than "have the rug pulled out from underneath them."

Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it's easy to make the mistake of saying "We'll never get to point X" and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we'll end up?

So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (and relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc. capacity on longtermism projects.

This is an excellent point and now that you've explained this line of reasoning, I agree.

I guess it's not immediately clear to me to what extent my proposals would shift limited community building and vetting capacity away from long-termist projects. If, for example, Giving What We Can had additional money, it's not clear to me that they would hire someone who would otherwise go to work at a long-termist organisation, although it's certainly possible.

I guess it just seems to me that even though there are real human capital and vetting bottlenecks, you can work around them to a certain extent if you're willing to just throw money at the issue. There has to be something that's the equivalent of GiveDirectly for long-termism.

Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it's easy to make the mistake of saying "We'll never get to point X" and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we'll end up?

Asking that question as a stopping point doesn't resolve the ambiguity of which parts of this are theoretical vs. practical.

If the increasing prominence of long-termism, in terms of the different kinds of resources it consumes relative to short-termist efforts, is only theoretical, then the issue is one worth keeping in mind for the future. If it's a practical concern, then, other things being equal, it could be enough of a priority that determining which specific organizations should distinguish themselves as long-termist may need to begin right now.

The decisions different parties in EA make on this subject will be the main factor determining 'where we end up' anyway.

I can generate a rough assessment, for resources other than money, of what near-termism vs. long-termism is receiving and can anticipate receiving for at least the near future. I could draft an EA Forum post for that by myself, but I could also co-author it with you and one or more others if you'd like.

Tbh, I don't have a huge amount of desire to produce more content on this topic beyond this post.

Strongly upvoted. As I was hitting the upvote button, there was a little change in the existing karma from '4' to '3', which meant someone had downvoted it. I don't know why, and I consider it responsible of downvoters to leave a comment explaining why they're downvoting, but it doesn't matter because I gave this comment more karma than can be taken away so easily.

I don't feel strongly about this, but I think there shouldn't be responsibility to explain normal downvotes if we don't expect responsibility for explaining/justifying normal upvotes. 

I think strong downvotes for seemingly innocuous comments should be explained, and it's also polite (but not obligatory) for someone to give an explanation for downvoting a comment with net negative karma (especially if it appeared to be in good faith). 

Summary: More opaque parts of discourse, like up/downvoting, are applied with standards so inconsistent and contextual that I consider it warranted for anyone to propose more objective, consistent and clear standards. I thought your comment was especially undeserving of an unexplained downvote, so I wanted to leave a signal countering any notion that the downvote was worthwhile at all.

I would prefer that both upvotes and downvotes be clarified, explained or justified, though I doubt that will become a normal expectation. Still, in my opinion it's warranted for me or any other individual to advocate for a particular (set of) standard(s), since that is better than the seeming alternative of discourse norms being subjective and contextual rather than objective and consistent.

I don't have a problem with others not starting a comment reply with 'upvoted' or 'downvoted' like I sometimes do, provided the reaction is expressed in other ways. I received a downvote the other day and there was only one commenter. He didn't tell me he had downvoted me, but he criticized the post for not being written clearly enough. That's okay.

What frustrated me is that your comment is of sufficient quality that I expect the downvote was likely because someone did not like what you said on a polarized subject, i.e., they perceived it as too biased in favour of either short-termism or long-termism. They may have a disagreement, but if they don't express it on an important topic, and it's just an emotive negative reaction when you're only trying to be constructive, their downvote is futile. From conversations off the EA Forum, that seems to be the most common reason for downvotes on the EA Forum. Given the effort you put into a constructive comment, I wanted to counter this egregious case as having been pointless.

One aspect is that working to prevent existential disasters (arguably the most popular longtermist cause area) may also be comparatively effective from the perspective of the relatively near future.

Also, some EA orgs are already quite explicitly longtermist, as far as I can tell (e.g. Longview Philanthropy).

Is there an assessment of how big this problem really is? How many people distributed across how many local EA groups are talking about this? Is there a proxy/measure for what impact these disputes are having?
