
The problem

Effective Altruism (EA) is a global movement that seeks to maximize the impact of charitable giving and improve the world. It was originally focused on poverty alleviation and founded on the principles of evidence-based decision making and cost-effectiveness, as demonstrated by the work of GiveWell. However, over time, the focus of EA has shifted towards longtermism. This is bad because 1) the current polycrisis affecting EA has come entirely from the longtermists, 2) it’s unclear that the overall impact of longtermism is positive, 3) clearly positive impact causes such as global poverty and animal suffering are neglected, and 4) large numbers of potential EAs and large amounts of funding are neglected.

Some examples of EA focus shifting towards longtermism:

  • The EA Handbook promotes longtermism and minimizes other areas. For example, “By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity.” It has had minor modifications since then, but still focuses on longtermism and minimizes other causes.
  • The Resources pages of Effective Altruism contain an introduction to EA and information about longtermism. For example, the top recommended books are by longtermist superstars such as William MacAskill, Toby Ord, and Benjamin Todd.
  • The most pressing world problems page at 80,000 Hours is almost entirely about longtermism. While EAs may know that 80,000 Hours is a website dedicated to promoting longtermism, many ordinary users would see it as a site about how to make a positive impact on the world through Effective Altruism.
  • Feedback from London EA Global, where longtermism is promoted with some fairly blatant nudges.

I think it’s fairly uncontroversial that EA has significantly shifted towards longtermism, but happy to discuss this more if needed.

 

Longtermism has damaged Effective Altruism

Longtermism has come under fire in recent times due to the numerous scandals associated with it. These scandals have cast a shadow on the reputation of effective altruism as a whole and have eroded public trust in the movement.

Longtermism has also been accused of channeling funding away from other effective altruist causes, against the wishes of the wider EA community. This has led to concerns that longtermism is becoming a self-serving cause that is prioritizing its own interests over the broader goals of effective altruism.

Additionally, longtermists have been criticized for holding disproportionate decision-making power within effective altruism while contributing little in return. This has led to frustration and resentment among other effective altruists, who feel that their efforts and contributions are being ignored.

Finally, the strategy of the core longtermists focuses on “highly engaged EAs” and “mega-donors”, because longtermism is “too weird” for the majority of people interested in EA and for potential donors. This focus on a small group may benefit longtermism, but it harms other causes by deterring a large pool of potential supporters and funding.

 

Longtermism has anti-synergy with the rest of Effective Altruism

Trying to forcibly marry longtermism to EA is harmful for both. For example, saying that EA is “talent constrained” makes sense for longtermists, because they are looking for geniuses with the vision to work on, say, AI alignment research; a slightly less smart person could inadvertently advance AI capabilities instead.[1] However, this makes no sense for people wanting to reduce global poverty or animal suffering. These cause areas are robust to slightly suboptimal efforts and would benefit greatly from more people working in them. This mismatch in the required talent pool is a result of forcing longtermism and EA together.

Another example is that the focus on “mega-donors” makes sense for longtermists, because it is a very “weird” cause area that requires them to work on a potential donor for an extended period of time. But this is not the case for other areas - global poverty is intuitively appealing to many people, and a small nudge will often suffice to get them to donate to more effective charities. These donations would do a huge amount of good, but outreach is de-prioritized because it doesn’t work for longtermism.

Even longtermists realize that this is becoming more generally known, and are attempting to preserve the status quo. For example, a characteristic forum post, “EA is more than longtermism”, attempted to argue that longtermism is not unduly privileged over other causes. Under the “What do we do?” section, this forum post proposed:

I’m not sure, there are likely a few strategies (e.g. Shakeel Hashim suggested we could put in some efforts to promote older EA content, such as Doing Good Better, or organizations associated with causes like Global Health and Farmed Animal Welfare).

It should be fairly clear that “putting in some efforts to promote older EA content” is not likely to have much of an impact. The proposal doesn’t seem to come from a place of genuine concern; it seems more focused on defending longtermism than on actively finding a solution.

 

Reform will not work

Many longtermists have been fairly clear that they are not interested in sharing power. For example, this comment says that “a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies”, and then explains why these longtermists will not be receptive to conventional EA arguments. In fact, a later comment makes this more explicit.

Longtermists seem to have control of the major institutions within EA, which provides them with a certain level of immunity. This has led to a situation where they may not feel the need to show concern for other cause areas. As a result, it may be challenging to bring about any meaningful changes. The current arrangement, where longtermists receive resources for their cause while negative events are attributed to the EA community as a whole, is favorable for them. Therefore, requests for reform or a more balanced distribution of power may only result in further delays.[2]

 

Solution: split longtermism into a separate organization

A more effective solution is to split longtermism into a separate organization. The major benefits would be:

  1. Splitting off longtermism would solve the anti-synergy problem mentioned above, bringing more resources to EA overall and doing more good for the world. This change could also attract more potential EAs and donors to join and contribute, as the cause would be more approachable.
  2. Detaching longtermism from effective altruism may help repair the damage caused by the recent scandals, by distancing the rest of EA from the negative image associated with longtermism. This could help regain public trust in the movement and preserve its positive impact.
  3. With longtermism as a separate entity, other causes within EA would have access to the resources they need to succeed, without being held back by any negative associations with longtermism.
  4. Separation could also lead to a more equal distribution of power, giving other causes a stronger voice within the movement and ensuring that all causes are given fair consideration.

 

What are some concerns?

While splitting longtermism and EA would be good, there are some implementation difficulties. I will discuss each in turn.

  1. Longtermists control the levers of power, which makes reform more difficult.
  2. Longtermists control the flow of money, which is necessary for all EA cause areas.
  3. Most EAs are very bad at fighting for their fair share, and would rather focus on “growing the pie”.

Longtermists control the levers of power, which makes reform more difficult.

As discussed previously, a core group of longtermists controls the power in EA. This means that longtermists would usually be unwilling to split from the rest of EA, since their power would be reduced. In the absence of an external shock, there is no reason for longtermists to share their power with competing causes.

However, the recent scandals linked to longtermism have provided an external shock that could potentially lead to positive change. These scandals have impacted all causes within EA and provide a unique opportunity for the movement to reconfigure and become stronger.

Though the scandals have certainly dealt a blow to EA, they also present a rare opportunity for growth and improvement, and it’s important to take advantage of it to ensure that EA continues to have a positive impact on the world.

 

Longtermists control the flow of money, which is necessary for all EA cause areas.

The source of longtermist power is control of funding. For example, blacklisting is powerful because it can deny funding to opponents. Because people are scared to come forward, it’s hard for opponents to organize. Forcing opponents of longtermism to remain anonymous on the EA Forum, or to heavily censor their words, destroys their ability to mobilize.

EA operations don’t usually generate cash, so they depend on a continuing stream of cashflow from aligned philanthropists. This is why individuals like Will MacAskill were closely connected with Sam Bankman-Fried. Longtermists have also leveraged this by working on major donors who support other EA causes and converting them to longtermism. Taking money from other causes also gives longtermists more power, although this is only a secondary consideration.

We now have a good opportunity to convince major donors to support traditional causes and split off longtermism into its own institutions. One approach could be to have private discussions with major longtermism donors and ask if longtermism is really the cause they want to be associated with.

 

Most EAs are very bad at fighting for their fair share, and would rather focus on “growing the pie”.

I don’t have a good solution for this - my suspicion is that many EAs in global poverty, animal suffering, and other “traditional” cause areas are uncomfortable with assisting with this type of restructuring action, as it isn’t their area of expertise. Suggestions for overcoming this are welcome.

 

  1. ^

    See this tweet for a discussion of this. 

  2. ^

    I am not saying that most longtermists support this! But these are the revealed preferences of the core EA group, and they are the ones whose opinions matter.

Comments
Buck

I think you're imagining that the longtermists split off and then EA is basically as it is now, but without longtermism. But I don't think that's what would happen. If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infrastructure and movement building.

My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does--it's not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.

I agree that longtermism's association with EA has some costs for neartermist goals, but it's really not clear to me that the association is net negative for neartermism overall. Perhaps we'll find out.

(I personally like the core EA ideas, and I have learned a lot from engaging with non-longtermist EA over the last decade, and I feel great fondness towards some neartermist work, and so from a personal perspective I like the way things felt a year ago better than a future where more of my peers are just motivated by "holy shit, x-risk" or similar. But obviously we should make these decisions to maximize impact rather than to maximize how much we enjoy our social scenes.)

My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does--it's not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.

This paragraph feels pretty over the top. When you say "resources" I assume you mean that neartermist EAs wouldn't have enough money to maintain the Forum, host EAGs, run EA Funds, etc. This doesn't feel that accurate, partly because I don't think those infrastructure examples are particularly resource- or labour-intensive, and partly because sufficient money is available to make them happen:

  • Forum: Seems like 1-2 people are working FTE on maintaining the forum. This doesn't seem like that much at all and to be frank, I'm sure volunteers could also manage it just fine if necessary (assuming access to the underlying codebase).
  • EA Funds: Again, 1-2 FTE people working on this, so I think this is hardly a significant resource drain, especially since 2.5 of the funds are neartermist.
  • EAGs: Yes, definitely more expensive than the above two bits of infrastructure, but also I know at least one neartermist org is planning a conference (tba) so I don't think this number will fall to 0. More likely it'll be less than it is right now, but one could also reasonably think we currently have more than what is optimally cost-effective.

Overall it seems like you either (1) think neartermist EA has access to very few resources relative to longtermist EA, or (2) think that longtermist EA doesn't have as much direct work to spend money on, so by default it spends a higher % of total funds on movement infrastructure?

For (1): I would be curious to hear more about this, as it seems like without FTX, the disparities in neartermist and longtermist funding aren't huge (e.g. I think no more than 10x different?). Given that OP / Dustin are the largest funders, and the longtermist portfolio of OP is likely going to be around 50% of OP's portfolio, this makes me think differences won't be that large without new longtermist-focused billionaires.

For (2): I think this is largely true, but again I would be surprised if this led to longtermist EA being willing to spend 50x more than neartermist EA (I could imagine a 10x difference). That said, a few million for neartermist EA, which I think is plausible, would cover a lot of core infrastructure.

The main bottleneck I'm thinking of is energetic people with good judgement to execute on and manage these projects.

How come you think that? Maybe I'm biased from spending lots of time with Charity Entrepreneurship folks, but I feel like I know a bunch of talented and entrepreneurial people who could run projects like the ones mentioned above. If anything, I would say neartermist EA has a better (or at least, longer) track record of incubating new projects relative to longtermist EA!

I also think that the value of a nice forum, EAGs, and the EA Funds is lower for non-longtermists (or equivalently the opportunity cost is higher).

E.g. if there was no forum, and the CE folks had extra $ and talent, I don't think they would make one. (Or EAGs, or possibly ACE EA funds).
Also, the EA fund for global health and development is already pretty much just the GiveWell All Grants Fund.

Also worth noting that "all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — were all major themes in the movement since its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then." (Comment by lukeprog)

I think a split proposal is more realistic on a multi-year timeframe. Stand up a meta organization for neartermism now, and start moving functions over as it is ready. (Contra the original poster, I would conceptualize this as neartermism splitting off; I think it would be better to fund and grow new neartermist meta orgs rather than cripple the existing ones with a longtermist exodus. I also think it may be better off without the brand anyway.) 

Neartermism has developed meta organizations from scratch before, of course. From all the posts about how selective EA hiring practices are, I don't sense that there is insufficient room to staff new organizations. More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn't currently be interested in affiliating with EA as a whole. So you'd get some experienced hands and a good number of new recruits . . . which is quite a bit more than neartermism had when it created most of the current meta.

In the end, I think neartermism and longtermism need fundamentally different things. Trying to optimize the same movement for both sets of needs doesn't work very well. I don't think the need to stand up a second set of meta organizations is a sufficient reason to maintain the awkward marriage long-term.

Buck

Stand up a meta organization for neartermism now, and start moving functions over as it is ready.

As I've said before, I agree with you that this looks like a pretty good idea from a neartermist perspective.

 Neartermism has developed meta organizations from scratch before, of course.

[...]

which is quite a bit more than neartermism had when it created most of the current meta.

I don't think it's fair to describe the current meta orgs as being created by neartermists and therefore argue that new orgs could be created by neartermists. These were created by people who were compelled by the fundamental arguments for EA (e.g. the importance of cause prioritization, cosmopolitanism, etc). New meta orgs would have to be created by people who are compelled by these arguments but also not compelled by the current arguments for longtermism, which is empirically a small fraction of the most energetic/ambitious/competent people who are compelled by arguments for the other core EA ideas.

More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn't currently be interested in affiliating with EA as a whole. So you'd get some experienced hands and a good number of new recruits

I think this is the strongest argument for why neartermism wouldn't be substantially weaker without longtermists subsidizing its infrastructure.


Two general points:

  • There are many neartermists who I deeply respect; for example, I feel deep gratitude to Lewis Bollard from the Open Phil farmed animal welfare team and many other farmed animal welfare people. Also, I think GiveWell seems like a competent org that I expect to keep running competently.
  • It makes me feel sad to imagine neartermists not wanting to associate with longtermists. I personally feel like I am fundamentally an EA, but I'm only contingently a longtermist. If I didn't believe I could influence the long run future, I'd probably be working on animal welfare; if I didn't believe that there were good opportunities there, I'd be working hard to improve the welfare of current humans. If I believed it was the best thing to do, I would totally be living frugally and working hard to EtG for global poverty charities. I think of neartermist EAs as being fellow travelers and kindred spirits, with much more in common with me than almost all other humans.

While neartermists may be a  "small fraction" of the pie of "most energetic/ambitious/competent people," that pie is a lot larger than it was in the 2000s. And while funding is not a replacement for good people, it is (to a point) a force multiplier for the good people you have. The funding situation would be much better than it was in the 2000s. In any event, I am inclined to think that many neartermists would accept B-list infrastructure if that meant that the infrastructure would put neartermism first  -- so I don't think the infrastructure would have to be as good.

I'm just not sure if there is another way to address some of the challenges the original poster alludes to. For the current meta organizations to start promoting neartermism when they believe it is significantly less effective would be unhealthy from an epistemic standpoint. Taking the steps necessary to help neartermism unlock the potential in currently unavailable talent/donor pools I mentioned above would -- based on many of the comments on this forum -- impair both longtermism's epistemics and effectiveness. On the other hand, sending the message that neartermist work is second-class work is not going to help with the recruitment or retention of neartermists. It's not clear to me what neartermism's growth (or maintenance) pathway is under current circumstances. I think the crux may be that I put a lot of stock in potentially unlocking those pools as a means of creating counterfactual value. 

I understand that a split would be sad, although I would view it more as a sign of deep respect in a way -- as an honoring of longtermist epistemics and effectiveness by refusing to ask longtermists to compromise them to help neartermism grow. (Yes, some of the reason for the split may have to do with different needs in terms of willingness to accept scandal risk, but that doesn't mean anyone thinks most longtermists are scandalous.)

I broadly agree.

My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does--it's not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.

My guess would be that the people who want an EA-without-longtermism movement would bite that bullet. The kind of EA-without-longtermism movement that is being imagined here would probably need less of those things? For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to the top recommended charity by GiveWell, and more instrumentally useful when you want to figure out what AI safety research agenda to follow.

[This comment is no longer endorsed by its author]

For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to the top recommended charity by GiveWell, and more instrumentally useful when you want to figure out what AI safety research agenda to follow.

Like, do you really think this is a characterization of non-longtermist activities that suggests to proponents of the OP that your views are informed?

(In a deeper sense, this reflects knowledge necessary for basic cause prioritization altogether.)

Donating 10% of your income to GiveWell was just an example (those people exist, though, and I think they do good things!), and this example was not meant to characterize non-longtermists.

To give another example, my guess would be that for non-longtermist proponents of Shrimp Welfare EAG is instrumentally more useful.

Looking at recent EA forum posts in these areas, do EAs investigating how much juvenile insects matter relative to adult ones really have much more in common with ones working on loneliness than they do with ones evaluating reducing x-risk with a more resilient food supply?

I think a split along the lines of how "respectable" your cause area is might be possible (though still not a good idea):

But in each of these buckets I've put at least one thing I think that would normally be called longtermist and one that wouldn't.

I think "respectable" is kind of a loaded term that gives longtermism a slightly negative connotation. I feel like a more accurate term would be how "galaxy brain" the cause area is - how much effort and time do you need to explain it to a regular person, or what percentage of normal people would be receptive to a pitch.

Other phrasings for this cluster include "low inferential distance", "clearly valuable from a wide range of perspectives", "normie", "mainstream", "lower risk", "less neglected", "more conventional", "boring", or "traditional".

This is an excellent point that again highlights the problem of labeling something "Longtermist" when many expect it to transpire within their lifetimes.

Perhaps rather than a spectrum of  "Respectable <-> Speculative" the label could be a more neutral (though more of a mouthful) "High Uncertainty Discounting <-> Low Uncertainty Discounting"

A further attempt at categorization that I think complements your "Respectable <-> Speculative" axis.

I've started to think of EA causes as sharing (among other things) a commitment to cosmopolitanism (i.e. neutrality with respect to the distance between the altruistic actor and beneficiary), but differing according to which dimension is emphasized: i) spatial distance (global health, development), ii) temporal distance (alignment), or iii) "mindspace" distance (animal welfare).

I think a table of "speculativeness" vs "cosmopolitanism type" would classify initiatives/proposals pretty cleanly, and might provide more information than "neartermism vs longtermism"?

I like this categorization, but I'm not sure how well it accounts for the component of the community that is worried about x-risk for not especially cosmopolitan reasons. Like, if you think AI is 50% likely to kill everyone in the next 25y then you might choose to work on it even if you only care about your currently alive friends and family.

Which isn't to say that people in this quadrant don't care about the impact on other people, just that if the impact on people close to you is large enough and motivating enough then the more cosmopolitan impacts might not be very relevant?

Fair point. I'm actually pretty comfortable calling such reasoning "non-EA", even if it led to joining pretty idiosyncratically-EA projects like alignment.

Actually, I guess there could be people attracted to specific EA projects from "non-EA" lines of reasoning across basically all cause areas?

I'm actually pretty comfortable calling such reasoning "non-EA"

Very reasonable, since it's not grounded in altruism!

I'm surprised this post didn't consider any of the benefits of longtermists being part of EA.

Buck

and then explains why these longtermists will not be receptive to conventional EA arguments.

I don't agree with this summary of my comment btw. I think the longtermists I'm talking about are receptive to arguments phrased in terms of the classic EA concepts (arguments in those terms are how most of us ended up working on the things we work on).

What makes the best solution the longtermists breaking off, instead of everyone else breaking off?

I more or less agree with this post that (1) longtermism is dominant, (2) longtermism is a bad cause area, and (3) longtermism is bad for PR reasons. But I don't think we can divorce EA from a cause area a majority of its members (and associated organizations!) find compelling. Even if we could, the PR damage that's already been caused wouldn't go away.

So it seems more realistic for exclusively near-termist EAs to try to carve out a separate space for ourselves.  Obviously that's a huge logistical task.  I don't really expect it to be successful.  But I rate its chances of success higher than cutting longtermism out of EA.

For the record, if anyone is willing to coordinate a split of global poverty and animal rights EAs who wish to improve their optics, even at the expense of epistemics, from EA as a whole, I would gladly be willing to assist, despite not being in that group. Let me know if anyone wants help on this.

There's a real issue here but I dislike the framing of this post.

Throughout the text it casts neartermism as "traditional EA" and longtermism as an outside imperializing force. I think this is both historically inaccurate, and also rather manipulative.

I think it is pretty important that, by its own internal logic, longtermism has had negative impact. The AI safety community probably accelerated AI progress. OpenAI is still pretty connected to the EA community and has been starting arms races (at least until recently the 80K jobs board listed jobs at OpenAI). This is well known but true. Longtermism has also been connected to all sorts of scandals.

As far as I can tell, neartermist EA has been reasonably successful. So it's kind of concerning that institutional EA is dominated by longtermists. It would be nice to have institutions run by people who genuinely prioritize neartermism.

I think the problem here is that it makes a category mistake about how the move to longtermism happened. It wasn't any success or failure metric that moved things, but the underlying arguments becoming convincing to people. For example, Holden Karnofsky moved from founding GiveWell to heading the longtermist side of OpenPhil and focusing on AI.

The people who made neartermist causes successful chose of their own accord to move to longtermism. They aren't being coerced away. GHW donations are growing in absolute terms. The feeling that there isn't enough institutional support isn't a funding problem; it's a vibes problem.

Additionally, outside of the doomiest people, I don't know that many would say longtermism has had a negative impact, given it also accelerated alignment organisations (obviously contingent on your optimism about solving alignment). Most people think there's been decent headway, insofar as Greg Brockman is talking about alignment seriously and this salience doesn't spiral into a race dynamic.

Is the idea of an EA split to force Holden back to GiveWell? Is it to make Ord and MacAskill go back to GWWC? I just find these posts kind of weird in that they imagine people being pushed into longtermism, forgetting that a lot of longtermists were neartermists at one point and made the choice to switch.

I think OP’s idea is not to get longtermists to switch back, but to insulate neartermists from the harms that one might argue come from sharing a broader movement name with the longtermist movement.

Buck

Fwiw my guess is that longtermism hasn’t had net negative impact by its own standards. I don’t think negative effects from AI speed up outweigh various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI stuff).

One issue for me is just that EA has radically different standards for what constitutes "impact." If near-term: lots of rigorous RCTs showing positive effect sizes.

If long-term: literally zero evidence that any long-termist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately... BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact.

Also: https://twitter.com/moskov/status/1624058113119645699

Some anecdotal support for the idea that longtermism has tarred the reputation of neartermist EA: one person in a non-EA Discord server I manage has compared general EA and longtermism to a motte and bailey respectively.

I think I'm confused about what you mean by a split, because a lot of the things you say are splits (e.g. financial splits) are already done in EA via worldview diversification? Is your argument that OpenPhil should break into OpenPhil LTism and OpenPhil GHW (since they already kind of do this with 2 CEOs)?

I'm also left confused about what you mean by an equal distribution of money, because a lot of your problems are optics-based ones but your solutions are financial splits.

EVF is an umbrella org that manages community events, community health, public comms like the EA Handbook and curriculum, the press interface, etc., largely through CEA. It handles these tasks for both longtermism and the rest of EA. This is suboptimal IMO. The solution here is not just a financial split.

Might be worth noting that OpenPhil is kind of split up in this way already and has two equal co-CEOs for the two areas.

(And I think this at least partially contradicts your point "Longtermists control the flow of money" as the main funding org in EA is split between longtermist and non-longtermist funding with neither of the parts controlling the other.)

It's a bit undertheorized in this post why people are longtermist, and thus why longtermism now has such a large role in EA. You paraphrase a comment from Buck:

why these longtermists will not be receptive to conventional EA arguments

This suggests a misunderstanding to me. It was these conventional arguments that led EA funders and leaders to longtermism! If EA is a question of how to do the most good, longtermism is simply a widely agreed-upon answer.

In fact, per the 2020 Rethink survey, more engagement in EA was associated with more support for longtermist causes. (Note this was before FTX became widely known, before the Future Fund existed, and before What We Owe the Future was released.)

I think there may be good reasons to create some distance between cause areas, but telling the most engaged EAs they need to start their own organization doesn't seem very sensible. 

(Note also that the EA community does not own its donors' money.)
