NB: I think EA spending is probably a very good thing overall and I’m not confident my concerns necessarily warrant changing much. But I think it's important to be aware of the specific ways this can go wrong and hopefully identify mitigations. Thanks to Marka Ellertson, Joe Benton, Andrew Garber, Dewi Erwan, Joshua Monrad and Jake Mendel for their input. 

Summary

  • The influx of EA funding is brilliant news, but it has also left many EAs feeling uncomfortable. I share this discomfort and raise two concrete concerns which I have recently encountered.
  • Optics: EA spending is often perceived as wasteful and self-serving, creating a problematic image which could lead to external criticism, outreach issues and selection effects.
  • Epistemics: Generous funding has provided extrinsic incentives for being EA/longtermist which are exciting but also significantly increase the risks of motivated reasoning and make the movement more reliant on the judgement of a small number of grantmakers.
  • I don’t really know what to do about this (especially since it’s overall very positive), so I give a few uncertain suggestions but mainly hope that others will have ideas and that this will at least serve as a call to vigilance in the midst of funding excitement.

Introduction

In recent years, the EA movement has received an influx of funding. Most notably, Dustin Moskovitz, Cari Tuna and Sam Bankman-Fried have each pledged billions of dollars, such that funding is more widely available and deployed.

This influx of funding has completely changed the game. First and foremost, it is wonderful news for those of us who care deeply about doing the most good and tackling the huge problems which we have been discussing for years. It should accelerate our progress significantly and I am very grateful that this is the case. But it has also had a drastic effect on the culture of the movement which may have unfortunate consequences. 

A few years ago, I remember EA meet-ups where we’d be united by our discomfort towards spending money in fancy restaurants because of the difference it could make if donated to effective charities. Now, EA chapters will pay for weekly restaurant dinners to incentivise discussion and engagement. Many of my early EA friends also found it difficult to spend money on holidays. Now, we are told that one of the most impactful things university groups can do is host an all-expenses-paid retreat for their students. 

I should emphasise here that I think these expenditures are probably good ideas which can be justified by the counterfactual engagement which they facilitate. These should probably continue to happen, however uncomfortable they make us feel. 

But the fact that these decisions can be justified on one level doesn’t mean that they don’t also cause concrete problems which we should think about and mitigate.

Big Spending as an Optics Issue

Over the past few months, I’ve heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an Airbnb for the weekend. And that’s before they hear about the Bahamas visitor programme…

In fact, I have recently found myself responding to spending objections more often than the standard substantive ones (e.g. ‘what about my favourite charity?’, ‘can you really compare charities with each other?’, ‘what about systemic issues?’).

I am not contesting here whether these programmes are worth the money. My own view is that most of them probably are and I try to lay this out to those who ask. But it is the perceptions which I find most concerning: many people see the current state of the movement and intuitively conclude that lots of EA spending is not only wasteful but also self-serving, straying far from what you’d expect the principles of an ‘effective altruism’ movement to be. Given the optics issues which have hindered the progress of EA in the past, we should be wary of this dynamic. 

Importantly, I’ve heard this claim not only from critics of EA, but also from committed group members and an aligned student who might otherwise be more involved. This suggests that aside from opening us up to external criticism from people who don’t like EA anyway, spending optics may also hinder outreach and lead to selection effects, whereby proto-EAs who are uncomfortable with how money is spent are put off the movement and less likely to get involved. (I am grateful to Marka Ellertson and Joshua Monrad, who both raised versions of this valuable point.)

Longtermism vs Neartermism

One especially problematic framing concerns the apparent discrepancy between longtermist and neartermist funding. Many people find it understandably confusing to hear that ‘EA currently has more money than it can spend effectively’ whilst also noticing that problems like malaria and extreme poverty still exist, especially given how much EA focuses on how cheap it is to save a life and how important it is to practise what we preach. 

I don’t claim that more money should necessarily go to neartermist areas, but I fear that excellent people who initially come to EA through a global health or animal welfare route may be put off by this dynamic and leave the movement entirely, especially if it isn’t explained with nuance and sensitivity. This is a comment which I have heard repeatedly over recent months and I am concerned that it could become a significant obstacle to EA movement-building, including for future longtermists. 

Coordination and the Unilateralist’s Curse

Longtermists often mention the unilateralist’s curse as a problem associated with various x-risks. Even if the vast majority of altruistic actors behave sensibly, it only takes one reaching a different decision to the group to cause the catastrophe. It seems to me that similar dynamics exist with EA spending. Even if most funders are careful with regard to the optics, it only takes one misstep to attract headlines and stick in people’s heads. Given past experience with ‘earning to give’, this should be especially concerning for the movement.  

Financial Incentives as an Epistemics Issue

Several years ago, before the increase in funding, it didn’t pay to be an EA. In fact, it was rather costly: financially costly because it usually involved a commitment to give away a lot of one’s resources, and socially costly because most people have an intuitive aversion to EA principles. As a result, most people around EA were probably there because they had thought hard and were really convinced that it was morally right.

In 2022, this is no longer necessarily the case. Suddenly, being an EA is exciting for a bunch of extrinsic reasons. College-age EAs have the chance to be flown around the world to conferences, invited to all-expenses-paid retreats and offered free dinners as an incentive for engaging with the community and the content. 

As stated before, this is very exciting and a great thing. Generous funding gives us the chance to set ambitious visions to make EA huge on campuses around the world and get the best talent working on the biggest problems. Moreover, it can improve our diversity by making careers such as community-building accessible to people from different socioeconomic backgrounds. But it also risks clouding our judgement as individuals and as a movement. 

Consider the case of a college freshman. You read your free copy of Doing Good Better and become intrigued. You explore how you can get involved. You find out that if you build a longtermist group in your university, EA orgs will pay you for your time, fly you to conferences and hubs around the world and give you all the resources you could possibly make use of. This is basically the best deal that any student society can currently offer. Given this, how much time are you going to spend critically evaluating the core claims of longtermism? And how likely are you to walk away if you’re not quite sure? Anecdotally, I’ve spoken to several organisers who aren’t convinced of longtermism but default to following the money nevertheless. I’ve even heard (joking?) conversations about whether it’s worth 'pretending' to be EA for the free trip. 

When my friends in finance (not earning to give) tell me they’re working at Goldman to improve the world, I am normally sceptical. Psychology literature on motivated reasoning and confirmation bias suggests that we are excellent at finding views which justify whatever is in our interests. For example, one study shows that our moral judgements can be significantly altered by financial incentives; another shows that we naturally strengthen our existing views by holding confirming and disconfirming evidence to different standards. 

Fortunately, unlike with finance careers, I think that longtermist careers are likely to be among the most impactful available to us. But given the financial incentives, I would expect it to be very difficult to notice if either longtermism as a whole or specific spending decisions turned out to be wrong. Research suggests that our judgement becomes less clear when a lot of money is on the line, and clear judgement is precisely what the movement depends on.

This is especially problematic given the nature of longtermism, simultaneously the best-funded area of EA and also the area with the most complex philosophy and weakest feedback loops for interventions. 

Maybe this risk is mitigated by the fact that grantmakers in EA set these incentives by deciding where the money goes, and their judgements are careful and well-calibrated from years of experience, evaluation and excellent in-house research. This seems plausible to me. But if strong incentives are shifting our epistemic confidence from the movement as a whole to a small number of grantmakers, this is something we should at least notice. 

What can we do differently? 

I’m really not sure what the answer to this is, especially because I think most of these funding opportunities seem very good, so we shouldn’t stop them. I’m mainly putting this out there to start a conversation because I’m not sure how aware we are of these dynamics (I wasn’t until recently and others seem to think it is a concern which isn’t discussed enough, perhaps for some of the reasons stated above). 

A few initial thoughts, not proposed with particular confidence:

  • Can we create better resources for how to talk about the spending when it comes up, just like we have for substantive objections to EA? For example, accessible posts on why retreats / conferences / free dinners are considered good value for money under rigorous evaluative frameworks.
  • (From Andrew) Along these lines, it could be valuable for university groups to conduct and publish some rough cost-benefit analyses on major programs (e.g. running a retreat, budgeting for socials, book and cookie giveaways, deciding whether to get an office). This is probably a good exercise for general EA thinking, but it might also help reduce some wastefulness by making EA groups think more about how they use money.
    • A counter would be that this process takes time which could be spent on directly valuable activities - though for the reasons stated above, we should perhaps be sceptical of arguments which justify spending without thinking.
  • It would be helpful to lay out clearly what money is available to which parts of the EA movement and what it can and can’t do. This would help clarify questions such as: “if EA has more money than it can spend effectively, why isn’t it giving more to AMF / why is it still encouraging people to donate to AMF / why can’t it just solve biorisk through brute financial force”. This post is a great start.
  • We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive.
  • Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind. Maybe this is already happening - I am certainly not well-placed to assess the ecosystem.
  • (From Joe) Where appropriate, it should be made clear that grants aren’t conditional on agreement with the community. Funding criticism is a great start, but many people receiving grants (e.g. for travel) may still feel that there’s an implicit expectation for them to agree with the funder’s view, and we should make it clearer to people when this is not the case.
    • Note, for example, that people who receive EA funding may find it more difficult to publish a critical piece like this, given the benefits which they derive from the status quo, perceptions of hypocrisy and feelings of betrayal towards the people funding them. As more EAs come to benefit from EA funding, this problem may grow.
    • In this vein and if we think this is a big enough concern, perhaps we should encourage more criticism specifically relating to how funding is deployed?
  • Should we re-emphasise the norm of significant giving? Money donated to top global health / animal welfare charities can still do a huge amount of good and taking this seriously as a community would help us avoid the mindset whereby the most impactful things we can do involve taking money rather than giving.
    • A counter is that this may distract from other longtermist priorities which are much more valuable, but it might help with both optics and epistemics.
  • (From Joe) At the very least, we should make the opportunity cost of funding more salient. EA was predicated on recognising the trade-offs inherent to altruistic decisions, and we shouldn’t forget that every ~$5,000 spent on speculative longtermist initiatives statistically costs a life in the short term. This is a significant responsibility which we shouldn't take lightly, yet current free-spending norms point the other way.
    • Although we should often be willing to accept time-money trade-offs, there are some cases where norm shifts could go a long way, such as putting students up in cheaper hotels, booking flights further in advance, or selecting cheaper flights where inconvenience is minimal (rather than treating money as no object).
    • While this wouldn’t necessarily change our actions significantly, having a culture where this is collectively acknowledged would reduce the problematic impression that we’ve stopped appreciating the value of money.
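The rough cost-benefit analyses suggested above needn’t be sophisticated to be useful. A minimal back-of-the-envelope sketch for a hypothetical retreat might look like the following; every number here is a made-up placeholder, not a real estimate, and a group doing this seriously should plug in its own figures (ideally ranges rather than point values):

```python
# Illustrative back-of-the-envelope calculation (BOTEC) for a hypothetical
# weekend retreat. All numbers are placeholder assumptions.

retreat_cost = 10_000              # total spend: venue, food, travel ($)
attendees = 25
p_counterfactual = 0.2             # assumed chance the retreat counterfactually
                                   # causes an attendee to become highly engaged
value_per_engaged_member = 5_000   # assumed dollar-equivalent value of one
                                   # counterfactually engaged member

expected_value = attendees * p_counterfactual * value_per_engaged_member
ratio = expected_value / retreat_cost

print(f"Expected value: ${expected_value:,.0f}")   # Expected value: $25,000
print(f"Benefit/cost ratio: {ratio:.1f}")          # Benefit/cost ratio: 2.5
```

Even a crude model like this forces the key assumptions (counterfactual engagement probability, value of an engaged member) into the open, where they can be debated, rather than leaving them implicit in the decision to spend.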

Do you agree with the problems I've raised? If so, how do you think we can mitigate them? 

Comments

One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn't take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.

This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don't currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it's at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.

Precisely. Also, the frugality of past EA creates a selection effect, so probably there is a larger fraction of anti-frugal people outside the community (and among people who might be interested) than we would expect from looking inside it.

My anecdotal experience hiring is that I get many more prospective candidates saying something like "if this is so important why isn't your salary way above market rates?" than "if you really care about impact, why are you offering so much money?" (Though both sometimes happen.)

I agree that it’s possible to be unthinkingly frugal. It’s also possible to be unthinkingly spendy. Both seem bad, because they are unthinking. A solution would be to encourage EA groups to practice good thinking together, and to showcase careful thinking on these topics.

I like the idea of having early EA intro materials and university groups that teach BOTECs, cost-benefit analysis, and grappling carefully with spending decisions.

This kind of training, however, trades off against time spent learning about eg. AI safety and biosecurity.

Great point! I think each spending strategy has its pitfalls related to signalling.

I think this correlates somewhat with people's knowledge of and engagement with economics, and with political lean. "Frugal altruism" will probably attract more left-leaning people, while "spending altruism" probably attracts more right-leaning people.


One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.

I'm not sure that's an entirely bad thing, because frugality seems mixed as a virtue e.g. it can lead to:

  • Not spending money on clearly worth it things (e.g. not paying to have a larger table at a student fair even when it would result in more sign ups; not getting a cleaner when you earn over $50/hour), which in turn can also make us seem not serious about maximising impact (e.g. this comment).
  • Even worse, getting distracted from the top priority by worrying about efforts to save relatively small amounts of money. Or not considering high upside projects that require a lot of resources, but where there's a good chance of failure, due to a fear of not being able to justify the spending.
  • Feelings of guilt around spending and not being perfectly altruistic, which can lead to burn out.
  • Filtering out people who want a normal middle class lifestyle & family, but could have had a big impact (and go work at FAANG instead). Filtering out people from low income backgrounds or with dependents.

However, we need new hard-to-fake signals of seriousness...

One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism

Agree.

Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increasing the signals. Other suggestions of things to do:

  • Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
  • Zero tolerance to funding bad people. Sometimes an org might be tempted to fund or hire someone they know / have reason to expect is a bad person, or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them as we can pay them for impact. I think there is a case for being heavily risk-averse here and avoiding hiring or funding them...

Random, but in the early days of YC they said they used to have a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.

Aleks_K:
Seems like a great rule. Do you know why they don't have this rule anymore? (One plausible reason: the larger your community gets, the harder such a rule is to implement, which would mean this would no longer be feasible for the EA community.)
Charles He:
Hey, do you happen to know me in real life, and would you be willing to talk about these issues offline? I’m asking because it seems unlikely you will be able to be more specific publicly (though it would be good if you were and just wrote here), so it would be good to discuss the specific examples or perceptions in a private setting.

I know someone who went to EAG who is sort of skeptical and looks for these things, but they didn’t see a lot of bad things at all. (A caveat is that selection is a big thing; a person might miss these people for various idiosyncratic factors.) But I’m really skeptical about major issues, and in the absence of substantive issues (which, by the way, don’t need hard data to establish), it seems negative EV to generate a lot of concern.

One issue is that problems are self-fulfilling: start pointing vaguely at bad actors and you’ll find that you start losing the benefits of the community. As long as these people don’t enter senior levels or community-building roles, you’re pretty good. Another issue is that trust networks are how these issues are normally solved, and yet there’s pressure to open these networks up, which runs into the teeth of these issues.

To be clear, I’m saying that this funding and trust problem is probably being worked on. A lot of noise about this issue, people poking the elephant, or unsubstantiated bad vibes can be net negative.
weeatquince:
Thank you for the comment. I edited out the bit you were concerned about as that seemed to be the quickest/easiest solution here. Let me know if you want more changes. (Feel free to edit / remove your post too.)
Charles He:
Hi, this is really thoughtful. In the spirit of being consonant with your reply, and following your lead, I edited my post. However, I didn’t intend to force an edit to this thread, and I especially did not intend to undo discussion. More communication seems good, and raising the issue seems good, as long as that is balanced with good judgement and proportionate action and beliefs. A good next step would be to understand and substantiate or explore the issues.

Part of me is a bit sad that community building is now a comfortable and status-y option. The previous generation of community builders had a really high proportion of people who cared deeply about these ideas, were willing to take weird ideas seriously and often take a substantial financial/career security hit.

I don't think this applies to most of the current generation of community builders to the same degree; people-wise, it just seems like much more of a mixed bag. To be clear, I still think this is good on the margin, I just trust the median new community builder a lot less (by default).

Manuel_Allgaier:
Interesting! I work in CB full-time (Director of EA Germany), and my impression is still that it's challenging work, pays less than what I and my peers would earn elsewhere and most of the CB roles still have a lot less status than e.g. being a researcher who gets invited to give talks etc. Do you think some CBs are motivated by money or status? What makes you think so? I'm genuinely curious (though no worries if you don't feel like elaborating).
Ben Jamin:
I think I am mostly comparing how different my impression of the landscape of a few years ago is to today's landscape. I am mostly talking about uni groups (I know less about how status-y city groups are), but there were certainly a few people putting in a lot of hours for no money and not much recognition from the community for just how valuable their work was. I don't want to name the specific people I have in mind, but some of them now work at top EA orgs or are doing other interesting things and have status now; it was just hard for them to know that this is how it would pan out, so I'm pretty confident they are not particularly status-motivated.

I'm also pretty confident that most community builders I know wouldn't be doing their job on minimum wage even if they thought it was the most impactful thing they could do. That's probably fine, I just think they are less 'hardcore' than I would like. Also, being status-motivated is not necessarily a bad thing; I'm confused about this, but it's plausibly good for the movement to have lots of status-motivated people to the degree that we can make status track the right stuff. I am sure that part of why I am less excited about these people is a vibes thing that isn't tracking impact.

Something I like about "Doing high upside things even if there's a good chance they might not work out and seem unconventional" as a mark of seriousness is that it's its own form of sacrifice: being willing to look weird and fail and give up on full security and job comfort and do something hard because it's positive EV.

In your list of new hard-to-fake signals of seriousness, I like:

Doing high upside things even if there's a good chance they might not work out and seem unconventional.

I think that this is underrated. As a community, we overemphasise actually achieving things in the real world, meaning that if you want to get ahead within EA it often pays to do the middling but reasonable thing over the super-high-EV thing, as the weird super-high-EV thing probably won't work.

I'm much more excited when I meet young people who keep trying a bunch of things that seem plausibly very high-value and give them lots of information, relative to people who did some okay-ish things that let them build a track record/status. Fwiw, I think that some senior EAs do track these high-EV, high-risk things really well, but maybe the general perception of what people ought to do is too close to that of the non-EA world.

Agrippa:
I would expect detrimental effects if nerding out became even more of a paid-attention-to signal. It's something you can do endlessly without ever helping a person. But maybe you just mean "successfully making valuable intellectual contributions", in which case I agree.
wANIEL:
Agreed. There seems to be what I can best call an intellectual aesthetic that drives about half of the instances of "nerding out" that I observe in the [East] Bay Area. The contrast between the Bay Area attitude and the Oxford attitude, the latter of which I guess applies to Ben Todd, has continually surprised me, and this variable of location may be dispositive over whether "nerding out" is evidence of desirable character.

Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.

Extra ideas for the idea list: 

  • Altruistic perks, rather than personal perks. E.g. 1: turn up at this student event and get $10 donated to a charity of your choice. E.g. 2: donation matching schemes mentioned in job adverts, perhaps funded by offering slightly lower salaries. Anecdotally, I remember the first EA-ish event I went to offered money to charity for each attendee plus free wine; it was the money to charity that attracted me to go and the free wine that attracted my friend, and I am still here and they are not involved.
  • Frugality options, like an optional version of the above idea. E.g. 1: when signing up to an EA event, the food options could be: "[] vegan, [] nut free, [] gluten free, [] frugal - will bring my own lunch, please donate the money saved to charity X". E.g. 2: jobs could advertise that the organisation offers salary sacrifice schemes that some employees take. I don’t know how well this would work but would be interested to see a group try. Anecdotally, I know some EAs in well-paid jobs take lower salaries than they are offered, but I don’t think this is well known.

 

Also, for what it's worth, I was really impressed by the post. It was a very well-written, clear and transparent discussion of this topic, with clear actions to take.

I would love frugality options!

+1, the frugality options seem like a nice way to "make the opportunity cost of funding more salient" without necessarily requiring huge changes from event organizers.

Yitz:
+1 here as well, frugality option would be an amazing thing to normalize, especially if we can get it going as a thing beyond the world of EA (which may be possible if we get some good reporting on it).

+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.

I worry that it'd feel pretty fake for people who actually care about counterfactual impact: either way, money goes from one EA source to another.

Most EAs I've met over the years don't seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.

That said, I like options and norms that discourage fancy options that don't come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn't make sense to pay extra for a nice room.

I think I agree with this. I think if I look historically at my mistakes in spending money, there was very likely substantially more utility lost from spending too little money rather than spending too much money. 

To be more precise, most of my historical mistakes do not come from consciously thinking about time-money tradeoffs and choosing money instead of time ("oh, I can Uber or take the bus to this event, but Uber is expensive, so I should take the bus instead") but from some money-expensive options not being in my explicit option set to prioritize in the first place ("oh, taking the bus will take four hours total, so I probably shouldn't attend the event").

As I get in the habit of explicitly valuing my time and trying to consider ways to buy time, I notice more and more options that my younger (and poorer) self would not even consider to be in the option set (e.g. international flights to conferences, cleaners, ordering food, paying money to alleviate bureaucracy hurdles, etc.). Admittedly, this coincided with the EA movement generally becoming much more free-spending (and with there now being far more resources on time-money tradeoffs for people in my reference class), so it's plausible younger EAs won't have to go through the same mental evolutions to get the same effect.

I'm going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I can study with less friction and get better grades.

problems like malaria and extreme poverty still exist

I know this isn't the only thing to track here, but it's worth noting that funding to GiveWell-recommended charities is also increasing fast, both from Open Philanthropy and from other donors. Enough so that last year GiveWell had more money to direct than room for more funding at the charities that meet their bar (which is "8x better than cash transfers", though of course money could be donated to things less effective than that). They're aiming to move 1 billion annually by 2025.

True, but GiveWell doesn't expect funding to grow at the same rate as top quality funding opportunities, so that $1bn/year is going to need further donors. Unless we believe GiveWell's top programmes/charities will never have a funding shortfall again, the point about where EA prioritises its funding still seems relevant.

Donating to AMF still seems like a good benchmark for cost effectiveness. Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities.

In the spirit of trying to really engage with the question and figure out ground truth, maybe it's worth making a quick CBA or Guesstimate model based on your general views for "Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities", and then we can debate specifics and maybe come to better heuristics about this kind of thing. I'd be excited to see what numbers your intuition puts on things.

8 · Jack Lewars · 6mo
Completely agree. I will write something about this tomorrow

I've seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it's not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
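To make the heuristic concrete, here is a minimal sketch; the hourly figure and the example numbers are illustrative assumptions, not claims about what anyone's time is actually worth:

```python
# Sketch of the "stick to a fixed heuristic" idea described above.
# TIME_VALUE_PER_HOUR is a made-up illustration, not a recommendation.
TIME_VALUE_PER_HOUR = 50.0  # hypothetical $/hour of organizer time


def should_spend_time_to_save_money(hours: float, dollars_saved: float) -> bool:
    """Spend the time iff the dollars saved exceed the value of the time spent."""
    return dollars_saved > hours * TIME_VALUE_PER_HOUR


# Five organizers spending 30 minutes (2.5 person-hours) to shave $10
# off a food order: clearly not worth it.
print(should_spend_time_to_save_money(2.5, 10))     # False

# A few hours of solo organizer time to save thousands of dollars:
# worth it, even if the conclusion feels unflattering.
print(should_spend_time_to_save_money(3.0, 2000))   # True
```

The point is only that a fixed, pre-committed threshold cuts both ways: it licenses ignoring trivial savings, and it equally obliges you to chase large ones.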

Also want to highlight "For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive" as what I think is the most clearly correct and actionable suggestion here.

I agree we should be careful with the "spend money to save time" guideline. It can be self-serving because spending time to save money can be unpleasant. 

Also, there is the danger that you get used to the luxury of spending money to save time. If your situation changes, or you need to revise your estimate of the value of your time downwards, you should be willing to spend the time and not the money! (I hope this does not happen to you, but it may, e.g. if you need to move to your career plan B/C/Z.)

This also applies to other luxuries. 

2 · A_lark · 6mo
This is a valuable point.

Man, I find it so difficult (on, like, an emotional level) to think clearly about the dollar value of an hour of my time (I feel like it is overvalued?? because so many people make so much less money than me, a North American???) but I agree that adopting some kind of clear heuristic here is good, and that I should more frequently be doing explicit trades of "I will spend up to 2 hours on trying to find a cheaper option, because I think in expectation that's worth $60".

You might be aware of this, but for others reading - there's a calculator to help you work out the value of your time.

I think it's worth doing once (and repeating when your circumstances change, e.g. a new job), then just using that as a general heuristic to make time-money tradeoffs, rather than deliberating every time.

If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.

One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.

EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstr... (read more)

This is a very interesting point that, for me, reinforces the importance of keeping effective giving prominent in EA. It is both a good thing, and also a defence against accusations of self-serving wastefulness, if a lot of people in the community are voluntarily sacrificing some portion of their income (with the usual caveats about having actual disposable income).

GWWC, OFTW etc. may be doing EA an increasing favour by enlisting a decent proportion of the community to be altruistic.

It's also noticeable that giving seems to be least popular with longtermists, who also seem to be doing the most lavish spending.

Many people prominent in EA still donate very large percentages: Julia Wise (featured in Strangers Drowning) and Jeff Kaufman donate 50%, Will MacAskill at least 50%, and probably the same is true of Peter Singer and Toby Ord.

I was at an EA party this year where there was definitely an overspend of hundreds of pounds of EA money on food, most of which was wasted. Having been there at the time, I can say this was very clearly avoidable.

It remains true that this money could have changed lives if donated to EA charities instead (or even used less wastefully towards EA community building!) and I think we should view things like this as a serious community failure which we want to avoid repeating.

At the time, I felt extremely uncomfortable / disappointed with the way the money was used. 

I think if this happened very early into my time affiliated with EA, it would have made me a lot less likely to stay involved - the optics were literally "rich kids who claim to be improving the world in the best way possible and tell everyone to donate lots of money to poor people are wasting hundreds of pounds on food that they were obviously never going to eat". 

I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don't think the obligations are any weaker than they were - we should just have a slightly lower cost effectiveness bar for funding things than before.

I had exactly the same thought in an identical-sounding situation. I felt incredibly uncomfortable, and someone at the party pointed out to me that these kinds of spending habits really alienate young EAs from less privileged backgrounds who aren't used to ordering pricey food deliveries whenever they feel like it.

I think that it is worth separating out two different potential problems here.

1. It is bad that we wasted money that could have directly helped people.
2. It is bad that we alienated people by spending money.

I am much more sympathetic to (2) than (1). 

Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.

3 · freedomandutility · 6mo
I think if we value longtermist/meta community building extremely highly, that's actually a strong reason in favour of placing lots of value on those couple hundred pounds - in this kind of scenario, much of the counterfactual use of the money would be putting it towards longtermist/meta community building.

I think another framing here is that: 

1) wasting hundreds of pounds on food is multiple orders of magnitude away from the biggest misallocation of money within EA community building,

2) all misallocation of money within EA community building is smaller than the misallocation caused by donations to less effective cause areas (for context, Open Phil spent ~$200M on criminal justice reform, more than all of their EA CB spending to date), and

3) it's pretty plausible that we burned much more utility through failing to donate/spend enough than by donating too much to wasteful things, so looking at the "visible" waste ignores the biggest source of resource misallocation.

8 · freedomandutility · 6mo
Yeah, I'd mostly agree with this framing. I don't mean to imply that this party was one of the worst instances of money being wasted in EA - just that I was there, felt pretty uncomfortable, the optics were particularly bad (compared to donating to something not very effective), and it made me concerned about how EAs are valuing cost-effectiveness and counterfactuals.

I agree that it's important not to let the perfect be the enemy of the good, and it'd be bad not to criticize X just because X isn't the literal biggest issue in the movement. But otoh some sense of scale is valuable (at least if we're considering the object level of resource misallocation and not just/primarily optics).

Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.

Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.

This is probably a bit of an aside, but I don't think that is a valid way to reason about the value of people's time: it seems quite unlikely to me that, instead of going to an EA party, those people would actually have done productive work worth $100/h. You only have so many hours in which you can actually do productive work, and the counterfactual to going to this party is more likely going to a (non-EA) party, going for dinner with friends, spending time with family, relaxing, etc. than actually doing productive work.

5 · Thomas Kwa · 5mo
Even free time has value: maybe people would by default talk about work in their free time, or relax in a more optimal way than partying, thus making them more productive. So a suboptimal party can still waste lots of value in ways other than taking hours away from work. Given this, there are many people whose free time should be valued at >$100/h.
4 · Linch · 5mo
Fair point, that's a reasonable callout. I think the elasticity here is likely between 0 and 1, so really you should apply some discount - say maybe 30% of the counterfactual is productive work time? So we get to >$30/h per person and >$15/min for the party in the above Fermi. (As an aside, at least for me, I don't find EA parties particularly relaxing, except relatively small ones where I already know almost everybody.)
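As a minimal sketch, the Fermi in this subthread (including the ~30% counterfactual-productivity discount suggested above) is just:

```python
# Fermi estimate from this subthread; all numbers are the assumptions
# stated in the comments above, not measurements.
attendees = 30
value_per_hour = 100.0                      # conservative $/h per attendee
burn_per_minute = attendees * value_per_hour / 60
print(burn_per_minute)                      # 50.0 -> ">$50/minute"

# Discount for the counterfactual being mostly leisure rather than
# productive work time (elasticity of ~0.3):
discounted = burn_per_minute * 0.3
print(discounted)                           # 15.0 -> ">$15/minute"
```

Whether the party "is worth it" then reduces to whether its value per minute beats this burn rate - which is of course the hard part to estimate.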
6 · mic · 6mo
For what it's worth, even though I prioritize longtermist causes, reading made me fairly uncomfortable, even though I don't disagree with the substance of the comment, as well as

Also, with regards to longtermist stuff in particular, I think there's a risk of falling into "the value of x-risk prevention is basically infinite, so the expected value of any action taken to try and reduce x-risk is also +infinity" reasoning.

I think this kind of reasoning risks obscuring differences in cost-effectiveness between x-risk mitigation initiatives which do exist and which we should take seriously because of other counterfactual uses of the money and because we don’t have unlimited resources.

(There's a chance I'm badly rephrasing complicated philosophy debates around fanaticism, Pascal's mugging, etc. here, but I'm not sure.)

4 · Linch · 6mo
I agree with you that this is clearly dumb! I don't think calebp is making that mistake in the comment above however.
2 · freedomandutility · 5mo
Apologies if I misinterpreted calebp’s comment, but I would paraphrase it as “the expected value of a longtermist EA community building event is infinite, and remains infinite with £200 being wasted on uneaten food, so we shouldn’t worry about the lost expected value from overspending on food by £200.”
1 · Lorenzo Buonanno · 5mo
I think that is a pretty uncharitable view. I would say that it's obviously not viewed as "infinite", but orders of magnitude higher than £200. I'm sure calebp and most members of the community would definitely worry at £200,000 of wasted food.
6 · calebp · 6mo
I don't think this is right, because there aren't good mechanisms to convert money into utility. I don't think there are reasonable counterfactual uses of this money that aren't already maxed out. That said, if you can point to some actions in LT community building that aren't happening due to a lack of a few hundred pounds and seem positive in EV, I'd be happy to fund them (in a personal capacity).
3 · freedomandutility · 6mo
I think more money to AMF / GiveDirectly / StrongMinds is a pretty good mechanism to convert money into utility. I also think it's very difficult for counterfactuals to become maxed out, especially in any form of community building.

One concrete action - pay a random university student in London, who might not be into EA but could do with the money, to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kinds of things seem very difficult to max out (until we reach a point where, say, there are multiple dinners every day in London to discuss AI safety).

I think one cool thing about some aspects of community building is that they can only ever be constrained by funding, because it seems pretty easy to pay anyone, including people who don't care about EA, to do the work.
7 · Greg_Colbourn · 5mo
Re the AI Safety dinners - it seems like a cool project could just be hiring someone to coordinate such dinners full time: inviting people and grouping them, logistics, suggesting structures for discussion, inviting special guests, etc. Is this something that's being worked on? Or is anyone interested in doing it? I wonder if there could be a tie-in with the AGI Safety Fundamentals [https://www.eacambridge.org/agi-safety-fundamentals] course, e.g. the first step is inviting a broad range of people (~1,000-10,000) to a dinner event (held at multiple - ~100? - locations around the world within a week). Then those who are interested can sign up for the course (~1,000).
2 · calebp · 6mo
I meant from a LT worldview.

Have you tried this? I wouldn't predict it going very well. I also haven't heard of any community builders doing this (though of course I don't know all the community builders). I agree that this kind of dinner could be a good use of funding, but the specific scenario you described isn't obviously positive EV (at least to me). I'd worry about poorly communicating EA, and about low quality of conversation due to the average attendee not being very thoughtful (if the attendees are thoughtful, then the event is probably worth more CB time). You also need to worry about free dinners making us look weird (like in the OP). I think that promoting/inviting people in a way that makes the event go well is going to require a community builder, as opposed to just a random person.

Alternatively, the crux could be that we actually have similar predictions of how the event would go and different views on how valuable the event is at some level of quality. This is really far from my model of LT community building - I would love it if you were right though!
1 · freedomandutility · 6mo
Yeah it’s hard to tell whether we disagree on the value of the same quality of conversation or on what the expected quality of conversation is. Just to clarify though, I meant inviting people who are both already into EA and already into AI Safety, so there wouldn’t be a need to communicate EA to anyone. I also don’t actually know if anyone has tried something like this - I think it would be a good thing to try out.

I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don't think the obligations are any weaker than they were - we should just have a slightly lower cost effectiveness bar for funding things than before.

To me, the most important issue that this (and other comments here) raises is that, as a community, we don't yet have a good model of how an altruist who (rationally/altruistically) places a very high value on their time should actually act. Or, for that matter, how they shouldn't.

6 · Denkenberger · 6mo
I realize the discussion here is broader than this specific case, but for this specific case, couldn't people have just taken the extra food home so it would not go to waste?
3 · freedomandutility · 6mo
Actually, yes that would have made a lot of sense, not sure why this didn’t happen.

Thanks for this clear write-up - like many others, I definitely share some of your worries. I liked that you wrote that the extra influx of money could make the CB position accessible to people from different socioeconomic backgrounds, since this point seems to be a bit neglected in EA discussions.

I think it is true for many other impactful career paths that decent wages and/or some financial security (e.g. smoothing career transitions with stipends) could help to widen the pool of potential applicants, e.g. to more people from less fortunate socioeconomic backgrounds. Don't forget that many people in the lower and lower-middle income class are raised with the idea that it is important to take care of your own financial security. I have plenty of anecdotes from people in that group who didn't pursue an EA career in the past, because the wage gap and the worries about financial insecurity were just too large. I see multiple advantages to widening the pool to people from lower / lower-middle socioeconomic classes:

  1. Given that there is also a lot of talent in lower / lower middle socioeconomic classes, you will finally be able to attract more of them. This will increase t
... (read more)

Adding on: Increasing EA spending in certain areas could certainly support diversity, but it could have the opposite effect elsewhere.

I’m concerned that focusing community-building efforts at elite universities only increases inequality. I’m guessing that university groups do much of the recruiting for all-expenses-paid activities. In practice, then, students at elite universities will benefit, while students at state schools and community colleges won’t even hear about these opportunities. So the current EA community-building system quite accurately selects for privileged students to give money to.

Curious about any work to change this pattern!

7 · jared_m · 6mo
This is a great point. The good news is your concern is shared by CEA and others. It's very exciting to see the work that Jessica McCurdy at CEA [https://www.centreforeffectivealtruism.org/team] (and others) are doing to support the growth of EA groups at economically diverse R1 universities [https://en.wikipedia.org/wiki/List_of_research_universities_in_the_United_States#Universities_classified_as_%22R1:_Doctoral_Universities_%E2%80%93_Very_high_research_activity%22] and smaller colleges, etc. [https://en.wikipedia.org/wiki/List_of_research_universities_in_the_United_States#Universities_classified_as_%22R1:_Doctoral_Universities_%E2%80%93_Very_high_research_activity%22] EAIF has also funded a small project to try and support groups at so-called "Public Ivies" [https://en.wikipedia.org/wiki/Public_Ivy#Notable_updates] in the U.S., with a special focus on public honors colleges that can contribute to socioeconomic diversity in EA [https://www.cambridgescholars.com/resources/pdfs/978-1-5275-0636-7-sample.pdf]. Feel free to DM if you're interested in this broader opportunity area, whether in the context of North America / other OECD member countries - or in the context of other regions of the world!

Thanks for writing this! Especially agree with: "We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive."

I've had a good experience with framing decisions around (reasonable) costs not getting in the way of high-impact work — not only from the perspective of optics, but also as a heuristic for where to draw boundaries (e.g. where to draw the line on what salaries to offer).

Concrete example affecting me right now: this summer I’m considering internships in mental health, x-risk or global health cause prioritisation, and I’m also considering just doing a bunch of Coursera courses and working on a start up.

I think ideally I would be choosing entirely based on what offers more career capital / is more impactful, but it’s difficult not to be influenced by the fact that one of the internships would pay me £11k more than the other 3.

You should keep in mind that high-earning positions enable a large amount of donations! Money is a lot more flexible in which cause you can deploy it to. In light of current salaries, one could even work on x-risks as a global poverty EtG strategy.

You should be influenced by that! It is evidence for donors thinking that org is more important, and that org thinking you are more important. Prices transmit valuable information.

I think for difficult questions it is helpful to form both an inside view (what do I think) and an outside view (what does everyone else think). Pay is an indicator of the outside view. In an altruistic market how good an indicator it is depends on how much you trust a few big grantmakers to be making good decisions. 

Ok, yes, but I think it's a little more complicated than that, or we would all be working at Goldman or Google, who are also able to deploy altruistic narratives.

8 · Larks · 6mo
Yes, the scope is "Orgs whose donors you respect for their capital allocation." Goldman doesn't have donors at all.
3 · Charles He · 6mo
Yes, you're right (Goldman was a bad/silly example to bring up). But it seems good to make the main point: it's possible and even ideal for salary to reflect impact. However, people have used outside salaries to justify differential salaries, and these justifications are extremely convincing (even if it is self-serving to write this). (I don't think you did this,) but given that justification, suggesting these norms are signals of impact risks leaning too hard on them. This might come off as slippery or wrong in certain situations.
5 · Will Greenman · 6mo
[Edit: The original version of this comment offered an idea that, as Mauricio flagged below, could be inconsistent with U.S. antitrust law. Thanks, Mauricio, for flagging my mistake. I retract the comment.]
9 · Will Greenman · 6mo
I wonder whether the exception for organized labor [https://www.law.cornell.edu/uscode/text/15/17] might apply in this context? Conspiring to suppress wages is clearly off-limits. But because the intention is to raise wages to a uniform base that makes all high-impact work similarly attractive, rather than to suppress wages, I'd be interested to explore whether workers could pursue the strategy above by forming a union and bargaining collectively with employers for a consistent contract. (I feel very uncertain of the feasibility of this idea -- Before pursuing this idea any further, I think it would be important learn more about constraints on collective bargaining with multiple employers for similar contracts, as well as any limits on funders' ability to encourage grantees to hire members of a union.)
1 · Tyner · 6mo
Maybe, does this apply to non-profits?

Yup - according to a 2016 FAQ from the two US agencies that enforce antitrust law (emphasis added):

You would likely violate antitrust law if you and the other nonprofit organizations agreed to decrease wages or limit future wage increases. [...] Your nonprofit organization and the others are competitors because you all compete for the same employees. It does not matter that your employer and the other organizations are not-for-profit; nonprofit organizations can be criminally or civilly liable for antitrust law violations.

I think a lot of points in this post are very valid and concerning to me. I hope they will be taken seriously.

4 · A_lark · 6mo
These points concern me too. When you say "I hope they will be taken seriously", I'm unsure who you have in mind. Taken seriously by whom?

I guess mainly FTX, Open Philanthropy, EA Funds, and CEA. I've shared the article with relevant people in all of those.

Thank you very much for this post. I thought it was well-written and that the topic may be important, especially when it comes to epistemics.

I want to echo the comments that cost-effectiveness should still be considered. I have noticed people (especially Bay Area longtermists) acting like almost anything that saves time or is at all connected to longtermism is a good use of money. As a result, money gets wasted because cheaper ways of creating the same impact are missed. For example, one time an EA offered to pay (I think) $140 of EA money for two long Uber rides for me so that we could meet up, since there wasn't a fast public transport link. The conversation turned out to be a 30-minute data-gathering task with set questions, which worked fine when we did it on Zoom instead.

Something can have a very high value but a low price. I would pay a lot for potable liquid if I had to, but thanks to tap water that's not required, so I would be foolish to do so. In the example above, even if the value of the data were $140, the price of getting it was lower than that. After taking into account the value of time spent finding cheaper alternatives, EAs should capture the surplus whe... (read more)

I'm worried that in some cases it might be the case that grant makers and grant receivers are friends who actively socialize with each other, and that might corrupt the grantmaking process. 

Being friends with someone is also a great way of learning about their capabilities, motivations and reliability, so I think it could be rational for rich funders to give grants to their friends more so than to strangers.

I disagree with you here. I think being friends with someone makes you quite likely to overestimate their capabilities/reliability, etc. If there's psychology research available on how we evaluate people we know vs strangers, I'd love to read it.

6 · Emrik · 6mo
There are two opposing arguments: 1) you get more information about your friends than you get about strangers, and 2) you are more likely to be biased in favour of your friends. Personally, I think it would be very hard to vet potential funding prospects over just a few talks, and the fact that I've "vetted" my friends over several years is a wealth of information that I would be foolish to ignore. Our intuitions on this may diverge based on how likely we think it is that we've acquired exceptional friends. If you're imagining childhood friends or college buddies, then I see why you would be skeptical. If, on the other hand, you're imagining the friends you've acquired from activities that you think only exceptional people would engage in, then that changes things.
3 · freedomandutility · 6mo
I think for funding a project, most of the important and relevant information about a person who might run the project can be obtained from a detailed CV. I think most of the information that a funder could obtain about a friend which they couldn't also get from the friend's CV is their impression of difficult-to-accurately-evaluate things like personality traits. I place very little value on a funder's evaluation of these things, because they are inherently difficult to evaluate anyway and I expect the evaluation to be too heavily biased by the funder's liking for their friend.

Perhaps we disagree on the difficulty of evaluating personality traits, but I think we probably disagree on the extent to which liking someone as a friend is likely to bias your views on them. My view has long been that the bias is likely to be so large that funding applications should include CVs but not the names of people. I think many EAs feel that systems like these overvalue credentials, but that could easily be gotten round by excluding university names and focusing CVs more on 'track record of running cool projects'.

I think for funding a project, most of the important and relevant information about a person who might run the project can be obtained from a detailed CV. 

Wait, what? The predictive validity of CVs is minimal for most jobs; one might naively guess that they ought to be even less predictive for funding entrepreneurial projects than for jobs.

Why do you think companies rely on referrals more than on CVs? 

There are lots of ways to accurately predict a job applicant’s future success. See the meta-analysis linked below, which finds general mental ability tests, work trials, and structured interviews all to be more predictive of future overall job performance than unstructured interviews, peer ratings, or reference checks.

I’m not a grantmaker and there are certainly benefits to informal networking-based grants, but on the whole I wish EA grantmaking relied less on social connections to grantmakers and more on these kinds of objective evaluations.

Meta-analysis (>6000 citations): https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.1733&rep=rep1&type=pdf

1 · freedomandutility · 5mo
Do you mean people referring potential employees to them, or the references of job applicants? I wasn't aware of what companies rely on more in recruitment. But yeah, my model of recruitment was very evidence-free and mostly based on my limited experience recruiting people for things; it turns out my model of what's most useful was basically the opposite of what some of the evidence says (https://www.talentlens.com/content/dam/school/global/Global-Talentlens/uk/AboutUs/Whitepapers/White-Paper-The-Science-Behind-Predicting-Job-Performance-at-Recruitment.pdf). I was very surprised to see peer ratings being so useful for predicting performance.
2 · Linch · 5mo
I meant people who work in a company referring future potential employees.
1 · wANIEL · 4mo
This is probably a very important distinction for those reading the above comments for the first time. "Referral" might be a better word, so as to distinguish from "reference" letters written by past supervisors.

I'm not sure if this perspective is helpful, but this issue reminds me of a somewhat analogous situation in the Financial Independence Retire Early (FIRE) movement. Originally the focus was on drastically limiting spending, increasing the savings rate as high as possible, and retiring shockingly young. Then, as time passed, some people realized they didn't want to live in such austerity. Other people found that they could move things along faster by focusing on earning more instead of spending less. Then there were people who didn't really want to retire, but rather to get enough income to be comfortable and then downshift their lifestyles. There were folks who just focused on making as much money as possible and remained in the community even though they were just about getting rich. Then some people sort of stumbled into the movement having made a ton of money on cryptocurrency or Tesla options or whatever... they never really applied any of the principles but still retired early.

With all these changes in the demographics and mindsets of the community I've noticed that the subjects discussed and the behavior encouraged has notably changed... (read more)

I think what you’re describing is drift due to size (well in this case of FIRE, it actually might be drift due to experiences/values/maturity but let’s say due to size). The FIRE movement is “wide”. Maybe more appropriately, a subreddit like r/antiwork or r/superstonk is “wide”. 

These “wide” movements have a lot of people. They often have momentum and can coordinate. But it’s unclear what resources and actions they can take, beyond buying stock or something. Also, as you point out, they have the tendency to drift or break apart.

But EA can do something else, which is getting “tall”. $50B of funding is just the beginning and this money is the least interesting resource EA has. EA can accumulate other things of great value. I think it's hard to write out exactly what these resources are (because it's hard to know in advance or because I’m dumb) but they are probably related to institutions and talent. One example would be a powerful applied math group that solves ELK.

Charles He · 6mo
One implication is that this “tallness” requires and justifies a strong, virtuous Leviathan “Center” that shepherds and adds to these resources. One role of the "Center" is to prevent systemic misuse, which doesn't look like stealing, but people being inside EA at some point, then leaving, taking away resources and not giving back (with the caveat that the departed resources could be used impactfully). The "Center" also needs to deal with other systemic issues, like entrenchment of individuals/entities or uncollegial subcommunities.

These resources also create dynamics that both prevent and increase drift/dilution. For example, these valuable EA resources aren't just money; they are sticky to EA. So a robust trend is people being attracted by them and trying to learn EA values. This is good, but one issue is that you need to figure out this flow and integrate new leaders and people successfully. Another important dynamic might be the constant upgrading of talent. Ideally, EA talent should get better and better, or at least be more experienced and have greater faculty. This creates a tension between existing cultures/groups and the flow of talent.

Each of these dynamics puts pressure on and implies different roles for the “Center”. For example, upgrading of talent means there's pressure for the “Center” to focus on virtue, governance and fostering people, instead of object-level work, or even strategy to some degree. But the point of this story is that purposeless drift and dilution aren't inevitable, and in fact are controllable by a good Center. The point of this control is the "tallness", or resources to execute effective altruism.

A few thoughts on how we could mitigate some of these risks:

  1. Have generous reimbursement policies at EA orgs but don't pay exorbitant salaries. 
    1. I think most EAs should value their time higher and be willing to trade money for time, and in these cases, I think you can justify a business expense. I think this will help clarify which spending choices are meant to actually boost productivity and which are just for fun. To be clear, I think spending some fraction of your income on just "fun" things like vacations, concerts, and eating out is fine in moderation. But to me at least, the shallow pond thought experiment is still basically true and there is plenty of need left in the world, even with the current funding situation. 
    2. I think we systematically overestimate how much spending more on personal consumption will make us happy/productive. I know plenty of people in finance/consulting/tech who have convinced themselves that they "need" to spend hundreds of thousands on personal consumption every year. I've lived in NYC on <$50K after taxes and donating for 4 years and feel like I've been able to do basically everything I want to do.
  2. Emphasize costly signals of altruism. 
    1. We should encourage people to take the GWWC pledge and go vegetarian/vegan because they're probably good things to do on their merits and because they signal a commitment to making a sacrifice to help others. 
Imma · 5mo
Strong upvoted because of the clear distinction between productivity/business expenses and spending money for fun/personal consumption.

Consider the analogy with food production and food waste in relation to global hunger. We can grow enough food to feed the planet. Our ability to solve world hunger is not constrained by food production, but, in my understanding, by logistical issues involving waste, transportation, warfare, and governance problems.

Likewise, in EA, our ability to address the problems with which we are concerned may be increasingly unconstrained by funding. Instead, it's bottlenecked by similar logistics problems: waste, governance, coordination within and between organizations, the challenges of vetting grants, finding talent, building new organizations, and, as you are pointing out, optics. Can't blame lack of funding for your failures when you're no longer bottlenecked by funding!

It's important to understand that these optics and logistical problems are not a fluke, or the consequence of something we did wrong, but a natural consequence of growing to a certain size. It's just the next set of problems for us to solve.

Going forward, I would advocate for basing discussion of perception issues on legible evidence. I have no problem with this post, which does a good job of furthering a meaningful conversation. I n... (read more)

With the caveat that this is obviously flawed data because the sample is "people who came to an all-expenses-paid retreat," I think it's useful to provide some actual data Harvard EA collected at our spring retreat. I was slightly concerned that the spending would rub people the wrong way, so I included as one of our anonymous feedback questions, "How much did the spending of money at this retreat make you feel uncomfortable [on a scale of 1 to 10]?" All 18 survey respondents answered. Mean: 3.1. Median: 3. Mode: 1. High: 9.

I think it's also worth noting that in response to the first question, "What did you think of the retreat overall?", nobody mentioned money, including the person who answered 9 (who said "Excellent arrangements, well thought out, meticulous planning"). On the question "Imagine you're on the team planning the next retreat, and it's the first meeting. Fill in the blank: "One thing I think we could improve from the last retreat is ____"," nobody volunteered spending less money; several suggestions involved adding things that would cost more money, including the person who answered 9, who suggested adding daily rapid tests. The question "Did participating in... (read more)

Apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. was thinking of 1 as the maximally uncomfortable score?

levin · 5mo
Hmm, this does seem possible, and maybe more than 50% likely. Reasons to think it might not be the case are that I know this person was fairly new to EA and not a longtermist, and that somebody asked a clarifying question about this question which I think I answered in a clarifying way, though I may not have clarified the direction of the scale. I don't know!
AllAmericanBreakfast · 6mo
Acknowledging that important caveat, I am very pleased to have this counterbalancing data available. I hope that we can continue to gather more of it and get a better sense of how the EA movement and its social surroundings think about these questions over time. Thank you for collecting it.

Thanks for writing this post, this is an area I've also sometimes felt concerned about so it's great to see some serious discussion.

A related point that I haven't seen called out explicitly is that monetary costs are often correlated with other more significant, but less visible, costs such as staff time. While I think the substantial longtermist funding overhang really does mean we should spend more money, I think it's still very important that we scrutinize where that money is being spent. One example that I've seen crop up a few times is retreats or other events being organized at very short notice (e.g. less than two weeks). In most of these cases there's not been a clear reason why it needs to happen right now and can't wait a month or so. There's a monetary cost to doing things last minute (e.g. more expensive flights and hotel rooms), but the biggest cost is that the event will be less effective than if the organizers and attendees had more time to plan for it.

More generally I'm concerned that too much funding can have a detrimental effect on organisational culture. It's often possible to make a problem temporarily go away just by throwing money at it. Sometimes that's the right ... (read more)

evhub · 5mo
Google, by contrast, is notoriously the opposite—for example emphasizing just trying lots of crazy, big, ambitious, expensive bets (e.g. their "10x" philosophy). Also see how Google talked about frugality in 2011 [https://bits.blogs.nytimes.com/2011/07/20/google-is-frugal-really-says-c-f-o/].
AdamGleave · 5mo
Making bets on new ambitious projects doesn't seem necessarily at odds with frugality: you can still execute on them in a lean way, some things just really do take a big CapEx. Granted whether Google or any major tech company really does this is debatable, but I do think they tend to at least try to instill it, even if there is some inefficiency e.g. due to principal-agent problems.

My thoughts on this:

  1. I think because of the flow of money into EA, it feels like some people have updated towards cost-effectiveness and counterfactual reasoning being less important than before.
  2. I disagree with that view - I think that cost-effectiveness and counterfactual reasoning are exactly as important as they were before, the only change should be that our cost-effectiveness bar for funding things should be slightly lower. It remains true that small amounts of money from people in rich countries can dramatically raise someone's income via GiveDirectly, save a life through AMF or improve someone's subjective wellbeing via StrongMinds, so the obligation to be cost-effective remains very strong.
  3. I think not enough effort seems to go into estimating and optimising the cost-effectiveness of the community building side of EA, probably in part because this is difficult to do, highly uncertain and thus prone to motivated reasoning.
  4. But I think much more effort should go into estimating and optimising the cost-effectiveness of EA community building anyway. Some concrete examples I'd like to see - do we think we overspent / underspent on EAGxs and EAGs this year?

This post clearly articulates a lot of the related thoughts I've been having and discussing with other organizers; well done. I will add my quickly dashed off thoughts, coming in particular from the perspective of a EA group organizer:

1. The time/money trade-off is real, particularly for mostly volunteer-led groups where volunteer capacity is our main bottleneck. Nonetheless, in my view, being cognizant of trade-offs when allocating resources is core to EA, and it is a real loss when we just vaguely gesture at the time/money trade-off and spend money without really thinking deeply about its best use. I advocate taking a rule-utilitarian approach to this: even if, in any given situation, really thinking hard about whether spending funds on something is the best use of those funds--even within a narrower framework like a group's overall goals--might take more time than it is "worth", it is still worth doing as a rule. This also reinforces the norms of talking explicitly about trade-offs, cause prioritization, and thinking strategically.

2. This is anecdotal of course, but I have directly seen people express discomfort when our group spends money on, e.g., paying f... (read more)

Because of Evan's comment, I think that the signaling consideration here is another example of the following pattern:

Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).

The other primary example of this I can think of is veganism and its signaling benefits (and usually unrecognized costs).

A solution is that when you find yourself saying “X will put off audience Y” to ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”

Warren Buffett called his private jet 'The Indefensible' — then renamed it 'The Indispensable' after realizing it was worth the money.

My suggestion would be that more people interested in Effective Altruism infrastructure donate to Giving What We Can instead of the E.A. Infrastructure Fund or CEA Community Building Fund. A community organized around effective giving is 1) better for optics; 2) better for us; and 3) anecdotally, I was inducted into E.A. through global poverty, and then later got into longtermism and animal welfare by extension. Without good infrastructure and a strong culture of effective giving, E.A. will cease to be an excited and exciting (and growing) community working to solve the world's biggest problems, and will become simply a few eccentric billionaires' weird AI-risk pet project.

Following the academic research closely as EAs often do produces many perspectives that are surprising to traditional activists. I'm a student at University of California Davis. Here my frugality is essential to getting my peers to take my perspectives on effectiveness seriously. If it wasn't for the frugality, they would dismiss me as not altruistic because I'm a moderate democrat instead of a socialist. I'm frugal because I believe it's the right thing to do (for me at least), not because of the optics. I don't know what the best answer is overall, but believe we should be particularly cautious about abandoning frugality in very left wing environments. Perhaps very different levels of frugality will be best in different communities.  

Even before a cost-benefit analysis, I'd like to see an ordinal ranking of priorities. For organizations like the CEA,  what would they do with a 20% budget increase? What would they cut if they had to reduce their budget by 20%? Same thing for specific events, like EAGs. For a student campus club, what would they do with $500 in funding? $2,000? $10,000? I think this type of analysis would be helpful for determining if some of the spending that appears more frivolous is actually the least important.

FWIW, I think it'd be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost to being a high-risk funder. EA is not about minimising crime, it's about maximising impact, and before we crack down on funding we should check our motivations. I don't want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.

I like the culture-aligning suggestions for other reasons, though. I think the long-term future will benefit from the EA community remaining aligned with actually caring about people.

With Asana's stock down 82% in the past six months, Meta down 43%,  and SBF's net worth cut in half in the past month, maybe the bigger worry should be a period of austerity and cutbacks?

I'm not sure if there's any data on this, but I think EAs do actually tend to come from well-off backgrounds. 

Because of that, I think a share (I'd guess like 15%?) of EA funding for career building for students and recent graduates doesn't actually have counterfactual impact and just provides funding for people to do stuff which they would have spent their own money on anyway. More money in EA will mean more money being used in this way.

Obviously, this wasted money is bad, because it's still important for us to be cost-effective and the counterfactual use is still AMF.

So I think we'd benefit from a strong norm against using EA funding for career building activities which people would have spent their own money on anyway.

I don't think we should retire the "do you think this would be a better use of money than giving it to AMF?" type thinking, we should keep it alongside "actually, yes, flow through effects could mean that this is a better use of money than giving it to AMF".

There's also probably a case for experimenting with means-testing for grants, which a lot of social initiatives use to focus their money on people who need it the most, which improves counterfactual cost-effectiveness.

Andrea_Miotti · 6mo
Current (highly engaged) EAs mostly coming from well-off backgrounds can also be a good argument in favor of more funding for career building for students and recent graduates, though. EAs from less-affluent backgrounds are those who benefit the most from career building and exploration funding, as they are the people most likely to face financial and other kinds of bottlenecks that prevent them from doing impactful stuff. Reducing career-building funding will just reinforce the trend of only well-off EAs, who can afford to take risks, staying engaged, while EAs from less affluent backgrounds are more likely to drift out of the community and less likely to take riskier but more impactful career paths. As you say, the solution would be to effectively assess whether career building has counterfactual impact, and ideally even fine-tune the funding amount to specific circumstances, although that could lead to the development of weird and undesirable incentives on the applicants' side.
freedomandutility · 6mo
Yes, I agree with you with regards to the amount of funding - one EA initiative I'd actually like to see is funding EA students from LMICs to go to the world's best universities. And yes, my idea is more about fine-tuning the funding to go to people where the counterfactual impact is higher (another plus would be that less EA money is used up by wealthier people, freeing it up for less wealthy people). I think means-testing is fairly widely used (at least in the UK). I use it myself to selectively distribute products from my social enterprise towards kids from lower-income backgrounds. I'm fairly confident that the downsides of means-testing (weird incentives, people trying to "game" the system, and the indignity it makes some people feel) generally don't outweigh the benefits of better targeting of funding. And in the EA context, I think the benefits of better targeting funding will be larger than usual because of the cost-effectiveness with which the saved EA money will be spent.

This is a great post, and I'm glad these points are being raised. I share a lot of the same concerns (basically, what happens to EA long term when it's just a good deal to join it?).

A big and small personal win from these changes in funding:

  1. I decided to launch a magazine reporting on what matters in the long-term in large part because of the change in funding situation and related calls for more ambition. I had the idea for doing this more than 3 years ago, but didn't pursue it. (We're aiming to launch in Mar 2023).
  2. In August, I quit my job at GiveDirectly to pursue freelance journalism full time, and planned to make basically no money for possibly 1-2 years. I cut a lot of costs to maximize my runway. A few months later, I got a job with an EA org that paid better than any job I had in the past. Now my time was scarce and money was not. I bought a free-standing dishwasher for ~$1000, which bought back ~45 minutes a day. I think this decision, and other smaller ones like it, were very good. 
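The dishwasher decision above can be written out as a quick back-of-the-envelope calculation. The ~$1000 price and ~45 minutes/day saved are from the comment; the $30/hour value of time is a hypothetical assumption added for illustration:

```python
# BOTEC: when does a ~$1000 dishwasher pay for itself in time saved?
# Cost and minutes saved come from the comment above; the hourly value
# of time is an assumed placeholder, not a claim about anyone's salary.

cost = 1000                    # dollars
minutes_saved_per_day = 45
value_of_time = 30             # dollars per hour (assumption)

hours_saved_per_day = minutes_saved_per_day / 60
dollars_saved_per_day = hours_saved_per_day * value_of_time
payback_days = cost / dollars_saved_per_day

print(f"Payback period: {payback_days:.0f} days")
print(f"Hours bought back per year: {hours_saved_per_day * 365:.0f}")
```

Under these assumptions the purchase pays for itself in about six weeks and buys back roughly 270 hours a year, which is the comment's point: at almost any plausible value of time, this kind of spending is easy to justify.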

But it's easy to get into self-serving territory where you value your time so highly that you can justify almost any expense (or don't think of cheaper ways to meet the same goals). This can also move us into territory where, to do ostensibly altruistic work, we don't give anything up, and, in fact, argue that others should give things to us. 

This feels fundamentally different from the movement that attracted me 5 years ago (though the reasoning is very consistent, and may well be right). 

Like others, I really appreciate these thoughts, and it resonates with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the "EA needs megaprojects" thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there's a good chance EA is around for a while and quite large and powerful. We need to make sure we put these tools to good use and retain the right values.

EA spending is often perceived as wasteful and self-serving

It's interesting here how far this is from the original version of EA and its criticisms; e.g. that EA was an unrealistic standard that involved sacrificing one's identity and sense of companionship for an ascetic universalism.

I think the old perception is likely still more common, but it's probably a matter of time (which means there's likely still time to change it). And I think you described the tensions brilliantly.

Congrats on having the most upvoted EA Forum post of all time!

Free food and free conferences are things that are somewhat standard among various non-EA university groups. It's easy to object to whether they're an effective use of money, but I don't think they're excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA universities groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it's less confusing why there is free stuff being offered. (Besides apparently being more appealing to students, I also genuinely think high-impact careers should be the focus of EA university groups.)

I'm in favor of making EA events and accommodation feel less fancy.

There are other expenses that I'd be more concerned about from an optics perspective than free food and conferences.

You find out that if you build a longtermist group in your university, EA orgs will pay you for your time, fly you to conferences and hubs around the world and give you all the resources you could possibly make use of. This is basically the best deal that any student society can currently offer. Given this, how much time a

... (read more)

Maybe I missed this in a previous comment (or even the text itself, I just ctrl+f'ed it after skimming it), but one thing I think it could be worth spending more on is better working conditions (I think several EA orgs already do this well, but I would be surprised if there are no "laggards"). Think staffing projects properly so there is no burn-out, paid parental leave for both parents, childcare facilities near bigger offices, properly paid internships, etc. Burn-out plagues the "making the world better" industry and I think we can attract a lot of ... (read more)

A lot of good points here.

A few thoughts on the benefits of a frugal community:

  • norms of frugality can help people avoid some of the consumeristic rat race of broader society. I don’t want EAs caught up in “keeping up with the Joneses.” I want EAs keeping up with good ideas and good actions.
  • I think we want a community where someone who uses careful reasoning to take an impactful role for $60K/yr feels just as welcome in EA as someone who uses careful reasoning to take an impactful role for $160k/yr.

Not sure if this is in any way a valid way of looking at it:

I wonder how the big spending looks from the perspective of a small donor. Say, a person with a median income within a rich country who gives 1-10 percent of their salary away.

I used to "earn-to-give" with an after-tax salary of 11 euros/hour. That's a lot compared to the global average! This was enough to donate >10 percent. But an hour of my past self's work could fund maybe a few minutes (?) of a researcher's time (I don't know what EA researchers earn) - and it might have been ... (read more)
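The comparison the commenter is gesturing at can be sketched explicitly. The €11/hour wage and 10% donation rate come from the comment; the €100,000/year fully-loaded researcher cost is a purely hypothetical assumption, since the comment itself notes the real figure is unknown:

```python
# Rough sketch: how many minutes of researcher time does one hour
# of earning-to-give fund? Wage and donation rate are from the comment;
# the researcher's fully-loaded annual cost is an assumed placeholder.

wage = 11.0                    # euros per hour, after tax
donation_rate = 0.10           # fraction of salary donated
researcher_cost = 100_000.0    # euros per year (assumption)
working_hours_per_year = 2_000

cost_per_minute = researcher_cost / (working_hours_per_year * 60)
donated_per_hour_worked = wage * donation_rate
minutes_funded = donated_per_hour_worked / cost_per_minute

print(f"Researcher cost: ~€{cost_per_minute:.2f}/minute")
print(f"One hour of work funds ~{minutes_funded:.1f} minutes of research")
```

Under these assumed numbers, an hour of earning-to-give at that wage funds on the order of one to two minutes of researcher time, consistent with the commenter's "a few minutes" intuition.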

Thank you for writing this post; I know these take a lot of time and I think this was a really valuable contribution to the discourse/resonated strongly with me. 

I find it helpful to get clearer about who the audience is in any given circumstance, what they most want/value, and how money might help/hurt in reaching them. When you have a lot of money, it's tempting to use it as an incentive without noticing it's not what your audience actually most values. (And it creates the danger of attracting the audience that does most value money, which we obviously don... (read more)

I think the point has been made in a few places that more money means lower barrier to entry and is an opportunity to reduce elitism in EA and I just wanted to add some nuance:

  • I think deploying money to literally make participation in the movement possible for more people is great (i.e. offering good salaries/healthcare/scholarships to people who would be barred from an event by finances).
  • On the other hand, I think excessive perks/fancy events etc.  are likely to be especially alienating for people who have close family members  struggling financially (this aligns with my own experience), so I worry that spending of this kind may actually make the movement feel less welcoming to people from a different socioeconomic background instead of more.

You point out it's difficult to control for "unilateralism". There isn't just one major funder but several, and each has many different areas and projects.

One thing that is more manageable and visible are "institutions" and culture around leadership:

  • I think there is a genuine culture of good leadership ("servant leadership"?) in older and more established EA institutions/funders
  • A lot of people right now in leadership and younger leader positions, seem to have given up higher income opportunities to be where they are
  • A lot of people are selected not just bec
... (read more)

Great post! This resonates a lot with me, and I'm happy the post has gotten a fair bit of attention. Anecdotally, this has increasingly become the part of EA I feel I have to answer for the most to outsiders these days.

A slightly related idea I've seen some success with — both in EA and elsewhere — is what I've come to think of as the reverse free lunch effect: When people get something fancy or expensive for free they tend to become aware that they are being incentivized to be there. After all, there is no such thing as a free lunch and there might be an implicatio... (read more)

Thank you so much for this post. It eloquently captures concerns that I've increasingly heard from group members (e.g., I know a fairly-aligned member who wondered whether a retreat we were running was a "waste of CEA's money"). While I agree that the funding situation is a boon to the movement, I also agree that we should carefully consider its impact on optics/epistemics. I also think all your suggestions sound reasonable and I'd be really excited to see, for example,

  • a 'go-to' justification (ideally including a BOTEC) for spending money on events
  • more M&a
... (read more)
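A 'go-to' justification with a BOTEC, of the kind wished for above, might look something like this sketch. Every number here is an illustrative placeholder, not a real cost estimate:

```python
# Hypothetical BOTEC for event spending: does a retreat's expected value
# exceed its cost? All figures below are illustrative placeholders.

total_cost = 15_000          # event budget (assumption)
attendees = 30
p_counterfactual = 0.2       # fraction whose engagement the event changes (assumption)
value_per_engaged = 5_000    # value of one counterfactually engaged member (assumption)

cost_per_attendee = total_cost / attendees
expected_value = attendees * p_counterfactual * value_per_engaged

print(f"Cost per attendee: ${cost_per_attendee:,.0f}")
print(f"Expected value ${expected_value:,.0f} vs cost ${total_cost:,.0f}")
print("Passes the bar" if expected_value > total_cost else "Fails the bar")
```

Even a crude template like this forces organizers to state which inputs (especially the counterfactual fraction) they actually believe, which is where most of the disagreement about event spending lives.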

I wonder if it might be possible to get volunteers to help find some of the opportunities to save money, in the genre of

putting students up in cheaper hotels, booking flights further in advance, or selecting cheaper flights where inconvenience is minimal (rather than treating money as no object).

I am not confident that this is true, because coordinating with volunteers is a lot of work and coordination-time is limited, but I could imagine a world where you could be like "here is my BATNA for booking flights for these speakers, if someone can improve upon this in the next 12 hours, I will donate the difference in money to the charity of their choice".

Vaidehi Agarwalla · 6mo
You could outsource this to someone who would save more per hour worked than the total cost of their time.
Linch · 6mo
Easier said than done!
Vaidehi Agarwalla · 6mo
True! But I think a good meta ops org could provide this kind of service for the community
Linch · 6mo
I agree that this will be a good thing for a meta ops org to do, and I'd be excited to see a meta ops org! I suspect there might be even more valuable things that a meta ops org can do however (e.g. handle the legal and financial aspects of many orgs).

I suspect that this will be more of an issue for the global poverty part of the movement and less of an issue for the long-termist component of the movement.

Dewi Erwan · 6mo
Why do you think it's less important for the x-risk/longtermism parts of the EA movement to have good PR and epistemics?

FWIW, Chris didn't say what you seem to be claiming he said

Chris Leong · 6mo
It’s easier to justify for longtermism because the comparison in people’s minds is less likely to be people starving in Africa, and it’s less likely to come off as hypocritical. So the PR risk is more manageable. Epistemics is a risk, though.

Maybe I'm misunderstanding this but I disagree. I think the average person thinks spending tons of money on global health and poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).

But I think that spending lots of money on people who might have a .000001% chance of saving the world (in ways that are often seen as absurd to the average person) is pretty bad optics. A lot of non-EAs don't think we can realistically make traction on existential risk because they haven't seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity - because there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won't assume that the longtermist approach is the only one that'll actually work.

Chris Leong · 6mo
Sorry, I think you might have actually misunderstood my point. I was talking about spending money on people working on global poverty vs. people working on longtermism, rather than spending money on global poverty vs. longtermism. My point is that if you invest a lot of money in people working on global poverty, the question that arises is why you aren’t spending it on global poverty, while it’s hard to spend money on longtermism without spending it on people. In any case, people are more accepting of AI researchers being paid large sums.
Marisa · 6mo
That makes sense though I feel like this still applies. It's still not great optics to pay lots of money to people working on global poverty, but it's far from unheard of and, if there's concrete evidence that those people are having an impact then I think a lot of people would consider it justified. I think the reason it's acceptable for AI researchers to bring in large sums of money is more because of the market rate for their skillset and less because of the cause directly. I think if someone were paid a high salary to build complex software that solved poverty (if such a thing existed) I would guess that that would be viewed roughly equally. On the other hand if you pay longtermist and/or global poverty community-builders lots of money, this looks much worse.
G Gordon Worley III · 6mo
Maybe I can help Chris explain his point here, because I came to the comments to say something similar. The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.

Neartermists are right to be worried about spending money on things that aren't clearly impacting measures of global health, animal welfare, etc., because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party, because that big party could have saved some kids from dying.

Longtermists are right not to be too worried about spending money. There are astronomical amounts of value at stake, so even millions or billions of dollars wasted doesn't matter if it ended up saving humanity from extinction. There might be nearterm reasons related to the funding pipeline for them to care (so optics), but long term it doesn't matter. Thus, longtermists will want to be more free with money in the hopes of, for example, hitting on something that solves AI alignment.

That both these things try to exist under EA causes tension, since the different ways of valuing outcomes result in different recommended behaviors. This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.

I don't have anything smart or worthwhile to comment, but I want to say that I am glad you wrote this.

I'm quite uncomfortable with the idea that the best use of money is to give it to inexperienced young people from wealthy families who went to expensive schools. Helping privileged people get access to more privilege doesn't rank high on my personal list of cause areas, and I'm glad that someone is speaking out against this trend.

A_lark · 6mo
I’m uncomfortable with this too, but more comfortable than I used to be. Privileged people have a lot of power/leverage in the world. That leverage can be squandered, used for selfish means, or used for good. If we think EAs have uniquely good ideas for identifying and solving neglected, pressing global problems, I want people with lots of leverage to learn from EA. The counterfactual is they use their leverage to do less altruistic or less effective things. I am willing to put money toward avoiding that.

Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less, giving.

Thanks for writing this up!

This post does resonate with me, as when I was first introduced to EA, I was sceptical about the idea of "discussing the best ways to do good". This was because I wanted to volunteer rather than just talk about doing good (this was before I realised how much more impact I could have with my career/donations) and I think I would’ve been even more deterred if I’d heard that donated funds were being spent on my dinners.

However, it sounds like my attitude might have been quite different to others, reading the comments here. Also, I suspect I would’ve ended up becoming involved in EA either way as long as I heard about the core ideas.

I think a giga-donation ($1B+) or two to GiveDirectly would go a long way toward improving optics (and, let us not forget, saving millions of lives!). In general, extravagant spending should be matched with such donations.

There should be some “optimal” allocation of funding or best effort to find one.

If there are extravagances (wasteful high spending that is ex ante bad), we should reveal that here publicly, analyze it, and take action so that it doesn't happen again.

It doesn't make sense to reallocate vast amounts of money to offset another bad act.

I strongly agree that one should focus on impact, not on offsetting. See Claire Zabel's post against offsetting

Greg_Colbourn · 5mo
I'm not sure if offsetting is the right reference class. Maybe moral trade is more relevant? If we want broad support for (or at least to limit opposition to) EA/Longtermism, we should also do things with broad appeal (that are still highly effective in absolute terms, e.g. GiveDirectly).
Greg_Colbourn · 5mo
The difficulty is in judging what is "wasteful". To many outsiders, six-figure salaries for non-profit work will be judged to be "wasteful" or extravagant regardless of whether or not it actually is (from a counterfactual, all things considered, EA standpoint). In terms of optics at least, present-day inequality is a big thing.
Greg_Colbourn · 5mo
I think it's kind of ironic this has been downvoted, given that a similar point is made in the (most upvoted post of all time) OP, and that it's a comment about optics. What are the downvoters' thoughts on optics?
Charles He · 5mo
I didn't downvote this. I am guessing that the reasoning in the comment isn't "impact focused".

Probably one of the key ideas EA brings is the ability to focus large resources on highly effective, impactful activities or projects or institutions, which sometimes involves high salaries or other high spending. This idea is criticized sometimes. But it often seems that these criticisms lack a vision/model/understanding of how highly effective people or projects operate and succeed.

Another way of seeing this is to look at ineffective non-profits. I think that it's very unlikely that all the non-profits outside of EA are ineffective because everyone in them is dumb or unprincipled. Instead, it seems like people are caught in some sort of "Malthusian-like" trap. They often internalize beliefs where they have low spending and spend their limited time on bad activities that ultimately look like appeasement, and attend to social/political beliefs that don't go anywhere. This situation drives out talent and prevents critical long-term planning.
Greg_Colbourn · 5mo
Right, I get that, but I'm talking about the perception of EA (optics) as viewed from the outside (as is the OP). Looking at the meta-level: will EA's impact be maximal if it is politically opposed? I'm playing devil's advocate: it looks a bit suspicious if we conclude that the best way to have an impact is mostly to pay already privileged people high salaries, especially given global inequality (hence the suggestion of GiveDirectly). Why not be in a strong position to counter this by saying we're also taking significant steps to combat global poverty?

This post is excellent - thank you for writing and sharing. ❤️

Regarding this suggestion:

"Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind."

I think this would be hugely helpful, and that such a forum should be open and accessible to the rest of the EA community. I agree that SBF and Dustin+Cari have made amazing strides and are funding generally awesome things, but there's something unsettling about them being able to unilaterally move the needle...

A core issue with "voting" is that it's not hard to change the voting pool (a whole other side to the coin that no one has stirred everyone up with a post about, I guess because it's less visceral than being infiltrated by stealthy predators). The incentives to change the voting pool would be so vast, and the institutional demands to regulate it so large and nonexistent, that the system would collapse almost immediately.

Stefan_Schubert · 5mo
I agree that that's a difficult issue. I also think that even if it could be solved, current decision-making processes lead to better decisions than this proposal would.