
The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.

I’m writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I’m posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.

[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are mine]

Summary

  • We can think of effective altruists as participants in a market for maximum impact activities. It’s much like a local farmers’ market, except people are buying and selling goods and services for how best to help others.
  • Just like people in a market, EAs don’t all share the same goal - a marketplace isn’t an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things. 
  • Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing-field. 
  • Currently, the EA community doesn’t quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it’s a ‘monopsony’). 
  • I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don’t claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.
  • These reforms seem sensible even without any of EA’s recent scandals. I do, however, explain how they would likely have helped lessen these scandals too.
  • I’ve tried to resist getting into the minutiae of “how would EA be run if modelled on a free market?” and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.
  • I’m not very confident in the below. It’s an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the “what should FTX mean for EA?” question. 

What is EA? A market for maximum-impact altruistic activities

What is effective altruism? It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole. 

An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effective altruism community as a group of individuals participating in a marketplace for maximum impact philanthropic goods and services - an agora for altruists. 

This may initially seem strange, but the effective altruism community is somewhat analogous to a local farmers' market. However, instead of the market stalls offering fruit, vegetables, and so on, the sellers are touting various charities, as well as research into those charities. There are buyers - donors - who are looking to get the best value for their money. There’s a social community that’s formed around the market. There are some people whose job it is to run the market, too.

The motivation of the participants in each market may be different: people come to the 'EA market' for the good of others but go to the farmers' market for their own good. However, in many other respects, the markets function identically: buyers come to shop around and find the best deal for their money, and sellers are trying to sell as much of their goods as possible. 

One key practical difference is that, whilst you can inspect the quality of goods in the farmers’ market yourself (you can see if the fruit is rotten), buyers in the EA market can’t easily test or observe the goods they buy (you don’t see what happens when you, say, donate to AMF). Hence, arguably the key players in the EA market are the research organisations, which provide advice to participants, equivalent to consumer champions, such as Which? in the UK.

What, if anything, about EA is new? Of course, markets are old, as are charities and events around charities. Perhaps what’s new is that this is the first historical example of a marketplace for people explicitly committed to achieving the maximum impact for their altruistic resources, rather than just some impact: a kind of free-market economics applied to philanthropy.[1] I suspect that simply creating a market for maximum impact altruism is an extremely important - and so far unrecognised - achievement of the community.

Considering EA as being like a farmers’ market, you don’t have to look hard to identify equivalent structures. In EA, the marketplace itself is run by various arms of the Effective Ventures Foundation (EVF, which was formerly called, somewhat confusingly, CEA). CEA runs the EAG and EAGx conferences (the market days), the EA forum (the market noticeboard), the EA handbook (the market catalogue) and the community builder grants (market outreach). 80,000 Hours is the main advertiser for the marketplace. EA Funds is a platform for buyers. Giving What We Can is a buyer’s community. Longview is a club for large buyers that encourages particular purchases. And so on.

Whether or not you like the marketplace analogy, it’s pretty clear EA operates on a ‘hub-and-spoke’ model. There are some common bits that are widely participated in - the conference, the forum, CEA’s marketing, the social network - but otherwise there’s quite a lot of separation, with participants congregating by cause areas. 

If we see the EA community as a market, what follows about how it would work? 

It seems apt to understand effective altruists as engaging in a certain kind of economic activity. Given that, but also that we aren't used to thinking of EA this way, what can we say about how the EA market will function? We'll come back to questions of how it should, morally speaking, function in a moment.

One main takeaway is that we should conceive of effective altruists as functioning like any other economic agents, just with a rather unusual set of preferences. We should expect them to pursue their own self-interest. We should not expect them to be moral saints, or to be immune to ordinary human flaws and vices; this might sound obvious, but I’ve often got the impression that EAs think of themselves, and particularly their leaders, as fully selfless, perfectly rational, and so on: no, we’re all just people, squishy bags of flesh attached to bones, evolved from apes and living on the surface of a rock flying through space.

Something that follows from this first observation is that EA is not a collective project. We are not like an army, working together for the same goal, or, as Holden Karnofsky has suggested, the crew of a ship. People may want to ‘do the most good’, but have different conceptions of 'the good'. People with the same moral views might be analogous to the crew of a ship, but not the market as a whole. If you really want to keep the ship analogy, we're like multiple ships, with many people undecided on which ship to join or split between multiple ships.

In the market paradigm, the buyers and sellers don’t have, or need to have, much in common. The butcher, baker, and candlestick-maker each want to maximise profits for themselves, rather than the collective. As Adam Smith put it, relating to conventional markets, “It is not from the benevolence (kindness) of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest”.

It can be rational for participants to press for their own interests at the cost of others. Participants can and will believe that their products are superior; they may well think others are snake oil salesmen. Although the buyers all seek 'the best value for their money', as they see it, they do not think of 'best value' the same way. Indeed, that’s the point. There is a market because people have a diversity of preferences and they want a single place to go and shop - if you only want one item, there may be little point going to the market. Buyers and sellers with shared interests may group together to increase their negotiating power.

Perhaps the only thing the participants will, collectively, have in common is that they want there to be a market and, what’s more, for it to be run freely and fairly. As a whole, they’ll want to avoid corruption, nepotism, arbitrariness, dishonesty, etc. If you’re an apple seller, you don’t want your stall to be hidden at the back, whilst the other apple seller gets pride of place because he’s friends with the guy that runs the market. 

The shared aim of wanting a ‘free and fair market’ is fairly minimal, and desired only because it is in their individual interests. There isn’t some further, substantial goal all the participants need to share; it would be surprising if there was one.

As a consequence of their different interests, it will be rational for individual participants in either market to bend or break the rules in their favour, establish favourable or monopoly positions for themselves, and so on, if they can. This problem doesn’t go away in the EA marketplace. If anything, it’s sharpened: rather than trying to get the best deal for yourself (as you see it), you aim to get the best for others (as you see it), so there is some moral reason to bend the rules in your (moral view’s) favour. Equally, participants may be annoyed if they see others abusing their power to promote their own interests.

How, in broad terms, should the EA market be organised? As a free market

I’ll discuss some specific changes for EA in the next section.

If we ask “how should a farmers’ market operate?”, an obvious, but not very informative answer, is “to bring about the best outcomes”. Fine, but how do we know what those are? Perhaps some social planner should decide what would be best for people, then force the market to produce those goods. That, indeed, is the approach preferred by socialists and, to a greater extent, communists. 

The standard answer, the one out of vanilla economic theory - and the one I imagine most EAs would generally agree with - is that there should be a free market. Individual consumers choose what they want, producers react to demand, and what is ultimately produced is decided by ‘the market’. The free market is the means to the end of producing the best outcomes.

Given that the EA community seems to be a market, like any other, the presumption is that it should be structured like any other free market unless there are sufficiently good reasons to deviate from this. 

I don’t want to get into the minutiae of exactly how markets should be run. It’s not my speciality, and I suspect that focusing on the details would distract from the broad strokes.

I suspect some will object to this model: the purpose of EA is, in the end, to 'do the most good', not ‘operate as a free market’, so if we already know what the particular priorities are, shouldn’t we just push people towards those? Here, we need to be careful to distinguish both means from ends and, also, the perspective of individuals from the collective.

This situation of individuals thinking ‘I know what’s best’ is, of course, essential to the functioning of an ordinary free market: you want sellers to create and tout goods to customers, provide choice, compete over price, etc. because this allows consumers to get better deals. 

However, we still want sellers to compete in the free market, and we want to resist attempts by sellers to distort the market in their favour. There’s a reason governments have bodies that work to maintain competition and break up monopolies.

Given this, there’s an important distinction between the mindset of individual participants and the organisers of the market. The participants are entitled, indeed encouraged, to advocate for what they think is best (within the rules, whatever they are). But the organisers of the market should maintain a level playing-field, remain studiously impartial between these and only intervene to correct market inefficiencies (I leave open how much correction is warranted). It's through competition that the buyers get the best deals. Ultimately, the question of “who’s right?” should be determined by the participants in the market, not its organisers. You get the best outcomes because, not in spite, of there being a level playing field. You want a free market as the means of achieving good outcomes - it's not an end in itself.

Although it’s tempting for participants in the EA market to nudge the moral market in their favour, i.e. ensure more of their preferred goods are sold, doing so seems objectionably paternalistic. It combines arrogance - “I know better than others” - with insecurity - “I do not believe my ideas would win in a free market, so I must intervene”. If we think state interference, monopolies and so on, are bad in ordinary markets, we should object to them in high-impact philanthropic markets too.

Lurking in the background is a concept of fairness. We’re talking about morality, so don’t we need to decide on the correct theory of morality first to run the moral market? The obvious, if not perfectly simple, principle to appeal to here is Rawls’ Veil of Ignorance: how would you like the market to be run if you didn’t know who you would be in it? You could be a buyer or seller, and you could have few or many resources.

Given this, we can distinguish between whether some change to the structure of the market is good or bad for an individual from whether it promotes or reduces fairness. I suspect we should praise those who strive to make the effective altruism market fair, especially when this is counter to their narrow interests. 

The market itself will need to be organised by someone, and could well have various functions, such as advertising the market as a whole to potential customers, and deciding who gets which stalls. But it should not favour particular goods or services.

To what extent is EA functioning differently from this right now?

The effective altruism community is full of wonderful, smart, thoughtful, caring people - many of the best people I know. I find it a continued source of inspiration. My aim here, however, is to flag some ways in which effective altruism, as a system, could possibly work better. Although my intention is to highlight how things might be improved, I don’t want this to detract from the excellent work that people do. My focus is on the overall functioning of the ‘market’. I don’t think I have much, if anything, to say about the behaviour of the small buyers and sellers. This is consistent with, indeed follows from, conceiving of it as a market and wanting it to run as a free market. If you have a well-regulated market and lots of buyers and sellers, the result is a competitive market. Hence, you want to focus on removing the barriers to there being such a market.

I see a few related problem areas.

(1) there is a big overlap between the administrators and the participants in the market.

CEA/EVF is the hub that coordinates the wider movement, but it, and its trustees, are also participants who advocate for particular things. Of the seven total trustees of the US and UK EVF boards, six are current or former 'buyers' or 'sellers' in the market. As far as I can tell, Tasha McCauley is the only one not directly involved in another effective altruism organisation. Claire Zabel, Nick Beckstead, Eli Rose, Nicole Ross and Zachary Robinson are all current or former Open Philanthropy employees. Nick Beckstead was the chair of the FTX Foundation. The final trustee is Will MacAskill, who wears more hats in effective altruism than it is easy to list. Aside from being participants in the market, two of them (Nick and Will) are also major public proponents of a particular market 'outcome', namely longtermism (see Nick's thesis and Will's recent book). This is a bit like having the farmers' market mostly run by the orange sellers' association - arguably non-ideal. CEA promotes a particularly longtermist agenda, as does 80k.

I would like to see CEA/EV acting as something like an impartial regulator, civil service, or public service broadcaster, for EA: it should try to impartially manage things, not take a stand. If you want to be outspoken, then that’s fine, but you should probably let someone else take the reins of organising the market. As the Bible puts it, no man can serve two masters. 

To be clear, I don't want to blame anyone for the status quo. A charitable and fairly plausible explanation is that the people most passionate about the market would also have views about what the priorities should be - the products it should produce. So the original participants became organisers. But as EA looks to the future, it seems preferable to move to a split. I imagine some organisers might welcome this change - it's awkward to act as something like a politician and a civil servant. 

(2) the central market structure lacks accountability to or oversight by its participants.

The farmers' market would be accountable to the local government and ultimately, the electorate. Conspicuously, the Effective Ventures Foundation, which contains a number of charities, including the Centre for Effective Altruism (CEA), is accountable only to its trustees - and perhaps, practically, its donors - and has no democratic elements. This seems non-ideal. CEA, as its name indicates, to a large extent represents the EA movement as a whole, yet there is no formal representation of the EA movement in its decision-making. As the American revolutionaries might have said in this context, “no representation without representation” (rather than “no taxation without representation”).

(3) there’s only one major buyer. 

The EA funding market is effectively a monopsony. This is the mirror image of a monopoly: there is one dominant buyer rather than one seller. Open Philanthropy comprises something like 60% of the total purchasing power (from eyeballing Ben Todd's now out-of-date figures). I don't find monopsonies as intuitive as monopolies, but my understanding is that, just as monopolies are bad if you're a buyer, monopsonies are bad if you're a seller. Because there is only one buyer, you get reduced competition and innovation: organisations aren't incentivised to produce goods for other buyers, but instead to focus on what the large buyer wants.
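
To give a rough sense of what that level of concentration means, here is a minimal, purely illustrative sketch using the Herfindahl-Hirschman Index (HHI), a standard measure of market concentration. The ~60% figure is the one eyeballed above; every other share is invented for the example.

```python
# Illustrative only: Herfindahl-Hirschman Index (HHI) applied to funding concentration.
# The 60% share is the Open Philanthropy figure cited above; all other shares are made up.

def hhi(shares_in_percent):
    """Sum of squared shares (in percentage points); 10,000 means a single buyer."""
    return sum(s ** 2 for s in shares_in_percent)

funder_shares = [60, 10, 8, 7, 5, 5, 3, 2]  # hypothetical split of EA purchasing power
print(hhi(funder_shares))  # 3876 - far above the roughly 2,500 level that US merger
                           # guidelines have treated as 'highly concentrated'
```

However the remaining 40% is split, one buyer at 60% alone contributes 3,600 points to the index, so on any standard reading this 'market' counts as highly concentrated.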

There's been discussion in EA about regranting from the major funders - Open Philanthropy and (formerly) FTX. I take this as recognition that there's an issue with funder concentration.

The combination of (1)-(3) means there’s little or no effective external scrutiny of how EA is run. Who, other than an insider, has the incentive or power to say “hey, is this really such a great idea?”

There may be lots of other issues too that someone more economically literate would spot.

What should be done?

The most obvious solution for (1) and (2) is to remove the overlap between participants and administrators. One way to do this is for CEA/EV to move to having a partially or fully elected set of trustees. These trustees should ideally not be both ‘poachers’ and ‘gamekeepers’. The central functions in effective altruism should strive to be genuinely impartial.

What would the electorate be? Contra Dustin Moskovitz, aka ‘The Sponsor’, who suggests defining the EA community is a ‘non-starter’, there are quite a few options: Giving What We Can members; people who have previously attended EAGs; or CEA could become a fee-paying society, like many others. I’m sure there are other options. Personally, I like the idea of GWWC members selecting the trustees of CEA: giving 10% is a costly signal and it gives people an incentive to take the pledge. I recognise it's too late to do this for the new round of EV/CEA trustees that are being recruited, so this is something to consider for the future.

CEA could be more accountable and allow for more ‘voice’ from the community, even without reforming its structures. One simple thing would be for the CEO(s) to hold 'townhall' meetings at EAGs where people could (anonymously) ask them difficult questions.

I don’t have an easy solution to (3). I think it’s good there is one major buyer, rather than none, and I can’t simply magic up another billionaire. That said, if CEA had democratic elements, it would then have to balance the desires of its major funder with that of the electorate, rather than being so concerned with the former, which seems like progress.

One suggestion for funder concentration that gets mooted is regranting, e.g. Open Phil gives money to a bunch of other people to spend on its behalf. This doesn't seem like a very promising solution, and we can see why in light of the market analogy: it’s basically like the rich local landowner giving a lot of money to his mates and telling them to buy things in the farmers’ market. If they buy the same things, nothing changes. If they buy different things, he’ll think it was a bad idea. Either way, it’s only a temporary spike in demand. Perhaps the better solution, if the major donor(s) want the market to exist, is for them to financially support an impartial market they do not themselves run (I recognise someone might argue this already happens, to some degree).

Possibly, if EA was run more conspicuously as an impartial market, it would attract and retain more large ‘buyers’.

Similarly, econ-literate folk may spot improvements that have not been obvious to me.

How would all this have helped EA with its recent scandals?

I hope that all of the above seemed sensible without appealing directly to SBF, sexual misdemeanours, or racism. But I do think we can see how it would help, to some extent, with those.

SBF. A question we might ask is: why was Sam Bankman-Fried a problem for the EA community? What does it have to do with the wider community that some guy who identified as EA, and was a donor, seems to have committed an enormous crime? I think we can pick this apart by imagining alternatives.

Suppose a big buyer disappears from the marketplace. This would be unfortunate, but there’s no scandal.

Suppose there's criminality on the part of some people in the marketplace. This raises general issues of community trust, as well as specifically of policing, but it wouldn’t obviously rock things to the core. And we should expect it to happen. People are still people, even if they participate in the maximum altruism market.

Suppose a major buyer collapses because they were engaged in criminal activity, and their staff are also deeply involved in running the market. This is much more serious because it goes right to the heart of the market. It raises questions about whether the core staff were sufficiently attentive to the health and good running of the market as a whole. One perspective on why SBF-gate shocked EA to its very core was that the people working with SBF at his fund were the same key EAs who run the movement/market, and are key participants in it, e.g. Will MacAskill and Nick Beckstead.

What if EA were organised along the lines suggested above, with a democratically infused CEA that was not a participant in the market?

For one, it would have contained the problem. Imagine FTX had collapsed, but its staff were just ‘ordinary’ EAs and not involved in running anything else. This would have been a much smaller problem, and not so clearly damning for EA as a whole.

For another, democratic elements would have improved scrutiny. With EA run as a very tight group, there was no one who had the incentives, or the power, to ask probing questions like “hey, is this guy legit? Should we really take his money?” I would absolutely have expected someone to ask those questions about SBF at anonymous townhalls, at which point the CEO of CEA would have to at least think about it. 

What about Nick Bostrom and his ‘old email’ case? This struck me as a relatively contained scandal exactly because Nick ‘just’ runs a ‘spoke’ in EA, the Future of Humanity Institute, but he doesn’t (also) have a role in the hub. If I'm an apple-seller, I don't think it has much to do with me if the chief cheese dealer has unpleasant views.

Finally, let’s turn to the scandals about sexual behaviour and consider the case of Owen Cotton-Barratt, the trustee of CEA who recently resigned. A couple of things stick out. 

One is that it's difficult to police the behaviour of your boss, which is what the community health team had to do. Saliently, the community health team knew about the incidents, but didn't raise them to the whole board [Chana Messinger commented on a draft of this that the health team "did bring this to a UK board member" but not the whole board]. What would have happened if the trustees were elected - or at least there was greater scrutiny of CEA? Probably, the community health team would have been more accountable to the wider community and would have felt obliged to tell some or all of the board, who would, in turn, have felt obliged to tell the rest of the board and then act. At that point, the trustee would have made a public statement and/or resigned. They could still have stood for re-election, meaning the community could decide if the matters were sufficiently serious to merit not re-electing them. At the least, it seems far less likely this story would have emerged, years later, in the international press, with an open question over whether the community health team had prioritised its boss over the community.

The other is that Owen was, in addition to being a trustee of CEA, also a leading market participant in his role at the Future of Humanity Institute. As well as being non-ideal in itself, this arguably gave him additional influence that made it harder for the community health team to act.

Regarding the wider reports of EA community members being involved in sexual harassment, the complaint about the status quo is that there is a close-knit group of men who control the jobs, so speaking out is dangerous. But, again, if CEA were accountable to the community and separate from other organisations, it would be much easier for people to raise issues with CEA, and CEA would be in a stronger position to, say, warn offenders or ban them from events.

Closing thoughts

The effective altruism community began with a bunch of smart, idealistic, inexperienced young people about 10 years ago. It has become radically, unexpectedly, successful and then exploded into the public consciousness. Its original institutions were designed pretty much as you might expect a friendly, close-knit group to do, and they worked well when EA was small. But it's run into what we might politely call 'growing pains'. Now EA is big, we need - reluctantly - to grow up and develop new institutions that are fit for purpose. 

I think it’s possible to conceive of a new, better EA world emerging out of the SBF and other crises, one set up well for its next stage of growth. To me, that looks like an impartial market centre - or hub - that promotes the general ideas of EA, maintains a level playing field, and keeps the community healthy. And it involves the ‘spokes’, the participants, developing and sharing their ideas for how to do the most good. What's more, it certainly helps me, and I think it might help others, to see their involvement in effective altruism as participating in a market, rather than as an all-encompassing part of their identity.

I also suspect that, if EA is going to survive its many crises and retain some credibility, it needs to make some changes. The attitude I’ve observed from some leading EAs is, effectively, “SBF was a bad guy, we were unlucky, things are basically fine. Let’s wait for this to blow over”. I find that disappointing. As Winston Churchill reputedly said, "never let a good crisis go to waste". I fear effective altruists are, collectively, letting this crisis go to waste.

I’ve sketched some basic ideas and I’m curious to hear what people think about them. I’ve deliberately not got into the details of what a free market for maximum altruism would look like, both because that’s not my area and because I want people to focus on the big picture. But I would welcome economists and others to launch themselves at this task in due course.

  1. ^

    This might explain why effective altruism is treated with such suspicion by those who are sceptical of capitalism already.

Comments (33)

Thanks for sharing this Michael! I briefly discussed the idea with some of my coworkers, and we aren't sure the argument goes through:

The arguments for free markets I usually hear are things like: if you make some assumptions about the market participants (e.g. perfect competition), then you can prove that the equilibrium price is optimal in some sense, and therefore distorting the market moves you from optimality. I think there is some metaphorical similarity between EA and a market, but it's not clear to me that the assumptions of these theorems are actually satisfied by EA.[1]

Maybe more importantly though: the theorems usually show optimality for market participants, but EA is not optimizing for EAs; we are optimizing for EA's beneficiaries. These people do not participate in the EA "market," and I don't know of any reason to think that market efficiency within EA would necessarily result in their welfare being correctly priced.

  1. ^

    And if there is already some market distortion, removing other market distortions might not help

While I agree that the market metaphor has some significant limitations here, I think there's a separate set of arguments for free(ish) markets that is based more on experience rather than theorems. In many cases, they work well on the whole at achieving the ends to which market participants are working (which are admittedly usually self-interested ends). And they also often result in participants creating value for people they do not even consciously intend to benefit (as in the Adam Smith quote). 

I'd also suggest that goodness-of-fit here should be evaluated in a relative sense. In light of experience, I'd submit that the base rate of more-centrally-controlled charitable governance structures effectively optimizing for the charitable endeavor's stated beneficiaries is pretty low. One could adjust that base rate upward based on a conclusion that the people running the centralized governance structures in EA are more capable/selfless/suitable than the median people running other charitable endeavors. However, if one did so, one would likely think the EA community is also more capable/selfless/suitable than the median group of people making decisions in decentralized charitable governance structures. That would call for  applying an upward adjustment to the base rate of decentralized governance approaches working well for stated beneficiaries, prior to evaluating the proposal presented in this post.

the base rate of more-centrally-controlled charitable governance structures effectively optimizing for the charitable endeavor's stated beneficiaries is pretty low.

I would be interested in your data set here; this doesn't seem obvious to me.[1] 

  1. ^

    I assume you mean to say that more centrally controlled charities are worse; if you're just saying that the base rate amongst both centrally and non-centrally controlled charities is low, then I agree.

I assume you mean to say that more centrally controlled charities are worse; if you're just saying that the base rate amongst both centrally and non-centrally controlled charities is low, then I agree.

I am just saying that without making an assertion that the central or non-central base rate is higher. My reference to low base rates among centrally-controlled charities was an attempt to explain that Michael's market metaphor could have some significant limitations and yet could potentially be superior to alternative governance approaches.

My own view (noted in a separate comment) is that the nature of the specific community infrastructure function plays a significant role in whether I would predict a centralized vs. decentralized approach to work better.

but it's not clear to me that the assumptions of these theorems are actually satisfied by EA

Definitely not. A small sample of the obstacles to applying the welfare theorems here:

  • Certain "trades" have very large negative externalities due to the risk of reputational harm
  • The largest few funders have huge amounts of market power.
  • We're extremely far from perfect information. "Consumers" don't even have good knowledge of their own utility functions in most cases.
  • EA is not a complete system of markets - the vast majority of possible charitable interventions are not on offer at any given time.
  • Perhaps most importantly: "sellers" aren't profit-maximizers 

But even if EA did approximate a perfectly competitive market, that would imply very little about its effectiveness. The first welfare theorem says that perfect competition gets you a Pareto-efficient outcome, but Pareto-efficient outcomes can be almost arbitrarily bad. "The king owns literally everything" is Pareto-efficient. The second welfare theorem is where all the oomph comes from: given perfect competition, you can reach any point on the Pareto-frontier - by redistributing resources and then letting the market reequilibrate. But EA is not a state and cannot carry out redistribution, so this gets us nothing.
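
For reference, here are the standard textbook statements being appealed to; nothing below is specific to EA, and the assumptions listed are just the usual general-equilibrium ones.

```latex
% Standard statements of the two fundamental theorems of welfare economics.
\textbf{First welfare theorem.} If preferences are locally non-satiated and
$(x^{*}, p^{*})$ is a competitive (Walrasian) equilibrium, then the
allocation $x^{*}$ is Pareto-efficient.

\textbf{Second welfare theorem.} If, in addition, preferences are convex and
continuous, then any Pareto-efficient allocation $\hat{x}$ can be supported
as a competitive equilibrium, provided suitable lump-sum transfers of
endowments are made first.
```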

Hello Ben and thanks for this!

As I said in my comment to Fin Moorhouse below, I'm not sure what difference it makes that market participants are buying for others in the EA market, but for themselves in the normal market. Can you spell out what you take to be the relevant feature, and what sort of 'market distortions' it specifically justifies? In both the EA and normal markets, people are trying to get the 'best value', but will disagree over what that is and how it is to be achieved.

If the concern is about externalities, that seems to strongly count against intervening in the EA market. In normal markets, people don't account for externalities, that is, their effects on others. But in the EA market, people are explicitly trying to do the most good: they are looking to do the things that have the best result when you account for all the effects on everyone; in economics jargon, they are trying to internalise all those externalities themselves. Hence, in the EA market - distinctively from any other market(!) - there are no clear grounds for intervening on the basis of externalities.

Hi Michael, unfortunately it is late where I am, so the clarity of my comment may suffer, but I feel like if I do not answer now I may forget later; so, with perfect being the enemy of the good, I hope to produce a good enough intuition pump for my disagreements:

  1. An example of a market where the buyer buys for others is the healthcare market, where insurers, hospitals, doctors, and patients all exist: patients buy insurance, insurers pay hospitals, which pay doctors (in the US, doctors may work as small sole traders within the hospital, like a shop in a mall). As a result, a lot of market failure happens (moral hazard, adverse selection). In this case, you would have to model each seller and buyer as perfectly informed, and perfectly able to communicate the needs of those they help, which is problematic. I know how much I need something, so I buy it; but here, a fund needs to guess how much donors wanted something improved, while a charity guesses how much improvement the recipients got, and then they meet in the middle? Troublesome; perhaps in this case you just pay the smartest charity people (a la Charity Entrepreneurship) and trust them to do the best they can, instead of spending energy competing with others to prove their worth.
  2. This brings in the problem of market distortions through advertising - whichever charity spends more on looking good to buyers can get more than an equivalent one who does not, so the equilibrium tends to go towards "advertising". This can be all sorts of signalling which creates noise.
  3. Good is hard to measure, and most things that are hard to measure have markets that end up in very inefficient equilibria (healthcare, education, public transport) and are better off being centrally regulated (to an extent) in many cases.
  4. Counting on the people in the market caring about externalities, as you do above, passes the buck, but is actually a vulnerability of the system - people who come there and do not care about externalities would then have better-looking numbers. Also, humans are bad at noticing all of their externalities - I would hardly expect an AI safety researcher to be good at considering the ecological footprint of their solution, or even to think about doing so. Instead, a regulatory body can set standards that have to be met, making it easier for sellers to know what they need to have in order to compete on the market. The free market is bad at solving this.

Hope these make sense, and serve as discussion points for further thinking! Let me know your thoughts on these, I am curious to better understand if this makes you update away from your position, or if you had thought of this in ways I did not fully grasp.

Thanks Michael!

CEA could be more accountable and allow for more ‘voice’ from the community, even without reforming its structures. One simple thing would be for the CEO(s) to hold 'townhall' meetings at EAGs where people could (anonymously) ask them difficult questions.

I (Interim Managing Director of CEA) have been wondering if I should do an AMA. I would appreciate agree votes on this comment to indicate that you have questions you would ask me in an AMA. (And it would be even more useful if you actually asked the question in a comment here, but that's extra credit.)

(We do, in fact, have office hours at EAGs, and usually no one shows up. But maybe people are more interested in asking me questions on the Internet, I'm not sure.)

(there could be many reasons why this is the case - people at EAG have a high opportunity cost of attending office hours as there are always other lectures and 1-on-1s happening; also, it takes a certain level of affluence to attend an EAG - any that I would want to attend requires me to go through a one-to-two month visa process during which my passport is not with me, and which costs up to 50% of average salary in my country, not counting the airplane tickets and housing in some of the most expensive cities in the world where EAGs happen)

For extra credit: 

How important to you is pushing to open EA groups in countries where a lot of aid is going?

What kind of research is CEA doing into the counterfactual value of people doing community building?

What are you (personally) proudest of about CEA?

Hi Dušan,
I work with Ben as head of groups at CEA. If I could answer:

How important to you is pushing to open EA groups in countries where a lot of aid is going?

In general we've found it very difficult to "push" for opening an EA group. Running an impactful EA group requires a pretty high level of EA knowledge (alongside other skills), and trying to find an EA organizer with that level of skill in a country without an EA group has historically proved difficult.

Instead, we have prioritized having global platforms (e.g., Virtual Programs, EA Anywhere, and professional/affiliation-based groups). Additionally, when someone does wish to start a group, we have support available (e.g., resource centre, welcomer calls).

Not the main point of what you said, but there's a bit of a difference between the dynamic of one-on-ones discussion and a public forum.

Thank you for this Michael. I don't think I agree with the market metaphor, but I do think that EA is "letting this crisis go to waste" and that that is unfortunate. I'm glad you're drawing attention to it.

My thoughts are not well-formed, but I agree that the current setup—while it makes sense historically—is not well suited for the present. Like you, I think that it would be beneficial to have more of a separation between object-level organizations focusing on specific cause areas and central organizations that are basically public goods providers for object-level organizations. This will inevitably get complicated on the margins (e.g. central orgs would likely also focus on movement building and that will inevitably involve some opinionated choices re: cause areas), but I think that's less of an issue and still an improvement on the present.

Yeah, as I say, you don't need to buy the market metaphor - curious where you think it goes wrong though. All you really need to observe is

(1) that there are central 'hub' functions, where it makes sense to have one organisation providing these for the rest of the community (they are sort of mini natural monopolies) vs the various 'spokes' who focus on particular cause areas.

(2) you want there to be good mechanisms for making the central bits responsive to the needs of the community as a whole (vs focusing on a subgroup)

(3) it's unclear if we have (2).

Re: what goes wrong with the market metaphor: I mostly just think it raises all sorts of questions about whether or not the relevant assumptions hold to model this like an efficient market. Even if the answer is yes (and I'm skeptical), I think the fact that it pushes my (and seemingly other people's) thoughts there isn't ideal. It feels like a distraction from the core issue you're pointing to.

I think this is probably better framed as a governance problem. I think you're asking institutions that provide public goods to the "spokes" of EA not to pick favourites and to be responsive to the community. I think that point can be made well without reference to an EA market or perfect competition. I prefer the phrasing in 1-2-3 in your reply.

Points taken. The reaction I'd have anticipated, if I'd just put it the way I did now, would be

(1) the point of EA is to do the most good (2) we, those who run the central functions of EA, need to decide what that is to know what to do (3) once we are confident of what "doing the most good" looks like, we should endeavour to push EA in that direction - rather than to be responsive to what others think, even if those others consider themselves parts of the EA community.

You might think it's obvious that the central bits of EA should not and would not 'pick favourites' but that's been a more or less overt goal for years. The market metaphor provides a rationale for resisting that approach.

Yeah, good points. You may well be right.

I think point 2 is highly questionable though. Just from an information aggregation POV, it seems like we should want key public goods providers to be open to all ideas and to do rather little to filter or privilege some ideas. For example, the forum should not elevate posts on animals or poverty or AI or whatever (and they don't). I've been upset with 80k for this.

I think HLI provides a good example of how this should be done. If you want to push EA in a direction, do that as a spoke and try to sway people to your spoke. "Capturing" a central hub is not how this should be done. I think having a norm against this would be helpful.

That said, I also unfortunately do not think the market metaphor is going to be convincing to people. I think concerns around monocultures and group-think might be more persuasive, but again I don't have very well-formed thoughts here. But I do think that if the goal of EA is to do the most good and we think there might be a cause x out there or we aren't confident that we have the right mix of resources across cause areas, then there is real value in having a norm where central public goods providers do not strongly advocate for specific causes.

finm

I notice that I'm getting confused when I try to make the market analogy especially well, but I do think there's something valuable to it.

Caveat that I skim-read up to "To what extent is EA functioning differently from this right now?", so may have missed important points, and also I'm writing quickly.

Claims inspired by the analogy which I agree with:

  • Various kinds of competition between EA-oriented orgs is good: competition for hires, competition for funding, and competition for kinds of reputation
    • And I think this is true roughly for the same reason that competition between for-profit firms is good: it imposes a pressure on orgs/firms to innovate to get some edge over their competitors, which causes the sector as a whole to innovate
    • I think it is also good to have some pressures to exist for orgs to fold, or at least to fold specific projects, when they're not having the impact they hoped for. When a firm folds, that's bad for its employees in the short run; but having an environment where the least productive firms can go bust can raise the average productivity of a firm
      • If you don't allow many projects to fail, that could mean (i) that the ecosystem is insufficiently risk-tolerant; or (ii) the ecosystem is inefficiently sustaining failed projects on life-support, in a way which wouldn't happen in a free market
      • Here's a commendable example of an org wrapping up a program because of disappointing empirical results. Seems good to celebrate stuff like this and make sure the incentives are there for such decisions to be made when best
    • More concretely: I don't think we need to always assume that it's not worth starting an org working on X if an org already exists to work on X (e.g. I think it's cool that Probably Good exists as well as 80k)
  • Many things that make standard markets inefficient are also bad for the EA ecosystem. You list "corruption, nepotism, arbitrariness, dishonesty" and those do all sound like things which shouldn't exist within EA
  • It would be good if there were more large donors of EA (largely because this would mean more money going to important causes)
  • It's often good for EA orgs which provide a service to other EA orgs to charge those orgs directly, rather than rely on grant money themselves to provide the service for free. And perhaps this should be more common
    • For roughly the same reason that centrally planned economies are worse than free markets at naturally scaling down services which aren't providing much value, and scaling up the opposite

However, there are aspects of the analogy which still feel a bit confusing to me (after ~10 mins of thinking), such that I'd want to resist claims that in some sense this "market for ways to do the most good" analogy should be a or the central way to conceptualise what EA is about. In particular:

  • As Ben West points out, the consumers in this analogy are not the beneficiaries. The Hayekian story about what makes markets indispensable involves a story about how they're indispensably good at aggregating preferences across many buyers and sellers, more effectively than any planner. But these stories don't go through in the analogous case, because the buyers (donors) are buying on behalf of others
    • Indeed, this is a major reason to expect that 'markets' for charitable interventions are inefficient with respect to actual impact, and thus a major insight behind EA!
    • Another complication is that in commissioning research rather than on-the-ground interventions, the donors are doing something like buying information to better inform their own preferences. I don't know how this maps onto the standard market case (maybe it does)
  • Seems to me that the EA case might be more analogous to a labour market than a product market (since donors are more like employers than people shopping at a farmers market). Much of the analogy goes through with this change but not all (e.g. labour supply curves are often kind of funky)
  • I'm less clear on why monopsony is bad specifically for reasons inspired by the market analogy. My impression of the major reason why monopsonies are bad is a bit different from yours —
    • Imagine there's one employer facing an upward-sloping labour supply curve and paying the same wage to everyone. Then the profit maximising wage for a monopsonist can be lower than the competitive equilibrium, leading to a deadweight loss (e.g. more unemployment and lower wages). And it's the deadweight loss that is the bad thing (see the numeric sketch after this list)
    • But EA employers aren't maximising profit for themselves — they're mostly nonprofits!
    • You could make the analogy work better by treating profits for the donor as impact. I'm confused on exactly how you'd model this, and would be interested if someone who knew economics had thoughts. But it just seems intuitive to me that the analogous deadweight loss reason to avoid monopsony doesn't straightforwardly carry over (minimally, the impartial donor could just choose to pay the competitive wage)
  • Competitive markets can involve some behaviour which is not directly productive, but does help companies get a leg-up on one another (such that many or all companies involved would prefer if that behaviour weren't an option for anyone). One example is advertising (advertising is useful for other reasons, I mostly have in mind "Pepsi vs Coke" style advertising). I don't like the idea of more of this kind of competitive advertising-type behaviour in EA
    • Edit: this is an example of imperfect competition, thanks to yefreitor for pointing out
  • Companies in competition won't share valuable proprietary information with one another for obvious reasons. But I think it's often really good that EA orgs share research insights and other kinds of advice, even when not sharing that information could have given the org that generated it a leg-up on other orgs
    • Indeed, I think this mutual supportiveness is a good feature of the EA community on the whole, and could account for some of its successes
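
To make the monopsony sub-bullet above concrete, here is a minimal numeric sketch of the standard textbook model with a linear labour supply curve. All numbers are invented and nothing is calibrated to EA; it just shows where the lower wage and the deadweight loss come from.

```python
# Textbook monopsony vs. competitive benchmark, linear labour supply w(L) = a + b*L.
# Purely illustrative numbers; not calibrated to anything.
p = 100.0          # value of each worker's output (marginal revenue product)
a, b = 20.0, 1.0   # inverse labour supply parameters

# Competitive benchmark: hiring continues until the wage equals the marginal revenue product.
L_comp = (p - a) / b        # 80.0 workers
w_comp = p                  # wage 100.0

# Monopsonist: choose L to maximise p*L - (a + b*L)*L, which gives p = a + 2*b*L.
L_mono = (p - a) / (2 * b)  # 40.0 workers
w_mono = a + b * L_mono     # wage 60.0, below the competitive wage

# Deadweight loss: the surplus forgone on the hires that no longer happen.
dwl = 0.5 * (L_comp - L_mono) * (p - w_mono)
print(L_mono, w_mono, L_comp, w_comp, dwl)  # 40.0 60.0 80.0 100.0 800.0
```

Whether anything like this profit-maximising logic applies to a nonprofit funder is exactly the question the bullet raises; the sketch only shows the mechanism the deadweight-loss argument relies on.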

More generally, if the claim is that this market analogy should be a or the central way to conceptualise what EA is about, then I just feel like the analogy misses most of what's important. It captures how transactions work between donors and orgs, and how orgs compete for funding. But it seems to me that it matters at least as much to understand what people are doing inside those orgs — what they are working on, how they are reasoning about them, why they are working on them, how the donors choose what to fund, and so on. Makes me think of this Herbert Simon quote

Hopefully some of that makes sense. I think it's likely I got some economics-y points wrong and look forward to being corrected on them.

Thanks for this. Reading this, and other comments, I don't think I've managed to convey what I think could and should be distinctive about effective altruism. Let me try again!

In normal markets, people seek out the best value for themselves. 

In effective altruism (as I'm conceiving of it) people seek out the best value for others. In both cases, people can and will have different ideas of what 'value' means in practice; and in the EA market, people may also disagree over how to think of who the relevant 'others' are too.

Both of these contrast with the 'normal' charity world, where people seek out value for others, but there is little implicit or explicit attempt to seek out the best value for others; it's not something people have in mind. A major contribution of EA thinking is to point this out. 

The normal market and EA worlds thus have something in common that distinguishes them from the regular charity world. The point of the post is to think about how, given this commonality, the EA market should be structured to achieve the best outcomes for its participants; my claim is that, given this similarity, the presumption is that the EA market and normal market should run along similar lines.

If it helps, try to momentarily forget everything you know about the actual EA movement and ask "Okay, if we wanted to design a 'maximum altruist marketplace' (MAM), a place people come to shop around for the best ways to use resources to help others, how would we do that?" Crucially, in MAM, just like a regular market, you don't want to assume that you, as the social planner, have a better idea of what people want than they do themselves. That's the Hayekian point (with apologies, I think you've got the wrong end of the stick here!)

Pace you and Ben West above, I don't think it invalidates the analogy that people are aiming at value for others, rather than themselves. There seems to be a background assumption of "well, because people are buying for others in the MAM, and they don't really know what others want, we (the social planner) should intervene". But notice you can say the same thing in normal markets - "people don't really know what they want, so we (the social planner) should intervene". Yet we are very reluctant to intervene in the latter case. So, presumably, we should be reluctant here too.

Of course, we do think it's justified to intervene in normal markets to some degree (e.g. alcohol sales restricted by age), but each intervention needs to be justified. Not all interventions are justified. The conversation I would like to have regarding MAM is about which interventions are justified, and why.

I get the sense we're slightly speaking past each other. I am claiming (1) the maximum altruist market should exist, and then (2) suggesting how EA could be closer to that. It seems you, and maybe some others, are not sold on the value of (1): you'd rather focus on advocating for particular outcomes, and are indifferent about (1). Note it's analogous to someone saying "look, I don't care about whether there's a free market: I just want my company to be really successful." 

I can understand that many people won't care if a maximum altruism marketplace exists. I, for one, would like it to exist; it seems an important public good. I'd also like the central parts of the EA movement to fulfil that role, as they seem best placed to do it. If the EA movement (or, rather, its central parts) end up promoting very particular outcomes, then it loses much of what appeared to be distinctive about it, and it looks more like the rest of the charity world. 

Thanks for the response.

You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, contrasted to much of the charity world, where people don't tend to think of what they're doing as seeking out the most value for themselves or others. That sounds roughly right, but I don't think it follows that EA is best imagined or idealised as a kind of market. Though I'm not suggesting you claim that it does follow.

It also seems worth pointing out that in some sense there are literal markets for 'normal charity' interventions — like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most 'valuable' deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the "is this a market" test does not necessarily delineate your idealised version of EA from 'normal charity' alone. Again, not suggesting you make that exact claim, but I think it's worth getting clear on.

Instead, as you suggest, it's what the market is in that matters — in the case of EA we want a market for "things that do the most good". You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market where the buyers want to do the most good. Is that a fair summary of your view?

If so, I think I'm emphasising that descriptively the "...doing the most good" part may be more distinctive of the EA project than "EA is a market for..." Normatively I take you to want EA to be more like a competitive market, and there I think there are certainly features of competitive markets that seem good to move towards, but I'm also hesitant to make the market analogy, like, the central guide to how EA should change.

Couple other points:

I still don't think the Hayekian motivation for markets carries over to the EA case, at least not as you've made the pitch. My (possibly poorly remembered) understanding was that markets are a useful way to aggregate information about individuals' preferences and affordances via the price discovery mechanism. It's true that the EA system as a whole (hopefully) discovers things about what are the best ways to help people, but not through the mechanism of price discovery! In fact, I'd say the way it uncovers information is much the same as how a planner could uncover it — by commissioning research etc. Maybe I'm missing something here.[1]

I agree that the fact people are aiming at value for others doesn't invalidate the analogy. Indeed, people buy things for other people in normal markets very often.

On your point about intervention, I guess I'm confused about what it means to 'intervene' in the market for doing the most good, and who is the 'we' doing the intervening (who presumably are neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?

You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I'd say my view is more that I'm just kinda confused about exactly what the market analogy prescribes, and as such I'm wary of using the market metaphor as a guide. I'd probably endorse some of the things you say it recommends.

However I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a very major part of what makes it distinctive. Part of what makes EA distinctive is the set of features that identify those causes — a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities in light of new evidence. Those things have much in common with free markets, but I don't think we need the market analogy to see their merit.

  1. ^

    Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. In the EA case, by contrast, buyers care about altruistic value per dollar, while sellers (e.g. orgs) don't care about profit — they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?

Competitive markets can involve some behaviour which is not directly productive, but does help companies get a leg-up on one another (such that many or all of the companies involved would prefer that it weren't an option for anyone). One example is advertising (advertising is useful for other reasons too; I mostly have in mind "Pepsi vs Coke"-style advertising).

It's worth noting that this is a case of (and cause of) imperfect competition: perfect competition and the attendant efficiency results require the existence of perfect substitutes for any one producer's goods. 

I really appreciate this level of openness about possible changes, even though I disagree with almost every suggestion made here. I think that EA is chronically lacking in coordination and centralized leadership, and that its primary failures of late (obsessive self-flagellation, complete panic over minor incidents) could be resolved by a more coordinated strategy. As such, I feel that the "market" structure will collapse in on itself fairly quickly if we do not fix our organizational culture to stop panic spirals.

However, I do have a suggestion for resolving the monopsony issue. CEA and other movement-building organizations should focus large amounts of active fundraising effort on other billionaires (similarly to what many other charities do behind the scenes), and the community should become more supportive of earning-to-give (as many supposed "talent constraints" can in fact be resolved with enough hiring).

I agree with your second point wholeheartedly.

Could you give some examples of the panics over minor incidents?

The Bostrom email situation, and the Tegmark grant proposal situation, both seem very minor to me, at least compared to many other things that have happened to EA in the past with the same amount of panic or less.

Although I understand why you wanted to keep the discussion at a high level, I think various community-management functions are different in ways that affect the strength of your argument here. 

Certain functions should not, for various reasons, be duplicated. There probably should be only one Forum. It would generally be a bad idea for multiple organizations to try launching an EA group at a given university. Moreover, although funding for these kinds of functions is both essential and appreciated, the community often provides much or most of the value itself -- imagine a Forum without people "donating" their time to write and comment. The market metaphor works best for these types of functions, and the argument that market administrators should be both impartial and accountable to the community is strongest here. Using the Internet as a metaphor, Internet service providers that control the Internet backbone would be in this category, as would the Internet Engineering Task Force.

The analogy seems weaker for organizations like 80K. While 80K is in the business of promoting the sale of oranges, its existence doesn't meaningfully impede the ability of apple lovers to create a new organization (maybe -193.15C?) to promote the sale of apples. That 80K exists, and -193.15C doesn't, is primarily a function of donor preferences. Organizations of this type seem more like market participants than market regulators/controllers. I'd also put charity evaluators in this category.  With the Internet metaphor, most websites would fall into this category.

Functions like organizing EAGs fall somewhere in the middle. Conditional upon me winning the lottery, there's nothing stopping me from creating my own conference series. But conferences do involve network effects, so it's not accurate to say that the existence of EAGs does not meaningfully impair my ability to start up my own series. I'd probably need to offer partial travel grants and post pictures of upscale meals . . . or something to overcome CEA's built-in network advantage here. Likewise, one could view the value created by EAGs as resulting from a more even mix of organizational/donor and community contributions than the Forum or 80K. So running global conferences is like running Facebook or Twitter in the Internet metaphor.

It's possible that CEA's current functions should be split between two organizations, one of which takes what one might call a market-coordinating or infrastructure role,[1] and the other of which is more like a market participant with a specific viewpoint. Conditional on it being a good idea for any organization to be seen as "speaking" for EA, it should be the infrastructure organization. I would prefer that the infrastructure organization adopt policies to prevent too much financial dependence on one donor.

As far as community governance of infrastructure -- much (but not all) of the past discussion about how to define the community for governance purposes (such as the linked comment from Dustin Moskovitz) has been in the context of funding allocation. Many of the challenges with community definition would be significantly attenuated in the context of community governance. Moreover, given that a major purpose of community institutions is to empower and equip members of the community to effectively do good, there are good reasons to think those members would be well-suited to selecting leaders who would effectively accomplish that purpose.

 

  1. ^

    Although I use the term "infrastructure," tasks like technical development (as opposed to moderation or policy development) of the Forum could be handled by either organization.

I've just made a similar comment to Ryan. The central bits are more like natural monopolies. If you're running a farmers' market, you just want one venue that everyone comes to each market day, etc. The various stallholders and customers could set up a new market down the road, but it would be a huge pain and it's not something any individual wants to do.

Regarding 80k, they were originally set up to tell people about careers, then moved to advertising EA more broadly. That's a central function: in the market analogy, they were advertising the market as a whole to new customers, acting as a sort of 'market catalogue'. 80k then switched to promoting a subset of causes (i.e. products), specifically longtermism. I wasn't then, and still am not, wild about this, for reasons I hope my post makes clear: it's perverse to have multiple organisations fulfilling the same function - in this case, advertising the EA market - so when 80k switched to longtermism, it left a gap that wasn't easy to fill. I understand that they wanted to promote particular causes, but I hope they would appreciate that this meant they were aiding one sub-section of the community, and that the remaining, less-served subsections would feel disappointed as a result. I think someone should be acting as a general advertiser for EA - perhaps a sort of public broadcast role, like the BBC has in the UK - and 80k would have been the obvious people to fulfil that role. Back to the farmers' market analogy: if I sell apples, and the market advertiser decides to stop advertising fruit in its catalogue (or puts it in tiny pictures at the back), it puts the fruit sellers at a relative disadvantage.

As far as 80k goes, I think there are two dimensions here. First, I don't think "someone should be acting as a general advertiser for EA" implies that a particular organization must advertise for all of EA if it chooses to advertise at all. If no advertising organization existed, and then someone decided to advertise specific causes and/or to specific populations, I don't think anyone else would have standing to complain. That is, I wouldn't agree that it is generally "perverse to have multiple organisations fulfilling the" function of advertising.

The second dimension is that 80k pivoted to primarily promoting longtermism, leaving "a gap that wasn't easy to fill" because it had been a general advertiser. I agree that posed a challenge. On the other hand, I think it's also problematic to tell an organization that if it decides to undertake a role on behalf of the whole community, it is locked into that role for life -- no matter what strategic direction its donors, leadership, and staff may feel is best down the road. 

It's not clear what role donors play in the market metaphor; perhaps they are orange and apple farmers? If it turned out that orange farmers [donors interested in longtermism] were paying for most of the advertising, and orange vendors were doing most of the labor, it doesn't seem unreasonable for the advertising agency to announce a pivot. Telling orange farmers that they have to pay for a bunch of apple advocacy as the price of promoting oranges is going to reduce their willingness to pay for any advertising at all.

So what do we do when an organization was performing a role for the entire community but no longer wishes to do that? I think there has to be clear notice, a transition period, and a fair opportunity for the apple farmers/vendors to stand up their own organization to hawk apples. In other words, I think the community as a whole has a reasonable expectation that a broad service provider (whose role is not a natural monopoly by its nature) will continue to service the broader community in the short-to-medium run, and will carry out a transition fairly before disengaging in the medium-to-long run. 

I've generally supported CEA's general approach of trying to strike a balance between taking into account their own judgment and respecting the diversity of opinion within the community.

However, as AI timelines are starting to look shorter, I'm moving more toward the opposite side from the one you're on: it's starting to feel like EA should be treating our progress toward AGI as an emergency rather than continuing with business as usual.

I find the 'market' analogy helpful. There is a range of possibilities from centrally led to radically pluralistic, and your free-market suggestions show how a pluralistic approach might be implemented. The market analogy highlights several ideas - some central management should be cause-neutral, competition can be helpful, and the culture should be open to a marketplace of ideas. There is a role for leaders, but they need to be respectful of those working towards different views of how to do good.

Thanks for sharing this post, Michael. I thought it was well-written, timely, and made some very good points.[1] There's already been some discussion in the comments which I won't try to relitigate, so I'll share a couple of thoughts that stayed with me after I read your post.

  • I also agree that, absent the burst of activity following scandals or negative outside consequences over the last year, there hasn't been much public updating from EA organisations or leadership.[2] In particular, EAG London is coming up soon, and I only see one agenda item about these concerns.[3] This topic feels underprogrammed at EA conferences, and I think that's a missed opportunity - though I'd welcome other perspectives.
  • I liked the marketplace analogy as a way to frame the problem, though, as you said, I didn't hold too closely to it. I think some of the other discussion got a bit focused on that aspect rather than on the more important question: whether current EA institutions are well-structured for what the movement needs right now and, if not, what should change, by how much, and through what mechanism. These are the questions I think your essay raises that I'd love to see more community discussion on.

To overdo your analogy, if someone can point me in the direction of the right part of the farmers' market I'd appreciate it. I think these institutional questions are not going to go away, but I can't find the stall that's discussing them, and I don't think that I'm the best person to be squeezing 'improving EA institutions' lemonade from the 'EA institutional troubles' lemons.

  1. ^

    I've given it a strong-upvote for these reasons.

  2. ^

    Please point me in the direction of resources if you are aware of them.

  3. ^

    The 'Reflections on FTX' session, if anyone is wondering. I'll be attending it.

I like the analogy, and I found myself agreeing a lot with your suggestions, but I think there is a danger in it:

It portrays EAs as functioning very individualistically, and this could ultimately be ineffective.

Imagine a market where many of the buyers want to make a cake (say the cake represents improving global health - other people may go to the market for other reasons, but many go for the same reason: in this case, cake). Each buyer only has a little bit of money, though - not enough to buy all the ingredients for a cake - so each buys one ingredient, hoping that others will buy the rest and that everyone is working off the same recipe. Inevitably, though, they have different ideas of the cake they want to make, and what you end up with is a mess of different ingredients. The buyers DO get to eat something, but it isn't cake. What this represents in the real world is that EAs have targeted several individual health problems, but they haven't actually worked towards improving global health or reducing poverty as a whole.

Now imagine an alternative market where the buyers coordinate. They know they all want cake, so they discuss together what the best recipe would be. Then, when they have one, they organise who should buy what - and then together they are able to make a damn good cake. In the real world, this would mean tackling global poverty in a coordinated way, which involves addressing systemic change. 

What I'm pointing out is that if the buyers were to coordinate they might be able to do far more good. I think this is currently a big problem in EA, that we don't coordinate or think strategically enough. We focus on what individual donors can do, and thereby miss out on tackling the bigger problems. So, I like your analogy, but I don't want people to think that what we ought to have is a classic free market with self-interested actors. Rather, we should have a free market with (at least some) actors working for (at least some) common good(s).

In global health, one challenge is that there are a massive number of players, each with their own agendas. You've got developing countries, Western governments, Gates, traditional NGOs, EA, and many other players besides.

Only a small fraction of the funding is EA-aligned, so it's unclear how much benefit tighter EA coordination would bring. Moreover, my guess is that having so much of the EA funding routed through GiveWell has some coordinating effects (e.g., GW would likely know and react if two programs it recommended were duplicating efforts).

I think my argument will be even clearer if I talk about mitigating AI risk. Imagine if all AI safety orgs operated independently, even competing with each other. It would be (or arguably already is) a mess! There would be no 'open letter', just different people shouting separately. And surely AI safety could be advanced further if the existing orgs worked together better.

So yes, choice is good, but to some degree we are and should be working towards common goals.

I agree with one succinct, workable change based on your recommendations:

(1) One EVF board seat should be elected by GWWC members in good standing.
