I do research at Longview Philanthropy.
Previously a research scholar at FHI and assistant to Toby Ord. Philosophy at Cambridge before that.
I also do a podcast about EA called Hear This Idea.
I notice that I'm getting confused when I try to make the market analogy especially well, but I do think there's something valuable to it.
Caveat: I skim-read up to "To what extent is EA functioning differently from this right now?", so I may have missed important points, and I'm writing quickly.
Claims inspired by the analogy which I agree with:
However, there are aspects of the analogy which still feel a bit confusing to me (after ~10 mins of thinking), such that I'd want to resist claims that in some sense this "market for ways to do the most good" analogy should be a or the central way to conceptualise what EA is about. In particular:
More generally, if the claim is that this market analogy should be a or the central way to conceptualise what EA is about, then I just feel like the analogy misses most of what's important. It captures how transactions work between donors and orgs, and how orgs compete for funding. But it seems to me that it matters at least as much to understand what people are doing inside those orgs — what they are working on, how they are reasoning about them, why they are working on them, how the donors choose what to fund, and so on. Makes me think of this Herbert Simon quote.
Hopefully some of that makes sense. I think it's likely I got some economics-y points wrong and look forward to being corrected on them.
Thanks for that link! Since writing this post I have become aware of a bunch of exciting BOTEC-adjacent projects, especially from speaking with Adam Binks from Sage / Quantified Intuitions.
I'm curious about how far you and Tom have come in working on this.
I'm not actively working with Tom on this project, but I could put you in touch.
I think this is a good and important question. I also agree that humanity's predicament in 500 years is wildly unpredictable.
But there are some considerations that can guide our guess:
If you begin totally unsure whether the future is good or bad in expectation, then considerations like these might break the symmetry (while remaining entirely open to the possibility that the future is bad).
This post might also be useful; it recomplicates things by giving some considerations on the other side.
Looking forward to reading this. In the meantime, I notice that this post hasn't been linked and seems likely to be relevant:
Coherence arguments do not entail goal-directed behavior by Rohin Shah
I'd be pretty interested in an EA instance. If it were to happen then I guess it should happen soon, since it looks like a significant fraction of new accounts will be created in the next few weeks. Does anyone have expertise with this? I'd probably be able to provide some support in setting it up, but don't currently have the time to lead on doing this.
A list I'm considering for end-of-year donations, in no special order:
I'm also very interested in the best ways to help people affected by recent events, especially ways which are more scalable / accessible than supporting personal connections.
Sorry if I missed this in other comments, but one question I have is whether there are ways for small donors to support projects or individuals in the short term who have been thrown into uncertainty by the FTX collapse (such as people who were planning on the assumption that they would be receiving a regrant). I suppose it would be possible to donate to Nonlinear's emergency funding pot, or just to something like the EAIF / LTFF / SFF.
But I'm imagining that a major bottleneck on supporting these affected projects is just having capacity to evaluate them all. So I wonder about some kind of initiative where affected projects can choose to put some details on a public register/spreadsheet (e.g. a description of the project, how they've been affected, what amount of funding they're looking for, contact details). Then small donors can look through the register, evaluate projects which fit their areas of interest / experience, and reach out to them individually. It could be a living spreadsheet where entries are updated if their plans change or they receive funding. And maybe there could be some way for donors to coordinate around funding particular projects which each donor couldn't individually afford to fund, and which wouldn't run without some threshold amount. E.g. donors themselves could flag that they'd consider pitching in on some project if others were also interested.
A more sophisticated version of this could involve small donors putting donations into some kind of escrow managed by a trusted party that donates on people's behalf, and that trusted party shares information with donors about projects affected by FTX. That would help maintain some privacy / anonymity if some projects would prefer that, but at administrative cost. I'd guess this idea is too much work given the time-sensitivity of everything.
An 80-20 version is just to set up a form similar to Nonlinear's, but which feeds into a database which everyone can see, for projects happy to publicly share that they are seeking shortish-term funding to stay afloat / make good on their plans. Then small donors can reach out at their discretion. If this worked, then it might be a way to help 'funge' not just the money but also the time of grant evaluators at grantmaking orgs (and similar) which is spent evaluating small projects. It could also be a chance to support projects that you feel especially strongly about (and suspect that major grant evaluators won't share your level of interest).
I'm not sure how to feel about this idea overall. In particular, I feel misgivings about the public and uncoordinated nature of the whole thing, and also about the fact that typically it's a better division of labour for small donors to follow the recommendations of experienced grant investigators/evaluators. Decisions about who to fund, especially in times like these, are often very difficult and sensitive, and I worry about weird dynamics if they're made public.
Curious about people's thoughts, and I'd be happy to make this a shortform or post in the effective giving sub-forum if that seems useful.
Thanks for the response.
You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, contrasted to much of the charity world, where people don't tend to think of what they're doing as seeking out the most value for themselves or others. That sounds roughly right, but I don't think it follows that EA is best imagined or idealised as a kind of market. Though I'm not suggesting you claim that it does follow.
It also seems worth pointing out that in some sense there are literal markets for 'normal charity' interventions — like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most 'valuable' deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the "is this a market" test does not necessarily delineate your idealised version of EA from 'normal charity' alone. Again, not suggesting you make that exact claim, but I think it's worth getting clear on.
Instead, as you suggest, it's what the market is in that matters — in the case of EA we want a market for "things that do the most good". You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market where the buyers want to do the most good. Is that a fair summary of your view?
If so, I think I'm emphasising that descriptively the "...doing the most good" part may be more distinctive of the EA project than "EA is a market for..." Normatively I take you to want EA to be more like a competitive market, and there I think there are certainly features of competitive markets that seem good to move towards, but I'm also hesitant to make the market analogy, like, the central guide to how EA should change.
A couple of other points:
I still don't think the Hayekian motivation for markets carries over to the EA case, at least not as you've made the pitch. My (possibly poorly remembered) understanding was that markets are a useful way to aggregate information about individuals' preferences and affordances via the price discovery mechanism. It's true that the EA system as a whole (hopefully) discovers things about what are the best ways to help people, but not through the mechanism of price discovery! In fact, I'd say the way it uncovers information is just as similar to how a planner could uncover information — by commissioning research etc. Maybe I'm missing something here.[1]
I agree that the fact people are aiming at value for others doesn't invalidate the analogy. Indeed, people buy things for other people in normal markets very often.
On your point about intervention, I guess I'm confused about what it means to 'intervene' in the market for doing the most good, and who is the 'we' doing the intervening (who presumably are neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?
You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I'd say my view is more that I'm just kinda confused about exactly what the market analogy prescribes, and as such I'm wary of using the market metaphor as a guide. I'd probably endorse some of the things you say it recommends.
However, I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a very major part of what makes it distinctive. Part of what makes EA distinctive are all the features that identify those causes — a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities on the grounds of new evidence. Those things have much in common with free markets, but I don't think we need the market analogy to see their merit.
Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. But in the EA case, buyers care about altruistic value per dollar, but sellers (e.g. orgs) don't care about profit — they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?