I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord. I studied philosophy at Cambridge before that.
I also do a podcast about EA called Hear This Idea.
(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")
I got a lot of value out of Guesstimate, and this (plus Squiggle itself) looks like a big step up. So thanks, and kudos!
(Also — both this new site and the Squiggle lang seem generally useful far beyond EA / x-risk contexts; e.g. for consultancies / policy planning / finance. I'd be interested to see if it catches on more widely.)
I also found no negative effects on my productivity.
This makes it sound to me like you think most of the value comes from the health/fitness benefits of generally being less sedentary during a working day, and little to no value comes from potential benefits to focus or productivity (except insofar as they're downstream of being healthier). Is that a fair summary?
Thanks for the response.
You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, contrasted to much of the charity world, where people don't tend to think of what they're doing as seeking out the most value for themselves or others. That sounds roughly right, but I don't think it follows that EA is best imagined or idealised as a kind of market. Though I'm not suggesting you claim that it does follow.
It also seems worth pointing out that in some sense there are literal markets for 'normal charity' interventions — like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most 'valuable' deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the "is this a market" test alone does not necessarily delineate your idealised version of EA from 'normal charity'. Again, not suggesting you make that exact claim, but I think it's worth getting clear on.
Instead, as you suggest, it's what the market is in that matters — in the case of EA we want a market for "things that do the most good". You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market where the buyers want to do the most good. Is that a fair summary of your view?
If so, I think I'm emphasising that descriptively the "...doing the most good" part may be more distinctive of the EA project than the "EA is a market for..." part. Normatively, I take you to want EA to be more like a competitive market; there are certainly features of competitive markets that seem good to move towards, but I'm also hesitant to make the market analogy, like, the central guide to how EA should change.
Couple other points:
I still don't think the Hayekian motivation for markets carries over to the EA case, at least not as you've made the pitch. My (possibly poorly remembered) understanding was that markets are a useful way to aggregate information about individuals' preferences and affordances via the price discovery mechanism. It's true that the EA system as a whole (hopefully) discovers things about what are the best ways to help people, but not through the mechanism of price discovery! In fact, I'd say the way it uncovers information is at least as similar to how a planner could uncover information — by commissioning research etc. Maybe I'm missing something here.[1]
I agree that the fact people are aiming at value for others doesn't invalidate the analogy. Indeed, people buy things for other people in normal markets very often.
On your point about intervention, I guess I'm confused about what it means to 'intervene' in the market for doing the most good, and who is the 'we' doing the intervening (who presumably are neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?
You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I'd say my view is more that I'm just kinda confused about exactly what the market analogy prescribes, and as such I'm wary of using the market metaphor as a guide. I'd probably endorse some of the things you say it recommends.
However, I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a very major part of what makes it distinctive. Part of what makes EA distinctive is the set of features that identify those causes — a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities on the grounds of new evidence. Those things have much in common with free markets, but I don't think we need the market analogy to see their merit.
Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. But in the EA case, buyers care about altruistic value per dollar, but sellers (e.g. orgs) don't care about profit — they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?
I notice that I'm getting confused when I try to make the market analogy especially precise, but I do think there's something valuable to it.
Caveat that I skim-read up to "To what extent is EA functioning differently from this right now?", so may have missed important points, and also I'm writing quickly.
Claims inspired by the analogy which I agree with:
However, there are aspects of the analogy which still feel a bit confusing to me (after ~10 mins of thinking), such that I'd want to resist claims that in some sense this "market for ways to do the most good" analogy should be a or the central way to conceptualise what EA is about. In particular:
More generally, if the claim is that this market analogy should be a or the central way to conceptualise what EA is about, then I just feel like the analogy misses most of what's important. It captures how transactions work between donors and orgs, and how orgs compete for funding. But it seems to me that it matters at least as much to understand what people are doing inside those orgs — what they are working on, how they are reasoning about it, why they are working on it, how the donors choose what to fund, and so on. Makes me think of this Herbert Simon quote.
Hopefully some of that makes sense. I think it's likely I got some economics-y points wrong and look forward to being corrected on them.
Thanks for that link! Since writing this post I have become aware of a bunch of exciting BOTEC-adjacent projects, especially from speaking with Adam Binks from Sage / Quantified Intuitions.
I'm curious about how far you and Tom have come in working on this.
I'm not actively working with Tom on this project, but I could put you in touch.
I think this is a good and important question. I also agree that humanity's predicament in 500 years is wildly unpredictable.
But there are some considerations that can guide our guess:
If you begin totally unsure whether the future is good or bad in expectation, then considerations like these might break the symmetry (while remaining entirely open to the possibility that the future is bad).
This post might also be useful; it recomplicates things by giving some considerations on the other side.
Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.