finm

Researcher @ Longview Philanthropy
2331 karma · Joined Apr 2019 · Working (0-5 years) · Oxford, UK
www.finmoorhouse.com/writing

Bio

I do research at Longview Philanthropy. Previously I was a research scholar at FHI and assistant to Toby Ord. Before that, I studied philosophy at Cambridge.

I also do a podcast about EA called Hear This Idea.

www.hearthisidea.com

Posts (35)

Comments (126)

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Thanks for these details! I updated the relevant paragraph to include them.

I got a lot of value out of Guesstimate, and this (plus Squiggle itself) looks like a big step up. So thanks, and kudos!

(Also — both this new site and the Squiggle lang seem generally useful far beyond EA / x-risk contexts; e.g. for consultancies / policy planning / finance. I'd be interested to see if it catches on more widely.)

> I also found no negative effects on my productivity.

This makes it sound to me like you think most of the value comes from the health/fitness benefits of generally being less sedentary during the working day, and little to no value comes from potential benefits to focus or productivity (except insofar as they're downstream of being healthier). Is that a fair summary?

Thanks for the response.

You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, in contrast to much of the charity world, where people don't tend to think of what they're doing in those terms. That sounds roughly right, but I don't think it follows that EA is best imagined or idealised as a kind of market (though I'm not suggesting you claim that it does follow).

It also seems worth pointing out that in some sense there are literal markets for 'normal charity' interventions — like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most 'valuable' deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the "is this a market" test does not by itself delineate your idealised version of EA from 'normal charity'. Again, I'm not suggesting you make that exact claim, but I think it's worth getting clear on.

Instead, as you suggest, it's what the market is in that matters — in the case of EA we want a market for "things that do the most good". You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than those of donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market, where the buyers want to do the most good. Is that a fair summary of your view?

If so, I think I'm emphasising that, descriptively, the "...doing the most good" part may be more distinctive of the EA project than the "EA is a market for..." part. Normatively, I take you to want EA to be more like a competitive market. There are certainly features of competitive markets that seem good to move towards, but I'm hesitant to make the market analogy the central guide to how EA should change.

A couple of other points:

I still don't think the Hayekian motivation for markets carries over to the EA case, at least not as you've made the pitch. My (possibly poorly remembered) understanding was that markets are a useful way to aggregate information about individuals' preferences and affordances via the price discovery mechanism. It's true that the EA system as a whole (hopefully) discovers things about the best ways to help people, but not through the mechanism of price discovery! In fact, I'd say the way it uncovers information is much like the way a planner could — by commissioning research etc. Maybe I'm missing something here.[1]

I agree that the fact people are aiming at value for others doesn't invalidate the analogy. Indeed, people buy things for other people in normal markets very often.

On your point about intervention, I guess I'm confused about what it means to 'intervene' in the market for doing the most good, and about who the 'we' doing the intervening is (presumably neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?

You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I'd say my view is more that I'm just kinda confused about exactly what the market analogy prescribes, and as such I'm wary of using the market metaphor as a guide. I'd probably endorse some of the things you say it recommends.

However, I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a major part of what makes it distinctive. Part of what makes EA distinctive is the set of features that identify those causes — a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities in light of new evidence. Those things have much in common with free markets, but I don't think we need the market analogy to see their merit.

  1. ^

    Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. But in the EA case, buyers care about altruistic value per dollar, but sellers (e.g. orgs) don't care about profit — they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?

I notice that I'm getting confused when I try to make the market analogy especially well, but I do think there's something valuable to it.

Caveat that I skim-read up to "To what extent is EA functioning differently from this right now?", so may have missed important points, and also I'm writing quickly.

Claims inspired by the analogy which I agree with:

  • Various kinds of competition between EA-oriented orgs are good: competition for hires, competition for funding, and competition for kinds of reputation
    • And I think this is true roughly for the same reason that competition between for-profit firms is good: it imposes a pressure on orgs/firms to innovate to get some edge over their competitors, which causes the sector as a whole to innovate
    • I think it is also good for there to be some pressure for orgs to fold, or at least to fold specific projects, when they're not having the impact they hoped for. When a firm folds, that's bad for its employees in the short run; but an environment where the least productive firms can go bust raises average productivity across firms
      • If you don't allow many projects to fail, that could mean (i) that the ecosystem is insufficiently risk-tolerant; or (ii) the ecosystem is inefficiently sustaining failed projects on life-support, in a way which wouldn't happen in a free market
      • Here's a commendable example of an org wrapping up a program because of disappointing empirical results. Seems good to celebrate stuff like this and make sure the incentives are there for such decisions to be made when best
    • More concretely: I don't think we need to always assume that it's not worth starting an org working on X if an org already exists to work on X (e.g. I think it's cool that Probably Good exists as well as 80k)
  • Many things that make standard markets inefficient are also bad for the EA ecosystem. You list "corruption, nepotism, arbitrariness, dishonesty" and those do all sound like things which shouldn't exist within EA
  • It would be good if there were more large donors in EA (largely because this would mean more money going to important causes)
  • It's often good for EA orgs which provide a service to other EA orgs to charge those orgs directly, rather than rely on grant money themselves to provide the service for free. And perhaps this should be more common
    • For roughly the same reason that centrally planned economies are worse than free markets at naturally scaling down services which aren't providing much value, and scaling up those which are

However, there are aspects of the analogy which still feel a bit confusing to me (after ~10 mins of thinking), such that I'd want to resist claims that in some sense this "market for ways to do the most good" analogy should be a or the central way to conceptualise what EA is about. In particular:

  • As Ben West points out, the consumers in this analogy are not the beneficiaries. The Hayekian story about what makes markets indispensable is that they aggregate preferences across many buyers and sellers more effectively than any planner could. But that story doesn't go through in the analogous case, because the buyers (donors) are buying on behalf of others
    • Indeed, this is a major reason to expect that 'markets' for charitable interventions are inefficient with respect to actual impact, and thus a major insight behind EA!
    • Another complication is that in commissioning research rather than on-the-ground interventions, the donors are doing something like buying information to better inform their own preferences. I don't know how this maps onto the standard market case (maybe it does)
  • Seems to me that the EA case might be more analogous to a labour market than a product market (since donors are more like employers than people shopping at a farmers market). Much of the analogy goes through with this change but not all (e.g. labour supply curves are often kind of funky)
  • I'm less clear on why monopsony is bad specifically for reasons inspired by the market analogy. My impression of the major reason why monopsonies are bad is a bit different from yours —
    • Imagine there's one employer facing an upward-sloping labour supply curve and paying the same wage to everyone. Then the profit-maximising wage for a monopsonist can be lower than the competitive equilibrium, leading to a deadweight loss (e.g. more unemployment and lower wages). And it's the deadweight loss that is the bad thing (a toy numeric sketch follows this list)
    • But EA employers aren't maximising profit for themselves — they're mostly nonprofits!
    • You could make the analogy work better by treating impact as the donor's 'profit'. I'm confused about exactly how you'd model this, and would be interested if someone who knows economics had thoughts. But it just seems intuitive to me that the analogous deadweight-loss reason to avoid monopsony doesn't straightforwardly carry over (minimally, the impartial donor could just choose to pay the competitive wage)
  • Competitive markets can involve some behaviour which is not directly productive, but which helps companies get a leg-up on one another (such that many or all companies involved would prefer that behaviour weren't an option for anyone). One example is advertising (advertising can be useful for other reasons; I mostly have in mind "Pepsi vs Coke" style advertising). I don't like the idea of more of this kind of competitive advertising-type behaviour in EA
    • Edit: this is an example of imperfect competition, thanks to yefreitor for pointing out
  • Companies in competition won't share valuable proprietary information with one another for obvious reasons. But I think it's often really good that EA orgs share research insights and other kinds of advice, even when not sharing that information could have given the org that generated it a leg-up on other orgs
    • Indeed, I think this mutual supportiveness is a good feature of the EA community on the whole, and could account for some of its successes
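
To illustrate the deadweight-loss point in the monopsony bullet above, here's a minimal numeric sketch. It's my own toy model with made-up numbers (a linear labour supply curve and a fixed value v per hire), not anything from the post:

```python
# Toy monopsony model: one employer facing a linear labour supply
# curve w(L) = a + b*L, where each hire produces value v for the
# employer. All numbers are made up for illustration.

a, b, v = 10.0, 1.0, 30.0  # supply intercept, supply slope, value per hire

# Competitive benchmark: the wage gets bid up to the marginal product v,
# and employment is where the supply curve meets that wage.
L_comp = (v - a) / b
w_comp = v

# Monopsonist: choose L to maximise profit (v - w(L)) * L.
# First-order condition: v - a - 2*b*L = 0  =>  L = (v - a) / (2*b).
L_mono = (v - a) / (2 * b)
w_mono = a + b * L_mono  # wage read off the supply curve

# Deadweight loss: surplus lost on the hires that no longer happen,
# i.e. the triangle between v and the supply curve from L_mono to L_comp.
dwl = 0.5 * (L_comp - L_mono) * (v - w_mono)

print(f"competitive: L = {L_comp:g}, w = {w_comp:g}")  # L = 20, w = 30
print(f"monopsony:   L = {L_mono:g}, w = {w_mono:g}")  # L = 10, w = 20
print(f"deadweight loss = {dwl:g}")                    # 50
```

The monopsonist hires half as many people at a lower wage, and the surplus lost on the hires that don't happen is the deadweight loss. The open question from the bullet above is whether an impact-maximising funder actually faces this incentive, since the 'profit' term (v - w) isn't money the funder keeps.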

More generally, if the claim is that this market analogy should be a or the central way to conceptualise what EA is about, then I just feel like the analogy misses most of what's important. It captures how transactions work between donors and orgs, and how orgs compete for funding. But it seems to me that it matters at least as much to understand what people are doing inside those orgs — what they are working on, why they are working on it, how they are reasoning about it, how donors choose what to fund, and so on. It makes me think of this Herbert Simon quote.

Hopefully some of that makes sense. I think it's likely I got some economics-y points wrong and look forward to being corrected on them.

Thanks for that link! Since writing this post I have become aware of a bunch of exciting BOTEC-adjacent projects, especially from speaking with Adam Binks from Sage / Quantified Intuitions.

> I'm curious about how far you and Tom have come in working on this.

I'm not actively working with Tom on this project, but I could put you in touch.

finm · 8mo ago

I think Ajeya and Kelsey are among the very best communicators (and researchers) on issues around AI alignment (e.g. 1, 2); so it's cool that you've joined forces for this. Excited for future posts!

Answer by finm · Feb 25, 2023

I think this is a good and important question. I also agree that humanity's predicament in 500 years is wildly unpredictable.

But there are some considerations that can guide our guess:

  • Almost everyone wants to improve their own lives; few people want to make their own lives worse for the sake of it
  • Some people want to improve the lives of others for the sake of it; few people want to harm others for the sake of it
  • Technological progress tends to enable people to get more of what they want; in this case the things that improve their lives
  • If humans are still around in 500 years, we should expect them to be more technologically advanced — since it seems easier to learn new capabilities than to entirely forget old ones

If you begin totally unsure whether the future is good or bad in expectation, then considerations like these might break the symmetry (while you remain entirely open to the possibility that the future is bad).

This post might also be useful; it recomplicates things by giving some considerations on the other side.
