I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord. Philosophy at Cambridge before that.
I also do a podcast about EA called Hear This Idea.
Copying a comment from Substack: If offence and defence both get faster, but all the relative speeds stay the same, I don’t see how that favours offence (e.g. we get ICBMs, but the same rocketry + guidance etc. tech means missile defence gets faster at the same rate). But ideas like this make sense, e.g. if there are any fixed lags in defence (e.g. humans don’t get much faster at responding but need to be involved in defensive moves) then speed favours offence in that respect.
That is to say there could be a 'faster is different' effect, where in the AI case things might move too chaotically fast — faster than the human-friendly timescales of previous tech — to effectively defend. For instance, your model of cybersecurity might be a kind of cat-and-mouse game, where defenders are always on the back foot looking for exploits, but they patch them with a small (fixed) time lag. The lag might be insignificant historically, until the absolute lag begins to matter. Not sure I buy this though.
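One way to sketch that fixed-lag story (my own toy model with made-up numbers, via Little's law: items in flight = arrival rate × time in system): if exploit discovery speeds up with the technology, but the patch lag is bottlenecked by humans, then the number of exploits open at any moment grows linearly with the speedup.

```python
def open_exploits(speedup, base_rate=1.0, human_lag=1.0):
    """Expected number of unpatched exploits at any moment.
    Exploits are found at base_rate * speedup per unit time; each stays
    open for a fixed human_lag before being patched (Little's law)."""
    return speedup * base_rate * human_lag

# If defenders sped up equally, the lag would shrink to human_lag / speedup
# and open_exploits would stay flat; the fixed human lag breaks that symmetry.
for s in (1, 10, 100):
    print(s, open_exploits(s))
```

The interesting cases are the ones where the lag was always there but only starts to matter once the absolute pace gets high enough.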
A related vague theme is that more powerful tech in some sense ‘turns up the volatility/variance’. And then maybe there’s some ‘risk of ruin’ asymmetry if you could dip below a point that’s irrecoverable, but can’t rise irrecoverably above a point. Going all in on such risky bets can still be good on expected value grounds, while also making it much more likely that you get wiped out, which is the thing at stake.
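The 'risk of ruin' asymmetry can be made concrete with a toy simulation (my own illustration, numbers invented for the example): a bet that maximises expected value each round, but where one loss is irrecoverable, leaves you almost certainly wiped out even while the average outcome looks great.

```python
import random

def simulate(rounds=10, trials=10_000, seed=0):
    """Toy gamble: each round, wealth triples with probability 0.6,
    otherwise it drops to zero and stays there (irrecoverable ruin).
    The per-round EV multiplier is 0.6 * 3 = 1.8, so going all in
    every round maximises expected wealth."""
    rng = random.Random(seed)
    ruined = 0
    total_wealth = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < 0.6:
                wealth *= 3.0
            else:
                wealth = 0.0  # dipped below the irrecoverable point
                break
        ruined += wealth == 0.0
        total_wealth += wealth
    return ruined / trials, total_wealth / trials

ruin_prob, mean_wealth = simulate()
# ruin_prob is ~0.99 (only 0.6**10 ≈ 0.6% of runs survive),
# yet mean_wealth ends up well above the starting stake of 1.0
```

So on expected-value grounds the all-in strategy looks best, while near-certain ruin is exactly the thing at stake.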
Also, embarrassingly, I realise I don't have a very good sense of how exactly people operationalise the 'offence-defence balance'. One way could be something like 'cost to attacker of doing $1M of damage in equilibrium', or in terms of relative spending like Garfinkel and Dafoe do ("if investments into cybersecurity and into cyberattacks both double, should we expect successful attacks to become more or less feasible"). Or maybe something about the cost, per unit of attacker spending, of holding on to some resource (or the cost, per unit of defender spending, of seizing it).
This is important because I don't currently know how to say that some technology is more or less defence-dominant than another, other than in a hand-wavey, intuitive way. But in hand-wavey terms it sure seems like bioweapons are more offence-dominant than, say, fighter planes. Because it's already the case that you need to spend a lot of money to prevent most of the damage someone could cause with not much money at all.
I see the AI stories — at least the ones I find most compelling — as being kinda openly idiosyncratic and unprecedented. The prior from previous new tech very much points against them, as you show. But the claim is just: yes, but we have stories about why things are different this time ¯\_(ツ)_/¯
What a great resource, thanks for putting it together!
Opinionated lists like this feel significantly more useful than comprehensive but unordered lists of relevant resources, because: (i) for most literatures, you're likely to get most of the good insights from reading a small standout minority of everything written; and (ii) it's often not obvious to an outsider which resources are best in this respect. I hadn't heard of many of the books you rate highly.
Incidentally: consider reformatting the papers to not be headers? It makes the navigation bar feel cluttered to me.
Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.
(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")
Thanks for these details! I updated the relevant paragraph to include them.
I got a lot of value out of Guesstimate, and this (plus Squiggle itself) looks like a big step up. So thanks, and kudos!
(Also — both this new site and the Squiggle lang seem generally useful far beyond EA / x-risk contexts; e.g. for consultancies / policy planning / finance. I'd be interested to see if it catches on more widely.)
I also found no negative effects on my productivity.
This makes it sound to me like you think most of the value comes from the health/fitness benefits of generally being less sedentary during a working day, and little to no value comes from potential benefits to focus or productivity (except insofar as they're downstream of being healthier). Is that a fair summary?
Thanks for the response.
You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, in contrast to much of the charity world, where people don't tend to think of what they're doing in those terms. That sounds roughly right, but I don't think it follows that EA is best imagined or idealised as a kind of market. Though I'm not suggesting you claim that it does follow.
It also seems worth pointing out that in some sense there are literal markets for 'normal charity' interventions — like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most 'valuable' deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the "is this a market" test does not by itself delineate your idealised version of EA from 'normal charity'. Again, not suggesting you make that exact claim, but I think it's worth getting clear on.
Instead, as you suggest, it's what the market is in that matters — in the case of EA we want a market for "things that do the most good". You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market where the buyers want to do the most good. Is that a fair summary of your view?
If so, I think I'm emphasising that descriptively the "...doing the most good" part may be more distinctive of the EA project than "EA is a market for..." Normatively I take you to want EA to be more like a competitive market, and there I think there are certainly features of competitive markets that seem good to move towards, but I'm also hesitant to make the market analogy, like, the central guide to how EA should change.
Couple other points:
I still don't think the Hayekian motivation for markets carries over to the EA case, at least not as you've made the pitch. My (possibly poorly remembered) understanding is that markets are a useful way to aggregate information about individuals' preferences and affordances via the price discovery mechanism. It's true that the EA system as a whole (hopefully) discovers things about what the best ways to help people are, but not through the mechanism of price discovery! In fact, I'd say the way it uncovers information looks much the same as how a planner could uncover it — by commissioning research etc. Maybe I'm missing something here.
I agree that the fact people are aiming at value for others doesn't invalidate the analogy. Indeed, people buy things for other people in normal markets very often.
On your point about intervention, I guess I'm confused about what it means to 'intervene' in the market for doing the most good, and who is the 'we' doing the intervening (who presumably are neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?
You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I'd say my view is more that I'm just kinda confused about exactly what the market analogy prescribes, and as such I'm wary of using the market metaphor as a guide. I'd probably endorse some of the things you say it recommends.
However, I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a very major part of what makes it distinctive. Part of what makes EA distinctive are all the features that identify those causes — a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities on the grounds of new evidence. Those things have much in common with free markets, but I don't think we need the market analogy to see their merit.
Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. But in the EA case, buyers care about altruistic value per dollar, but sellers (e.g. orgs) don't care about profit — they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?
I notice that I'm getting confused when I try to make the market analogy especially well, but I do think there's something valuable to it.
Caveat that I skim-read up to "To what extent is EA functioning differently from this right now?", so may have missed important points, and also I'm writing quickly.
Claims inspired by the analogy which I agree with:
However, there are aspects of the analogy which still feel a bit confusing to me (after ~10 mins of thinking), such that I'd want to resist claims that in some sense this "market for ways to do the most good" analogy should be a or the central way to conceptualise what EA is about. In particular:
More generally, if the claim is that this market analogy should be a or the central way to conceptualise what EA is about, then I just feel like the analogy misses most of what's important. It captures how transactions work between donors and orgs, and how orgs compete for funding. But it seems to me that it matters at least as much to understand what people are doing inside those orgs — what problems they are working on, how they are reasoning about those problems, why they chose them, how the donors choose what to fund, and so on. Makes me think of this Herbert Simon quote.
Hopefully some of that makes sense. I think it's likely I got some economics-y points wrong and look forward to being corrected on them.
Thanks for that link! Since writing this post I have become aware of a bunch of exciting BOTEC-adjacent projects, especially from speaking with Adam Binks from Sage / Quantified Intuitions.
I'm curious about how far you and Tom have come in working on this.
I'm not actively working with Tom on this project, but I could put you in touch.