Now: Independent study; Radio Bostrom; Parfit Archive.
New: ✨ Comment Helper for Google Docs. ✨
Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, and more. Read my CV.
Also: Inbox When Ready; The Valmy.
(I made these webpages a couple of days after the FTX collapse. Buying domains is cheaper than therapy…)
Thoughts on “maximisation is perilous”:
(1) We could put more emphasis on the idea of “two-thirds utilitarianism”.
(2) I expect we could come up with a better name for “two-thirds utilitarianism” and a snappier way of describing the key thought. “Deep pragmatism” might work.
Thank you (again) for this.
I think this message should be emphasised much more in many EA and LT contexts, e.g. in the introductory materials on effectivealtruism.org and 80000hours.org.
As your paper points out, longtermist axiology probably changes the ranking between x-risk and catastrophic risk interventions in some cases. But there's lots of convergence, and in practice your ranked list of interventions won't change much (even if the gaps between them do... after you adjust for cluelessness, Pascal's mugging, etc.).
Some worry that if you're a fan of longtermist axiology, then this approach to comms is disingenuous. I strongly disagree: it's normal to start your comms by finding common ground and to elaborate on your full reasoning later on.
Andrew Leigh MP seems to agree. Here's the blurb from his recent book, "What's the Worst That Could Happen?":
Did you know that you’re more likely to die from a catastrophe than in a car crash? The odds that a typical US resident will die from a catastrophic event—for example, nuclear war, bioterrorism, or out-of-control artificial intelligence—have been estimated at 1 in 6. That’s fifteen times more likely than a fatal car crash and thirty-one times more likely than being murdered. In What’s the Worst That Could Happen?, Andrew Leigh looks at catastrophic risks and how to mitigate them, arguing provocatively that the rise of populist politics makes catastrophe more likely.
Thanks for the post.
I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).
These amounts are small.
Let's say the value of your time is $500 / hour.
I'm not sure it was worth taking the time to think this through so carefully.
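A quick back-of-the-envelope check of the numbers above (assuming the $240 goes to each organisation, which is my reading of the commitment):

\[
2 \times \$240 = \$480 = 24 \ \text{months} \times \$20/\text{month}, \qquad 1 \ \text{hour} \times \$500/\text{hour} > \$480.
\]

On those figures, a single hour of careful deliberation already costs more than the entire two-year offset.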
To be clear, I think concrete actions aimed at quality alignment research, or at AI policy that buys more time, are much more important than offsets.
Agree.
By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination: other people can see you really care about the issue because you've made a costly signal.
...
I won't dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.
Honestly, if someone told me they'd done this, my first thought would be "huh, they've taken their eye off the ball". My second would be "uh oh, they think it's a good idea to talk about ethical offsetting".
I think it's worth pricing in the possibility of reactions like these when reflecting on whether to take small actions of this kind for signalling purposes.
+1 to Geoffrey here.
I still think of EA as a youth movement, though this label is becoming less apt as the "founding cohort" matures.
It's a trope that the young are sometimes too quick to dismiss the wiser counsel of their elders.
I've witnessed many cases where, to my mind, people were (admirably) looking for good explicit arguments they could easily understand, but (regrettably) forgetting that things like inferential distance sometimes make it hard to understand the views of people who are wiser or more expert than they are.
I'm sure I've made this mistake too. That said: my intellectual style is fairly slow and conservative compared to many of my peers, and I'm often happy to trust inarticulate holistic judgements over apparently solid explicit arguments. These traits insulate me somewhat from this youthful failure mode, though they expose me to similarly grave errors in other directions :/
The CLTR Future Proof report has influenced UK government policy at the highest levels.
E.g. the UK "National AI Strategy" ends with a section on AGI risk, and says that the Office for AI should pay attention to it.
If you think the UN matters, then this seems good:
On September 10th 2021, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks.
What matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief.
This is a good expression of the crux.
For many people—including many philosophers—it seems odd to think that questions of justification have nothing to do with us and our origins.
This is why the question of "what are we doing, when we do philosophy?" is so important.
The pragmatist-naturalist perspective says something like:
We are clever beasts on an unremarkable planet orbiting an unremarkable star, etc. Over the long run, the patterns of thought we call justified are those which are adaptive (or are spandrels along for the ride).
To be clear: this perspective is compatible with having fruitful conversations about the norms of morality, scientific enquiry, and all the rest.
I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft in areas I don't know so much about more often than I'd otherwise have assumed.
This is what Cowen was doing with his original remark.
Have you visited the 80,000 Hours website recently?
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.
A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).
(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)
As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.
I’m glad you shared the J.S. Mill quote.
EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).
To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.
In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.
My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.
(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)