Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.
Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.
Thanks for asking! Still not entirely determined - I’ve been planning some time off over the winter, so I’ll revisit this in the new year.
I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.
I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response.
Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m sorry that I won’t be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: I think this is an excellent example of constructive critical engagement - I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft.
Hi - thanks for writing this! A few things regarding your references to WWOTF:
“The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)”
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
“this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives.”
It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views face major problems, but I don’t discuss those problems in the book. (Briefly: if you think that any amount of X outweighs any amount of Y, then you seem forced to believe that any probability of X, no matter how tiny, also outweighs any amount of Y. So you can either prevent a one-in-a-trillion-trillion-trillion chance of a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. That seems wrong, and I don’t think it holds up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
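To lay out the structure of that argument a bit more explicitly (a minimal sketch in notation of my own, not taken from the book): suppose the disvalue of a bad X is lexically prior to the value of a good Y, and combine that with ordinary expected-value reasoning about lotteries. Writing $d(X)$ for the disvalue of one instance of X, $v(Y)$ for the value of one instance of Y, and $\succ$ for “is to be preferred to”, the lexical view implies

$$
p \cdot d(X) \;\succ\; N \cdot v(Y) \qquad \text{for every probability } p > 0 \text{ and every number } N \text{ of goods.}
$$

Setting $p = 10^{-36}$ (one in a trillion trillion trillion) and $N = 10^{12}$ (a trillion) gives the case above: the lexical view prefers preventing the vanishingly small chance of one suffering life over guaranteeing a trillion lives of bliss.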
“these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance”
I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good. (A bit of philosophical pedantry partly explains why it’s in chapter 9, not 8: questions about happiness / suffering tradeoffs aren’t within the domain of population ethics, as they arise even in a fixed-population setting.)
In an earlier draft I talked at more length about relevant asymmetries (not just suffering vs happiness, but also objective goods vs objective bads, and risk-averse vs risk-seeking decision theories). It got cut just because it added complexity to an already-complex chapter and didn’t change the bottom-line conclusion of that part of the discussion. The same is true for moral uncertainty: under reasonable uncertainty, you end up asymmetric on happiness vs suffering and on objective goods vs objective bads, and you end up risk-averse. Again, the thrust of the relevant discussion happens in the section “The Case for Optimism”: “on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure... If this is correct, then in order to make the expected value of the future positive, the future not only needs to have more ‘goods’ than ‘bads’; it needs to have considerably more goods than bads.”
Of course, there's only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
It’s because we don’t get to control the price - that’s down to the publisher.
I’d love us to set up a non-profit publishing house or imprint, which would give us control over the price.
It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see my response to the Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophical argument, and generally a greater willingness to engage in speculative arguments. I’d have used more of the philosophy-essay “In this chapter I argue that...” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics and the value of the future, are the chapters most similar to how I’d have written the book for an EA audience - but even so, they’d still have been pretty different.
Yes, we got extensive advice on infohazards from experts in this and other areas, including from people who both have domain expertise and have thought a lot about how to communicate key ideas publicly given infohazard concerns. We were careful not to mention anything that isn’t already in the public discourse.
Sorry - done, thanks!