I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Largely for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I’m also still working through my views on how I and the EA community could and should respond. I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response.
Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft.
Hi - thanks for writing this! A few things regarding your references to WWOTF:
The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives.
It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
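The comparison in the parenthetical can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative: the probability, the population size, and the unit weights are hypothetical numbers standing in for the argument’s quantities, not anything from the book.

```python
# Illustrative sketch of the lexical-view problem described above.
# Assumed numbers: a one-in-a-trillion-trillion-trillion chance of one
# suffering life, versus a guaranteed trillion lives of bliss, with
# each life weighted at one unit (all hypothetical).

p_suffering = 1e-36       # one in a trillion trillion trillion
blissful_lives = 1e12     # a trillion lives of bliss, guaranteed

# On a simple finite-weight expected-value view, preventing the tiny risk
# is worth only p_suffering units, while the guaranteed option is worth
# a trillion units -- so expected value overwhelmingly favours the bliss.
expected_badness_prevented = p_suffering * 1.0
value_of_bliss = blissful_lives * 1.0

print(value_of_bliss > expected_badness_prevented)  # True

# The lexical view nonetheless says to prevent the risk, overriding this
# comparison no matter how small p_suffering gets -- the verdict the
# paragraph above calls implausible.
```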
these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance
I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good. (A bit of philosophical pedantry partly explains why it’s in chapter 9, not 8: questions about happiness / suffering tradeoffs aren’t within the domain of population ethics, as they arise even in a fixed-population setting.)
In an earlier draft I talked at more length about relevant asymmetries (not just suffering vs happiness, but also objective goods vs objective bads, and risk-averse vs risk-seeking decision theories). It got cut just because it was adding complexity to an already-complex chapter and didn’t change the bottom-line conclusion of that part of the discussion. The same is true for moral uncertainty - under reasonable uncertainty, you end up asymmetric on happiness vs suffering and on objective goods vs objective bads, and you end up risk-averse. Again, the thrust of the relevant discussion happens in the section “The Case for Optimism”: “on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure... If this is correct, then in order to make the expected value of the future positive, the future not only needs to have more ‘goods’ than ‘bads’; it needs to have considerably more goods than bads.” Of course, there’s only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
It’s because we don’t get to control the price - that’s down to the publisher. I’d love us to set up a non-profit publishing house or imprint that could mean that we would have control over the price.
It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see response to Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophy arguments, and generally more of a willingness to engage in more speculative arguments. I’d have had more of the philosophy essay “In this chapter I argue that…” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics and on the value of the future, are the chapters that are most similar to how I’d have written the book if it were written for EAs - but even so, they’d still have been pretty different.
Yes, we got extensive advice on infohazards from experts on this and other areas, including from people who have both domain expertise and thought a lot about how to communicate about key ideas publicly given info hazard concerns. We were careful not to mention anything that isn’t already in the public discourse.
To be clear - these are a part of my non-EA life, not my EA life! I’m not sure if something similar would be a good idea to have as part of EA events - either way, I don’t think I can advise on that!
Some sorts of critical commentary are well worth engaging with (e.g. Kieran Setiya’s review of WWOTF); in other cases, where criticism is clearly misrepresentative or strawmanning, I think it’s often best not to engage.
I think it’s a combination of multiplicative factors. Very, very roughly:
To illustrate quantitatively (with normal weekly wellbeing on a -10 to +10 scale) with pretty made-up numbers, it feels like an average week used to be: 1 day: +4; 4 days: +1; 1 day: -1; 1 day: -6.
Now it feels like I’m much more stable, around +2 to +7. Negative days are pretty rare; removing them from my life makes a huge difference to my wellbeing.
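Taking the made-up numbers above at face value, the implied averages can be checked directly. This is just a sanity check on the illustrative figures, nothing more:

```python
# Rough check of the illustrative wellbeing numbers above (-10 to +10 scale).
# Old week: 1 day at +4, 4 days at +1, 1 day at -1, 1 day at -6.
old_week = [4, 1, 1, 1, 1, -1, -6]
old_avg = sum(old_week) / len(old_week)
print(round(old_avg, 2))  # 0.14 -- barely positive on average

# The new range quoted is roughly +2 to +7; even its low end
# comfortably beats the old weekly mean.
new_low = 2
print(new_low > old_avg)  # True
```

So on these numbers, removing the negative days moves the weekly average from near zero to solidly positive, which matches the “huge difference” described.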
I agree this isn’t the typical outcome for someone with depressive symptoms. I was lucky that I would continue to have high “self-efficacy” even when my mood was low, so I was able to put in effort to make my mood better. I’ve also been very lucky in other ways: I’ve been responsive to medication, and my personal and work life have both gone very well.