Thanks for all the questions, all - I’m going to wrap up here! Maybe I'll do this again in the future, hopefully others will too!
Hi,
I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I’ll lead by example. (If it goes well, hopefully others will try it out too.)
Below I’ve written out what I’m currently working on. Please ask any questions you like, about anything: I’ll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I’m hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.
If you don't want to post your question publicly or non-anonymously (e.g. you're asking a “Why are you such a jerk?” sort of question), or if you don’t have a Forum account, you can use this Google form.
What I’m up to
Book
My main project is a general-audience book on longtermism. It’s coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I’m currently using is What We Owe The Future.
It’ll hopefully complement Toby Ord’s forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view, but without relying heavily on them.
In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare.
Roughly, I’m dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I’ve given the publishers a deadline of March 2021 for submission; if I meet that, the book would come out in late 2021 or early 2022. I’m planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.
My academic book, Moral Uncertainty, (co-authored with Toby Ord and Krister Bykvist) should come out early next year: it’s been submitted, but OUP have been exceptionally slow in processing it. It’s not radically different from my dissertation.
Global Priorities Institute
I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:
- The case for longtermism, with Hilary Greaves. It’s making the core case for strong longtermism, arguing that it’s entailed by a wide variety of moral and decision-theoretic views.
- The Evidentialist’s Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
- A paper, with Tyler John, exploring the political philosophy of age-weighted voting.
I have various other draft papers, but have put them on the back burner for the time being while I work on the book.
Forethought Foundation
Forethought is a sister organisation to GPI, which I take responsibility for: it’s legally part of CEA and independent from the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.
Utilitarianism.net
Darius Meissner and I (with help from others, including Aron Vallinder, Pablo Stafforini and James Aung) are creating an introduction to classical utilitarianism at utilitarianism.net. Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism’, ‘givewell’, or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity. We aim to put the website online in early October.
Centre for Effective Altruism
We’re down to two very promising candidates in our CEO search; this continues to take up a significant chunk of my time.
80,000 Hours
I meet regularly with Ben and others at 80,000 Hours, but I’m currently considerably less involved with 80k strategy and decision-making than I am with CEA.
Other
I still take on select media, especially podcasts, and select speaking engagements, such as for the Giving Pledge a few months ago.
I’ve been taking more vacation time than I used to (planning six weeks in total this year), and I’ve been dealing on and off with chronic migraines. I’m not sure if the additional vacation time has decreased or increased my overall productivity, but the migraines have decreased it by quite a bit.
I am continuing to try (and often fail) to become more focused in what work projects I take on. My long-run career aim is to straddle the gap between research communities and the wider world, representing the ideas of effective altruism and longtermism. This pushes me in the direction of prioritising research, writing, and select media, and I’ve made progress in that direction, but my time is still more split than I'd like.
I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".
That base rate seems pretty low. And it isn't even quite what we're talking about - we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI takeoff this century" seems not unreasonable to me. (You could, of course, believe that there is concrete evidence on AGI to justify different credences.)
On a different note, I sometimes find the terminology of "no x-risk", "going well" etc. unhelpful. It seems more useful to me to talk about concrete outcomes and to separate these from normative judgments. For instance, I believe that extinction through AI misalignment is very unlikely. However, I'm quite uncertain about whether people in 2019, if you handed them a crystal ball that shows what will happen (regarding AI), would generally think that things are "going well", e.g. because people might disapprove of value drift or influence drift. (The future will plausibly be quite alien to us in many ways.) And finally, in terms of my personal values, the top priority is to avoid risks of astronomical suffering (s-risks), which is another matter altogether. But I wouldn't equate this with things "going well", as that's a normative judgment, and I think EA should be as inclusive as possible towards different moral perspectives.