I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.
About me
I’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and a BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.
Who should read this blog?
The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.
This blog might be for you if:
- You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.
- You are interested in learning more about the implications of academic research for views and practices in effective altruism.
- You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.
- You have a moderate amount of time to devote to reading and discussion (20–30 minutes per post).
- You don't mind reading series of overlapping posts.
This blog might not be for you if:
- You would like to know why someone who has little in common with effective altruists might be worried about the movement.
- You aren’t keen on philosophy, even when empirically grounded.
- You have a short amount of time to devote to reading.
- You like standalone posts and hate series.
Blog series
The blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.
- Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.
- Existential risk pessimism and the time of perils:
- Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).
- Part 2 looks at some failed solutions to the tension.
- Part 3 looks at a better solution: the Time of Perils Hypothesis.
- Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.
- Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.
- Parts 6-8 (coming soon) round out the paper and draw implications.
- Academics review What We Owe the Future: This series looks at reviews of MacAskill’s What We Owe the Future by leading academics and draws out insights from those reviews.
- Part 1 looks at Kieran Setiya’s review, focusing on population ethics.
- Part 2 (coming soon) looks at Richard Chappell’s review.
- Part 3 (coming soon) looks at Regina Rini’s review.
- Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.
- Part 1 introduces the series.
- Part 2 looks at Ord’s discussion of climate risk in The Precipice.
- Part 3 takes a first look at the Halstead report on climate risk.
- Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.
- Billionaire philanthropy: What is the role of billionaire philanthropists within the EA movement and within a democratic society? What should that role be?
- Part 1 introduces the series.
I’ll try to post at least once a week for the next few months. Comment below to tell me what sort of content you would like to see.
A disclaimer: I am writing in my personal capacity
I am writing this blog in my personal capacity. The views expressed in this blog are not the views of the Global Priorities Institute, or of Oxford University. In fact, many of my views diverge strongly from views accepted by some of my colleagues. Although many hands have helped me to shape this blog, the views expressed are mine and mine alone.
FAQ
Q: Is this just a way of making fun of effective altruism?
A: Absolutely not. In writing this blog, I am not trying to ridicule effective altruism, to convince you that effective altruism is worthless, to convince effective altruists to abandon the movement, or to contribute to the destruction of effective altruism.
I take effective altruism seriously. I have been employed for several years by the Global Priorities Institute, a research institute at Oxford University dedicated to foundational academic research on how to do good most effectively. I have organized almost a dozen workshops on global priorities research. I have presented my work at other events within the effective altruism community, including several EAG and EAGx conferences. I have consulted for Open Philanthropy, posted on the EA Forum, and won prizes for my posts.
A view that I share with effective altruists is that it is very important to learn to do good better. I will count myself successful if some of my posts help others to do good better.
Q: Why not just post on the EA Forum? Why is a new blog needed?
A: The EA Forum is an important venue for discussions among effective altruists. I’ve posted on the EA Forum in the past, and won prizes for my posts.
As an academic, I aim to write for a broad audience. While I certainly hope that EAs will read and engage with my work (that’s why I’m posting here!), I also want to make my work accessible to others who might not usually read the EA Forum.
Q: I’d like to talk to you about X (something I liked; something I didn’t like; a guest post; etc.). How do I do that?
A: Post here or email me at david.thorstad@philosophy.ox.ac.uk. I don’t bite, I promise.
Everything else
Please comment below to let me know what you think and what you’d like to see. If you like the blog, consider subscribing, liking or sharing. If you don’t like the blog, my cat wrote it. If you really hate the blog, it was my neighbor’s cat.
There is some controversy in the mainstream about economic estimates of damages from climate destruction. You might find more contrast and difference if you look outside EA and economics for information on climate destruction.
You distinguish catastrophic impacts from existential impacts. I'm conflicted about the distinction you draw, though I noted the same conflict in Toby Ord's discussion: he seems to think a surviving city is sufficient to consider humanity "not extinct". While I agree with you both, I think these distinctions do not motivate many differences in proactive response; that is, whether a danger is catastrophic, existential, or extinction-level, it's still pretty bad, and recommendations for change or effort to avoid lesser dangers are typically in line with the recommendations to avoid greater dangers. Furthermore, a climate catastrophe does increase the risk of human extinction, given that climate change worsens progressively over decades, even after all anthropogenic GHG production has stopped. I would like to learn more about your thoughts on those differences, particularly how they influence your ethical deliberations about policy changes in the present.
I'm interested in your critical thoughts on:
I've done my best on this forum to distinguish my point of view from EAs' wherever it was obvious that I disagreed. I've also followed the work of others here who hold substantially different points of view from the EA majority (for example, about longtermism). If your disagreements are more subtle than mine, or if you would disagree with me on most things, I'm not one to suggest topics that you and I agree on. But the general topics can still be addressed even though we disagree. After all, I'm nobody important, but the topics are important.
If you do not take an outsider's point of view most of the time, then there's less need to punch things up and more need to articulate the nuanced differences you have, as well as to advocate for the EA point of view wherever you support it. I would still like to read your thoughts from a perspective informed by views outside of EA, as far outside as possible, whether from philosophers who would strongly disagree with EA or from other experts or fields that take a very different point of view than EA's.
I have advocated for an alternative approach to credences: to treat them as binary beliefs, or as subject to constraints (nuance) as one gains knowledge that contradicts some of their elements. I have also advocated for an alternative approach to predictions, one of preconditions leading to consequences, where the predictive work involved is identifying preconditions with typical consequences. Identifying preconditions in that model involves matching actual contexts to prototypical contexts, with the type of match determining which futures (plausible, expected, or optional, i.e., action-decided) are predictable from the match's result. My sources for that model were not typical for the EA community, but I did offer it here.
If you can do something similar with knowledge of your own, that would interest me. Any tools that are very different but have utility interest me, as does how you might contextualize current epistemic tools, as I said before.
Thanks! :)