I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.
About me
I’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and a BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts, and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.
Who should read this blog?
The aim of the blog is to offer (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and (3) written from a perspective that shares a number of important views and methods with many effective altruists.
This blog might be for you if:
- You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.
- You are interested in learning more about the implications of academic research for views and practices in effective altruism.
- You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.
- You have a moderate amount of time to devote to reading and discussion (20–30 minutes per post).
- You don't mind reading series of overlapping posts.
This blog might not be for you if:
- You would like to know why someone who has little in common with effective altruists might be worried about the movement.
- You aren’t keen on philosophy, even when empirically grounded.
- You have a short amount of time to devote to reading.
- You like standalone posts and hate series.
Blog series
The blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.
- Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.
- Existential risk pessimism and the time of perils:
- Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).
- Part 2 looks at some failed solutions to the tension.
- Part 3 looks at a better solution: the Time of Perils Hypothesis.
- Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.
- Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.
- Parts 6-8 (coming soon) round out the paper and draw implications.
- Academics review What we owe the future: This series draws out insights from reviews of MacAskill’s What we owe the future by leading academics.
- Part 1 looks at Kieran Setiya’s review, focusing on population ethics.
- Part 2 (coming soon) looks at Richard Chappell’s review.
- Part 3 (coming soon) looks at Regina Rini’s review.
- Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.
- Part 1 introduces the series.
- Part 2 looks at Ord’s discussion of climate risk in The Precipice.
- Part 3 takes a first look at the Halstead report on climate risk.
- Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.
- Billionaire philanthropy: What is the role of billionaire philanthropists within the EA movement and within a democratic society? What should that role be?
- Part 1 introduces the series.
I’ll try to post at least once a week for the next few months. Comment below to tell me what sort of content you would like to see.
A disclaimer: I am writing in my personal capacity
I am writing this blog in my personal capacity. The views expressed in this blog are not the views of the Global Priorities Institute, or of Oxford University. In fact, many of my views diverge strongly from views accepted by some of my colleagues. Although many hands have helped me to shape this blog, the views expressed are mine and mine alone.
FAQ
Q: Is this just a way of making fun of effective altruism?
A: Absolutely not. In writing this blog, I am not trying to ridicule effective altruism, to convince you that effective altruism is worthless, to convince effective altruists to abandon the movement, or to contribute to the destruction of effective altruism.
I take effective altruism seriously. I have been employed for several years by the Global Priorities Institute, a research institute at Oxford University dedicated to foundational academic research on how to do good most effectively. I have organized almost a dozen workshops on global priorities research. I have presented my work at other events within the effective altruism community, including several EAG and EAGx conferences. I have consulted for Open Philanthropy, posted on the EA Forum, and won prizes for my posts.
A view that I share with effective altruists is that it is very important to learn to do good better. I will count myself successful if some of my posts help others to do good better.
Q: Why not just post on the EA Forum? Why is a new blog needed?
A: The EA Forum is an important venue for discussion among effective altruists. I’ve posted on the EA Forum in the past and won prizes for my posts.
As an academic, I aim to write for a broad audience. While I certainly hope that EAs will read and engage with my work (that’s why I’m posting here!), I also want to make my work accessible to others who might not usually read the EA Forum.
Q: I’d like to talk to you about X (something I liked; something I didn’t like; a guest post; etc.). How do I do that?
A: Post here or email me at david.thorstad@philosophy.ox.ac.uk. I don’t bite, I promise.
Everything else
Please comment below to let me know what you think and what you’d like to see. If you like the blog, consider subscribing, liking or sharing. If you don’t like the blog, my cat wrote it. If you really hate the blog, it was my neighbor’s cat.
Comments
Your blog's name invokes "ineffective altruism" and signals an intent to criticize effective altruism, but your focus appears to be the reification of prevailing views within the community with regard to existential risk from climate change. Your entire climate change analysis appears to be a summary of Halstead's report contrasted with Ord's work. You judge two EAs against each other, not two EAs against prevailing discussions of climate change dangers outside the community. I would like to read your own analysis of where both Ord and Halstead are wrong, given your research into climate change, since while anyone in EA can read Ord and Halstead, EAs have little to go on about the quality of Ord's or Halstead's research except the EA brand and the typical use of those authors as sources on climate change risks. Compared to mainstream researchers of climate change, neither author sees climate change as particularly threatening, and that is a contrast you could draw on.
On a separate topic, I would like to understand your own views on probabilism and Bayesian updating, if they are in any way different from EA recommendations of how to think about credences or risk.
Given that EA offers its own set of epistemic tools, its epistemic recommendations come from a small core of beliefs that EAs promulgate as part of the movement's identity. EA epistemics are nonstandard. To the extent that people adopt those beliefs, they also conform to the unofficial requirements of being part of the EA research or social community. It would be a welcome counterpoint, and would show good-faith interest in criticizing the community, for you to take on such core beliefs and point out their failings as you find them. After all, effectiveness rarely allows for maintaining social pretenses in the name of good epistemics. This would assist the community in evolving its epistemic tools, thereby improving the effectiveness of EA researchers. You could contextualize EA's existing epistemic tools or suggest new ones using your background in philosophy.
I would like to see more content critical of EA core beliefs on your blog, with the aim of helping the EA research community improve its work. Alternatively, I suggest a name change for your blog to remove the ironic reference to ineffective altruism. So long as you defend prevailing EA views on your blog, the name of the blog (ineffective altruism blog) misrepresents your opinion of prevailing EA views and carries unintended irony. An earnest blog title could serve you better.
Oh, I do! :)
On most topics relevant to this forum's readers, that is. For example, I haven't found a good conversation on longevity control, and I'm not sure how appropriate it is to explore here, but I will note, briefly, that once people can choose to extend their lives, there will be a few ways that they can choose to end their lives, only one of which is growing old. Life extension technology poses indirect ethical and social challenges, and widespread use of it might have surprising consequences.