I’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.)
There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.
About this blog
The blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.
One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reason, and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists, including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.
I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.
About this document
The blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers.
Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.
Series 1: Academic papers
The purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.
Sub-series A: Existential risk pessimism and the time of perils
This series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.
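The tension can be illustrated with a simple back-of-the-envelope model (my own illustrative sketch, not the paper's exact formalism). Suppose each century of human survival delivers value $v$, and humanity faces a constant per-century risk of extinction $r$. Then the expected value of the future is a geometric sum:

```latex
\mathbb{E}[V] \;=\; \sum_{t=1}^{\infty} (1-r)^{t}\, v \;=\; \frac{1-r}{r}\, v
```

On Pessimism, $r$ is high; with $r = 0.2$, for instance, $\mathbb{E}[V] = 4v$, i.e. only about four centuries' worth of value in expectation. So even eliminating this century's risk entirely adds only a modest amount of value, in tension with the Astronomical Value Thesis, unless risk drops sharply after the present era, which is what the Time of Perils hypothesis asserts.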
Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.
Sub-series B: The good it promises
This series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.
Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J. Adams's care-ethical approach.
Series 2: Academics review WWOTF
Will MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future.
Status: Temporarily paused. Part 1 looks at Kieran Setiya’s review, focusing on population ethics. Part 2 looks at Richard Chappell’s review, focusing on the value of future people, longtermism and total utilitarianism, and existential risk. Part 3 looks at Regina Rini's review, focusing on demandingness, cluelessness, and inscrutability.
The gold standard for an academic book review is a review published in a leading scholarly journal. I will resume this series as journals begin to publish reviews of What we owe the future, provided the reviews are of sufficiently good quality.
Series 3: Billionaire philanthropy
Effective altruism relies increasingly on a few billionaire donors to sustain its operations. What is the role of billionaire philanthropists within effective altruism and within society? What should that role be? In this series, I ask what drives billionaire philanthropists, how they are taxed and regulated, and what sorts of influence they should be allowed to wield within a democratic society.
Status: Ongoing. Part 1 introduces the series. Part 2 looks at the uneasy relationship between billionaire philanthropy and democracy. Part 3 examines the project of patient philanthropy, which seeks to create billionaire foundations by holding money over a long period of time. Part 4 brings the focus back to philanthropists themselves, focusing on their motivations for giving. Part 5 discusses the sources of wealth donated to effective altruist organizations, focusing on moral constraints that may arise from the nature of these sources.
Two more posts have been drafted, focusing on wasteful spending (tentatively, Part 6) and donor discretion (tentatively, Part 7).
Series 4: Belonging
Who is or can be an effective altruist? Whose voices will be heard? Who feels that they belong within effective altruism, and who feels marginalized, uncomfortable, or mistreated? This series discusses questions of inclusion and belonging within and around the effective altruism movement, with a focus on identifying avenues for positive and lasting change.
Status: Ongoing. I hope I won't have to write too many more of these. Parts 1-3 discussed the Bostrom email (Part 1), the community’s reaction to that email (Part 2), and the prospects for reform (Part 3). Part 4 discussed the TIME Magazine article on sexual misconduct within effective altruism.
Series 5: Epistemics
Effective altruists use the term "epistemics" to describe practices that shape knowledge, belief and opinion within a community. This series focuses on areas in which community epistemics could be productively improved.
Status: Ongoing (early stages). Part 1 introduces the series, briefly discussing the influence of money on the epistemic direction of the field; the nature and importance of publication practices within the movement; and the proper role of deference and authority.
Two more posts have been drafted, focusing on the use of examples within arguments (tentatively, Part 2) and the role of peer review (tentatively, Part 3). I hope to write many more.
Series 6: Exaggerating the risks
Effective altruists give alarmingly high estimates of the levels of existential risk facing humanity today. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.
Status: Ongoing. Part 1 introduces the series. Parts 2-5 focus on climate risk. I argue that Toby Ord's estimate of climate risk is undersupported (Part 2), then use the Halstead report on climate change and longtermism to argue that there is no credible scientific basis for large estimates of near-term climate risk (Parts 3-5). Part 3 looks at crop failure and heat stress. Part 4 looks at sea level rise, tipping points, and paleoclimate data. Part 5 looks at moist and runaway greenhouse effects, then draws lessons from this discussion.
Parts 6-8 focus on AI risk. I review the Carlsmith report on power-seeking AI. Part 6 introduces the report and makes some general remarks on AI risk. Part 7 discusses instrumental convergence. Part 8 will wrap up the discussion of the Carlsmith report.
In the future, I plan to discuss other risks, including biorisk. I also plan to reflect on old risk arguments which have not aged well, including early attempts to ground the singularity hypothesis in the idea of self-replicating nanotechnology. I will probably return at some point to AI risk, though I do not want to give as much focus to AI risk as it has assumed in some recent discussions by effective altruists.
I am not entirely sure which series I will add in the future. I am interested to hear what series you might be keen on reading. Here are some of my tentative plans right now.
Academic papers: New sub-series
My original intention in writing this blog was to focus primarily on academic papers and books, because my comparative advantage lies in discussing these. I think that I may have drifted too far away from discussing these in recent months.
In the future, I hope to start new sub-series on three of my own papers. The first, "Against the singularity hypothesis", argues that the singularity hypothesis is less likely than many take it to be.
The second, "The scope of longtermism," argues that longtermism may correctly describe fewer decision problems than some longtermists suppose.
The third, "Mistakes in the moral mathematics of existential risk" (draft coming soon) is a follow-up paper to my "Existential risk pessimism and the time of perils". Part 7 of my series on that paper noted that one way to read that paper is as exposing a mistake we often make in calculating the value of existential risk mitigation (ignoring background risk). "Mistakes in the moral mathematics of existential risk" extends the discussion of ignoring background risk, and considers two additional mistakes of this type.
I also want to discuss papers by other authors. My tentative plan is to write a series entitled "Papers I have learned from". For example, I hope to discuss Harry Lloyd's "Time discounting, consistency, and special obligations: A defense of robust temporalism" which argues for the moral permissibility of a nonzero rate of pure time preference; Richard Pettigrew's "Effective altruism, risk, and human extinction," which looks at the impact of risk aversion on the value of efforts to reduce extinction risk; and Emma Curran's "Longtermism, aggregation, and catastrophic risk".
Sometime next year, the Global Priorities Institute will release an open-access volume of essays on longtermism (with Oxford University Press), with entries from world-leading scholars. In my completely unbiased opinion as an editor of that volume, many of the papers are excellent. I will almost certainly write a series discussing them.
Beyond the blog
While my main focus is, and always will be, on the production of academic papers, I have started doing some other forms of non-academic speaking and writing beyond the blog. Most recently, I recorded an episode of the Critiques of EA podcast with Nick Anyos.
I'm going to take this slowly. I am not trained to do public philosophy, and I am a bit nervous about getting it wrong. But as I build up a larger volume of work beyond the blog, I will probably start a series discussing some of the more substantial pieces. I will also try to be honest about my mistakes, of which I am sure there will be plenty. It is a learning experience for me.
There are a number of familiar challenges to longtermism and effective altruism, including the Cluelessness Problem, anti-fanaticism, objections from population ethics, and what I have called a regression to the inscrutable in theorizing about existential risk.
Some of these challenges are already quite well discussed, whereas others have no good orthodox statement that is accessible to a broad range of readers. I am thinking about writing a series that would devote a post to explaining and assessing each challenge.
I didn't do much work to sound-test the original title of my blog, Ineffective Altruism. It turned out that this name hit much harder than I had intended, and gave readers the wrong idea about the content and intellectual seriousness of the blog. It also turned out that the name overlapped with some previous online discourse that I was unaware of, and which I did not want to be associated with.
Drawing on user feedback, I changed the name of my blog to Reflective Altruism. The name communicates my intention to draw on academic research to reflect on questions and challenges raised by the effective altruism movement.
Several readers told me that they find it easier to consume long posts in audio form. Recent advances in text-to-speech technology made it possible for me to provide audio versions of all of my posts, and I began doing so about two months ago.
Going forward, all new posts will be available in audio format (except perhaps for a very few technical posts, which are not handled well by existing text-to-speech technology). I have also started adding audio versions of old posts. There is some amount of time and money that must be invested in the process. I hope that new advances in text-to-speech technology will lead to higher-quality, cheaper and faster audio production in the near future.
Social media matters
When I began my blog, I thought that I could just write content and expect people to read it. It turns out that you have to tell people about your blog if you want them to read it.
One way to do that is through blog updates, such as this one and my initial post to the EA Forum. Another effective technique is word of mouth: if you like my blog, I hope you will consider mentioning it to potentially interested readers.
Increasingly, it is also important to be on social media. My Twitter account (@ReflectiveAlt) allows me to comment on developments in real-time and also serves as an important way of updating readers about developments on the blog. I also have a Facebook page that is so embarrassingly dead I will not even link to it. Lesson learned.
Hard content is okay
I was initially concerned that the content on the blog would be too demanding. My posts are long, and while I do what I can to write accessibly, I do not skimp on or tone down difficult applications of academic research in philosophy and the social and natural sciences.
It was a pleasant surprise to see that, for the most part, this feature of the blog did not seem to put readers off. Indeed, if anything, the "lighter" blog series, created partly to fill the gaps between more difficult posts, started receiving fewer hits. This made me happy, and it is to the credit of my readers that they do not seem to be averse to reading through difficult and time-consuming material.
In the future, I plan to worry less about the demandingness of material and to focus more on producing high-quality content, without regard to difficulty.
Longer, less-frequent posts work
My initial plan was to post 2-3 shorter posts every week. I found that most readers were not interested in reading such frequent posts, and also preferred to have a larger chunk of material to engage with, rather than being drip-fed short pieces of a long argument over many posts.
I've now switched to a weekly posting schedule, with many posts in the 3-5k word range, or about 20-25 minutes of audio. I plan to keep to this schedule for as long as I can. If I become very busy, I may have to switch to an every-second-week posting schedule. I will try not to do that, but I am told that the first year on the tenure track is quite brutal.
I am not writing this blog for myself. I write for my readers. Many of my readers are on this forum. In fact, most of my readers are effective altruists, often quite engaged ones.
I value your opinions, suggestions, and feedback. I would like to hear what you think of the blog, what you think is going well, what may be going less well, and what you would like to see in the future.
If you are comfortable, please comment below to tell me what you think. If you would rather reach out privately, you can always reach me at email@example.com.
I don't have a cat.
Hi David, the blog sounds cool. I don't follow any personal blogs, but check the EA Forum regularly.
You comment that "most of my readers are effective altruists, often quite engaged ones."
I expect that there are others like me.
Why not just cross-post everything on your blog to the Forum?
Thanks for the kind words, Jamie!
I always appreciate engagement with the blog and I'm happy when people want to discuss my work on the EA Forum, including cross-posting anything they might find interesting. I also do my best to engage as I can on the EA Forum: I posted this blog update after several EA Forum readers suggested I do it.
I'm hesitant to post my blog posts directly as EA Forum posts. Although this is in many senses a blog about effective altruism, I'm not an effective altruist, and I need to keep some distance, both in terms of the readership I answer to and in terms of how I'm perceived.
I wouldn't complain if you wanted to cross-post any posts that you liked. This has happened before and I was glad to see it!
It can be pretty hard to stick with a movement when you see a lot of flaws in it. I'm always impressed by your patience and perseverance, and the high quality critique, keep it up!
For what it’s worth, I haven’t gotten around to reading a ton of your posts yet, but pretty much everyone I showed your blog to (myself included) could tell pretty quickly that it was a cut above whatever one might picture just from the title. That said, I think all the changes are good ideas on the whole. Keep up the good work!
Thanks Devin! Let me know what you think
I deeply enjoy your blog. I often grow frustrated with critiques of effective altruism for what I perceive as a lack of rigor, charity, and offered alternatives. Your blog is very different. I think it strikes a great balance: I feel like you genuinely engage with the ideas from a well-intended perspective, yet do not hesitate to be critical and cutting when you feel an issue is not well justified in EA discourse.
I particularly enjoyed the AI risk series and the Exaggerating the risks series. I take these to be the areas where, if EA erred, it would be most impactful to spot the error early and react, given the amount of funding and talent going into risk mitigation. I would love to read more content on regression to the inscrutable, which I found very insightful. I would also love to read more of your engagement with AI papers and articles.
I'd be interested in whether you or others have favorite critiques of EA that aim for a similar kind of engagement.
Thanks mhendric! I appreciate the kind words.
The honest truth is that prestige hierarchies get in the way of many people writing good critiques of EA. For any X (feminism, Marxism, non-consequentialism, ...), there's much more glory in writing a paper about X than a paper about X's implications for EA, so really the only way to get a good sense of what any particular X implies for EA is to learn a lot about X. That's frustrating, because EAs genuinely want to know what X implies for EA, but don't have years to learn.
Some publications (The good it promises volume; my blog) aim to bridge the gap, but there are also some decent papers if you're willing to read full papers. Papers like the Pettigrew, Heikkinen, and Curran papers in the GPI working paper series are worth reading, and GPI's forthcoming longtermism volume will have many others.
In the meantime ... I share your frustration. It's just very hard to convince people to sit down and spend a few years learning about EA before they write critiques of it (just like it's very hard to convince EAs to spend a few years learning about some specific X just to see what X might imply for EA). I'm not entirely sure how we will bridge this gap, but I hope we do.
I'll try to write more on the regression to the inscrutable and on AI papers. Any particular papers you want to hear about?
Not that my vote counts for a lot, but I think it would be worthwhile for EA-aligned or aligned-ish sources to fund the distillation or simplification of thoughtful criticism that is currently in too technical a form, or is otherwise hard to access for many people who would benefit from reading it. That seems like pretty low-hanging fruit, and making extant work more accessible doesn't really implicate some of the potential concerns and challenges of commissioning new criticism. I wasn't immediately able to find the papers you referenced on mobile, but my vague recollection of other GPI working papers is that accessibility could be a challenge for a bright but very generalist reader.
To operationalize: I don't have the money to fairly fund a paper's author -- or an advanced grad student -- distilling a technical paper into something at the level of several blog posts like the ones on your blog. But it's the kind of thing I'd personally be willing to fund if there were enough people willing to share in the cost (definition of "enough" is dependent on financial specifics).
Thank you for the recommendations. To be honest, the parts of The good it promises that I read struck me as very low quality and significantly worse than the average EA critique. The authors did not seem to me to engage in good-faith critique, and I found a fair amount of their claims and proposed alternatives outlandish and unconvincing. I also found many of the arguments to be relying on buzzwords rather than actual arguments, which made the book feel a bit like a vicious twitter thread. I read only about half of the book; maybe I focused on the wrong parts.
I will check the GPI working paper series for alternative critiques. Thank you for recommending them.
Two AI papers I'd be particularly interested to see you engage with are
"Concrete Problems in AI Safety"
"The alignment problem from a deep learning perspective"
On another note, I recently heard an interesting good-faith critique of EA called "But is it altruism?" by Peruzzi & Calderon. It is not published yet, but when it comes out, I could send it to you - it may be an interesting critique to dissect on the blog.
Again, thanks for your work on this blog. It's really appreciated, and it is impressive you are able to spend so much time thoughtfully reflecting on EA on this blog while being a full-time academic.
Thanks mhendric! Those are both good papers to consider, and I'll do my best to address them.
I didn't know the "But is it altruism?" paper. Please do send it when it comes out - I'd like to read it and hopefully write about it.
Thanks for the update, David!
I thought the series on Exaggerating the risks quite interesting. In particular, it helped me internalise the preliminary lessons of this post:
I think there is a strong tendency towards giving values between 1 % and 90 % for existential risk until 2100 if one knows very little about the risk, but super slim evidential basis is also compatible with values many OOMs below 1 %.
Update: I have now gone through the first 8 posts of the series Existential risk pessimism, and found it pretty valuable too. As someone who puts really large weight on expectational total hedonistic utilitarianism, I am not persuaded by common objections to longtermism such as "maybe creating lives is neutral" or "you cannot use expected value when the probabilities are super small". These are the ones I typically found on the EA Forum or EA-aligned podcasts, but the series shows that:
Thanks Vasco! I appreciate your readership, and you've got my view exactly right here. Even a 1% chance of literal extinction in this century should be life-alteringly frightening on many moral views (including mine!). Pushing the risk a fair bit lower than that should be a part of most plausible strategies for resisting the focus on existential risk mitigation.
Always appreciate your blog, even (maybe especially!) when I don't agree. It's a model of the kind of outside analysis and criticism I think is helpful to carefully weigh when trying to make EA a better version of itself.
And I am really happy to hear you landed a faculty post at a school like Vandy (assuming that's what you were looking for, of course).
Thanks Jason! And yes, I'm a southern boy. Vandy is just what I was looking for. I appreciate the kind words and your continued readership.
Not a content comment, but just a thank you for writing this intro to your blog and summary. I hadn't heard of the blog yet, but am super interested in reading it!
Thanks Milena! Let me know what you think.
Are there by any chance plans to collect the audio in a podcast feed?
Interesting! I think this should be manageable. Would people listen to this?
I would :)