Ben Kuhn is a data scientist and engineer at a small financial technology firm. He previously studied mathematics and computer science at Harvard, where he was also co-president of Harvard College Effective Altruism. He writes on effective altruism and other topics at his website.
Pablo: How did you become involved in the EA movement?
Ben: When I was a sophomore in high school (that's age 15 for non-Americans), Peter Singer gave his The Life You Can Save talk at my high school. He went through his whole "child drowning in the pond" spiel and explained that we were morally obligated to give money to charities that helped those who were worse off than us. In particular, I think at that point he was recommending donating to Oxfam in a sort of Kantian way where you gave an amount of money such that if everyone gave the same percentage it would eliminate world poverty. My friends and I realized that there was no utilitarian reason to stop at that amount of money--you should just donate everything that you didn't need to survive.
So, being not only sophomores but also sophomoric, we decided that since Prof. Singer didn't live in a cardboard box and wear only burlap sacks, he must be a hypocrite and therefore not worth paying attention to.
Sometime in the intervening two years I ran across Yvain's essay Efficient Charity: Do Unto Others and through it GiveWell. I think that was the point where I started to realize Singer might have been onto something. By my senior year (ages 17-18) I at least professed to believe pretty strongly in some version of effective altruism, although I think I hadn't heard of the term yet. I wrote an essay on the subject in a publication that my writing class put together. It was anonymous (under the brilliant nom de plume of "Jenny Ross") but somehow my classmates all figured out it was me.
The next big update happened during the spring of my first year of Harvard, when I started going to the Cambridge Less Wrong meetups and met Jeff and Julia. Through some chain of events they set me up with the folks who were then running Harvard High-Impact Philanthropy (which later became Harvard Effective Altruism). After that spring, almost everyone else involved in HHIP left and I ended up becoming president. At that point I guess I counted as "involved in the EA movement", although things were still touch-and-go for a while until John Sturm came onto the scene and made HHIP get its act together and actually do things.
Pablo: In spite of being generally sympathetic to EA ideas, you have recently written a thorough critique of effective altruism. I'd like to ask you a few questions about some of the objections you raise in that critical essay. First, you have drawn a distinction between pretending to try and actually trying. Can you tell us what you mean by this, and why you claim that a lot of effective altruism can be summarized as “pretending to actually try”?
Ben: I'm not sure I can explain better than what I wrote in that post, but I'll try to expand on it. For reference, here's the excerpt that you referred to:
By way of clarification, consider a distinction between two senses of the word “trying”.... Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Most people say they want to improve the world. Some of them say this because they actually want to improve the world, and some of them say this because they want to be perceived as the kind of person who wants to improve the world. Of course, in reality, everyone is motivated by other people's perceptions to some extent--the only question is by how much, and how closely other people are watching. But to simplify things let's divide the world up into those two categories, "altruists" and "signalers."
If you're a signaler, what are you going to do? If you don't try to improve the world at all, people will notice that you're a hypocrite. On the other hand, improving the world takes lots of resources that you'd prefer to spend on other goals if possible. But fortunately, looking like you're improving the world is easier than actually improving the world. Since people usually don't do a lot of due diligence, the kind of improvements that signalers make tend to be ones with very good appearances and surface characteristics--like PlayPumps, water-pumping merry-go-rounds which initially appeared to be a clever and elegant way to solve the problem of water shortage in developing countries. PlayPumps got tons of money and celebrity endorsements, and their creators got lots of social rewards, even though the pumps turned out to be hideously expensive, massively inefficient, prone to breaking down, and basically a disaster in every way.
So in this oversimplified world, the EA observation that "charities vary in effectiveness by orders of magnitude" is explained by "charities" actually being two different things: one group optimizing for looking cool, and one group optimizing for actually doing good. A large part of effective altruism is realizing that signaling-charities ("pretending to try") often don't do very much good compared to altruist-charities.
(In reality, of course, everyone is driven by some amount of signaling and some amount of altruism, so these groups overlap substantially. And there are other motivations for running a charity, like being able to convince yourself that you're doing good. So it gets messier, but I think the vastly oversimplified model above is a good illustration of where my point is coming from.)
Okay, so let's move to the second paragraph of the post you referenced:
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
The observation I'm making here is roughly that EA seems not to have switched entirely to doing good for altruistic rather than signaling reasons. It's more like we've switched to signaling that we're doing good for altruistic rather than signaling reasons. In other words, the motivation didn't switch from "looking good to outsiders" to "actually being good"--it switched from "looking good to outsiders" to "looking good to the EA movement."
Now, the EA movement is way better than random outsiders at distinguishing between things with good surface characteristics and things that are actually helpful, so the latter criterion is much stricter than the former, and probably leads to much more good being done per dollar. (For instance, I doubt the EA community would ever endorse something like PlayPumps.) But, at least at the time of writing that post, I saw a lot of behavior that seemed to be based on finding something pleasant and with good surface appearances rather than finding the thing that optimized utility--for instance, donating to causes without a particularly good case that they were better than saving or picking career options that seemed decent-but-not-great from an EA perspective. That's the source of the phrase "pretending to actually try"--the signaling isn't going away, it's just moving up a level in the hierarchy, to signaling that you don't care about signaling.
Looking back on that piece, I think “pretending to actually try” is still a problem, but my intuition is now that it's probably not huge in the scheme of things. I'm not quite sure why that is, but here are some arguments against it being very bad that have occurred to me:
- It's probably somewhat less prevalent than I initially thought, because the EAs making weird-seeming decisions may be making them for reasons that aren't transparent to me and that get left out by the typical EA analysis. The typical EA analysis tends to be a 50,000-foot average-case argument that can easily be invalidated by particular personal factors.
- As Katja Grace points out, encouraging pretending to really try might be optimal from a movement-building perspective, inasmuch as it's somewhat inescapable and still leads to pretty good results.
- I probably overestimated the extent to which motivated/socially-pressured life choices are bad, for a couple reasons. I discounted the benefit of having people do a diversity of things, even if the way they came to be doing those things wasn't purely rational. I also discounted the cost of doing something EA tells you to do instead of something you also want to do.
- For instance, suppose for the sake of argument that there's a pretty strong EA case that politics isn't very good (I know this isn't actually true). It's probably good for marginal EAs to be dissuaded from going into politics by this, but I think it would still be bad for every single EA to be dissuaded from going into politics, for two reasons. First, the arguments against politics might turn out to be wrong, and having a few people in politics hedges against that case. Second, it's much easier to excel at something you're motivated by, and the category of "people who are excellent at what they do" is probably as important to the EA movement as "people doing job X" for most X.
Pablo: In another section of that critique, you express surprise at the fact that so many effective altruists donate to global health causes now. Why would you expect EAs to use their money in other ways--whether it's donating now to other causes, or donating later--and what explains, in your opinion, this focus on causes for which we have relatively good data?
Ben: I'm no longer sure enough of where people's donations are going to say with certainty that too much is going to global health. My update here comes from a combination of being overconfident when I wrote the piece, and what looks like an increase in waiting to donate shortly after I wrote it. The latter was probably due in large part to AMF's delisting and perhaps the precedent set by GiveWell employees, many of whom waited last year (though others argued against it). (Incidentally, I'm excited about the projects going on to make this more transparent, e.g. the questions on the survey about giving!)
The giving now vs. later debate has been ably summarized by Julia Wise on the EA blog. My sense from reading various arguments for both sides is that I more often see bad arguments for giving now. There are definitely good arguments for giving at least some money now, but on balance I suspect I’d like to see more saving. Again, though, I don’t have a great idea of what people’s donation behavior actually is; my samples could easily be biased.
I think my strongest impression right now is that I suspect we should be exploring more different ways to use our donations. For instance, some people who are earning to give have experimented with funding people to do independent research, which was a pretty cool idea. Off the top of my head, some other things we could try include scholarships, essay contest prizes, career assistance for other EAs, etc. In general it seems like there are tons of ways to use money to improve the world, many of which haven’t been explored by GiveWell or other evaluators and many of which don’t even fall in the category of things they care about (because they’re too small or too early-stage or something), but we should still be able to do something about them.
Pablo: In the concluding section of your essay, you propose that self-awareness be added to the list of principles that define effective altruism. Any thoughts on how to make the EA movement more self-aware?
Ben: One thing that I like to do is think about what our blind spots are. I think it's pretty easy to look at all the stuff that is obviously a bad idea from an EA point of view, and think that our main problem is getting people "on board" (or even "getting people to admit they're wrong") so that they stop pursuing obviously bad ideas. And that's certainly helpful, but we also have a ways to go just in terms of figuring things out.
For instance, here's my current list of blind spots--areas where I wish there were a lot more thinking and idea-spreading going on than there currently is:
- Being a good community. The EA community is already having occasional growing pains, and this is only going to get worse as we gain steam, e.g. with Will MacAskill's upcoming book. And beyond that, I think that ways of making groups more effective (as opposed to individuals) have a lot of promise for making the movement better at what we do. Many, many intellectual groups fail to accomplish their goals for basically silly reasons, while seemingly much worse groups do much better on this dimension. It seems like there's no intrinsic reason we should be worse than, say, Mormons at building an effective community, but we're clearly not there yet. I think there's absolutely huge value in getting better at this, yet almost no one is putting in a serious, concerted effort.
- Knowing history. Probably as a result of EA's roots in math/philosophy, my impression is that our average level of historical informedness is pretty low, and that this makes us miss some important pattern-matches and cues. For instance, I think a better knowledge of history could help us think about capacity-building interventions, policy advocacy, and community building.
- Fostering more intellectual diversity. Again because of the math/philosophy/utilitarianism thing, we have a massive problem with intellectual monoculture. Of my friends, the ones I now most enjoy talking with about altruism are largely the ones who associate least with the broader EA community, because they have more interesting and novel perspectives.
- Finding individual effective opportunities. I suspect that there’s a lot of room for good EA opportunities that GiveWell hasn’t picked up on because they’re specific to a few people at a particular time. Some interesting stuff has been done in this vein in the past, like funding small EA-related experiments, funding people to do independent secondary research, or giving loans to other EAs investing in themselves (at least I believe this has been done). But I’m not sure if most people are adequately on the lookout for this kind of opportunity.
Pablo: Finally, what are your plans for the mid-term future? What EA-relevant activities will you engage in over the next few years, and what sort of impact do you expect to have?
Ben: A while ago I did some reflecting and realized that most of the things I did that I was most happy about were pretty much unplanned--they happened not because I carefully thought things through and decided that they were the best way to achieve some goal, but because they intuitively seemed like a cool thing to do. (Things in this category include starting a blog, getting involved in the EA/rationality communities, running Harvard Effective Altruism, getting my current job, etc.) As a result, I don't really have "plans for the mid-term future" per se. Instead, I typically make decisions based on intuitions/heuristics about what will lead to the best opportunities later on, without precisely knowing (or even knowing at all, often) what form those opportunities will take.
So I can't tell you what I'll be doing for the next few years--only that it will probably follow some of my general intuitions and heuristics:
- Do lots of things. The more things I do, the more I increase my "luck surface area" to find awesome opportunities.
- Do a few things really well. The point of this heuristic is hopefully obvious.
- Do things that other people aren't doing--or more accurately, things that not enough people are doing relative to how useful or important they are. My effort is most likely to make a difference in an area that is relatively under-resourced.
Anyway, that's my long-winded answer to the first part of this question. As far as EA-relevant activities and impacts, all the same caveats apply as above, but I can at least go over some things I'm currently interested in:
- Now that I'm employed full-time, I need to start thinking much harder about where exactly I want to give: both what causes seem best, and which interventions within those causes. I currently don't have much of a view on what I would do with more unrestricted funds.
- Related to the point above about self-awareness, I'm interested in learning some more EA-relevant history--how previous social movements have worked out, how well various capacity-building interventions have worked, more about policy and the various systems that philanthropy comes into contact with, etc.
- I'm interested to see to what extent the success of Harvard Effective Altruism can be sustained at Harvard and replicated at other universities.
- I think there may be under-investment in healthy EA community dynamics, preventing common failure modes like unfriendliness, resistance to new ideas, groupthink, etc.--though I can't say for sure because I don't have a great big-picture perspective of the EA community.
- I'm also interested in generally adding more intellectual/epistemic diversity to EA--we have something of a monoculture problem right now. Anecdotally, there are a number of people who I think would bring a really awesome perspective to many problems that we face, but who get turned off of the community for one reason or another.