I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.
(EDIT: As per the comments of Vaniver, Carl Shulman, and others on Less Wrong, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)
How to read this post
(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)
Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.
Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)
(End less relevant paragraphs.)
Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.
Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Below I introduce various ways in which effective altruists have failed to go beyond the social-satisficing algorithm of establishing some credibly acceptable alternatives and then picking among them based on essentially random preferences. I exhibit other areas where the norms of effective altruism fail to guard against motivated cognition. Both of these phenomena add what I call “epistemic inertia” to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments, preventing the movement from moving forward. I argue that this stems from effective altruists’ reluctance to think through issues of the form “being a successful social movement” rather than “correctly applying utilitarianism individually”. This could potentially be solved by introducing an additional principle of effective altruism—e.g. “group self-awareness”—but it may be too late to add new things to effective altruism’s DNA.
Philosophical difficulties
There is currently wide disagreement among effective altruists on the correct framework for population ethics. This is crucially important for determining the best way to improve the world: different population ethics can lead to drastically different choices (or at least so we would expect a priori), and if the EA movement can’t converge on at least their instrumental goals, it will quickly fragment and lose its power. Yet there has been little progress towards discovering the correct population ethics (or, from a moral anti-realist standpoint, constructing arguments that will lead to convergence on a particular population ethics), or even determining which ethics lead to which interventions being better.
Poor cause choices
Many effective altruists donate to GiveWell’s top charities. All three of these charities work in global health. Is that because GiveWell knows that global health is the highest-leverage cause? No. It’s because global health was the only cause with enough data to say anything very useful about. There’s little reason to suppose that this correlates with being particularly high-leverage—on the contrary, heuristic but less rigorous arguments for causes like existential risk prevention, vegetarian advocacy and open borders suggest that these could be even more efficient.
Furthermore, our current “best known intervention” is likely to change (in a more cost-effective direction) in the future. There are two competing effects here: we might discover better interventions to donate to than the ones we currently think are best, but we also might run out of opportunities for the current best known intervention, and have to switch to the second. So far we seem to be in a regime where the first effect dominates, and there’s no evidence that we’ll reach a tipping point very soon, especially given how new the field of effective charity research is.
Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides. And anyway, donating when you believe it’s not (except for example-setting) the best possible course of action, in order to make a point about figuring out the best possible course of action and then doing that thing, seems perverse.
Non-obviousness
Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.
Efficient markets for giving
It’s often claimed that “nonprofits are not a market for doing good; they’re a market for warm fuzzies”. This is used as justification for why it’s possible to do immense amounts of good by donating. However, while it’s certainly true that most donors aren’t explicitly trying to purchase utility, there’s still a lot of money that is.
The Gates Foundation is an example of such an organization. They’re effectiveness-minded, with $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.
So why not just donate to the Gates Foundation? Effective altruists need a better account of the “market inefficiencies” that they’re exploiting that Gates isn’t. Why didn’t the Gates Foundation fund the Against Malaria Foundation, GiveWell’s top charity, when it’s in one of their main research areas? It seems implausible that the answer is simple incompetence or the like.
A general rule of markets is that if you don’t know what your edge is, you’re the sucker. Many effective altruists, when asked what their edge is, give some answer along the lines of “actually being strategic/thinking about utility/caring about results”, and stop thinking there. This isn’t a compelling case: as mentioned before, it’s not clear why no one else is doing these things.
Inconsistent attitude towards rigor
Effective altruists insist on extraordinary rigor in their charity recommendations—cf. for instance GiveWell’s work. Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings.
Poor psychological understanding
John Sturm suggests, and I agree, that many of these issues are psychological in nature:
I think a lot of these problems take root in a commitment-level issue:
I, for instance, am thrilled about changing my mentality towards charity, not my mentality towards having kids. My first guess is that - from an EA and overall ethical perspective - it would be a big mistake for me to have kids (even after taking into account the normal EA excuses about doing things for myself). At least right now, though, I just don’t care that I’m ignoring my ethics and EA; I want to have kids and that’s that.
This is a case in which I’m not “being lazy” so much as just not trying at all. But when someone asks me about it, it’s easier for me to give some EA excuse (like that having kids will make me happier and more productive) that I don’t think is true - and then I look like I’m being a lazy or careless altruist rather than not being one at all.
The model I’m building is this: there are many different areas in life where I could apply EA. In some of them, I’m wholeheartedly willing. In some of them, I’m not willing at all. Then there are two kinds of areas where it looks like I’m being a lazy EA: those where I’m willing and want to be a better EA… and those where I’m not willing but I’m just pretending (to myself or others or both).
The point of this: when we ask someone to be a less lazy EA, we are (1) helping them do a better job at something they want to do, and (2) trying to make them either do more than they want to or admit they are “bad”.
In general, most effective altruists respond to deep conflicts between effective altruism and other goals in one of the following ways:
- Unconsciously resolve the cognitive dissonance with motivated reasoning: “it’s clearly my comparative advantage to spread effective altruism through poetry!”
- Deliberately and knowingly use motivated reasoning: “dear Facebook group, what are the best utilitarian arguments in favor of becoming an EA poet?”
- Take the easiest “honest” way out: “I wouldn’t be psychologically able to do effective altruism if it forced me to go into finance instead of writing poetry, so I’ll become an effective altruist poet instead”.
The third is debatably defensible—though, for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work.
Furthermore, EA norms do not proscribe even the first two, leading to a group norm that doesn’t cause people to notice when they’re engaging in a certain amount of motivated cognition. This is quite toxic to the movement’s ability to converge on the truth. (As before, effective altruists are still better than the general population at this; the core EA principles are strong enough to make people notice the most blatant motivated cognition that obviously runs afoul of them. But that’s not nearly good enough.)
Historical analogues
With the partial exception of GiveWell’s history of philanthropy project, there’s been no research into good historical outside views. Although there are no direct precursors of effective altruism (worrying in its own right; see above), there is one notably similar movement: communism, where the idea of “from each according to his ability, to each according to his needs” originated. Communism is also notable for its various abject failures. Effective altruists need to be more worried about how they will avoid failures of a similar class—and in general they need to be more aware of the pitfalls, as well as the benefits, of being an increasingly large social movement.
Aaron Tucker elaborates better than I could:
In particular, Communism/Socialism was a movement that was started by philosophers, then continued by technocrats, where they thought reason and planning could make the world much better, and that if they coordinated to take action to fix everything, they could eliminate poverty, disease, etc.
Marx totally got the “actually trying vs. pretending to try” distinction AFAICT (“Philosophers have only explained the world, but the real problem is to change it” is a quote of his), and he really strongly rails against people who unreflectively try to fix things in ways that make sense to the culture they’re starting from—the problem isn’t that the bourgeoisie aren’t trying to help people, it’s that the only conception of help that the bourgeoisie have is one that’s mostly epiphenomenal to actually improving the lives of the proletariat—giving them nice bourgeoisie things like education and voting rights, but not doing anything to improve the material condition of their life, or fix the problems of why they don’t have those in the first place, and don’t just make them themselves.
So if Marx got the pretend/actually try distinction, and his followers took over countries, and they had a ton of awesome technocrats, it seems like it’s the perfect EA thing, and it totally didn’t work.
Monoculture
Effective altruists are not very diverse. The vast majority are white, “upper-middle-class”, intellectually and philosophically inclined, from a developed country, etc. (and I think it skews significantly male as well, though I’m less sure of this). And as much as the multiple-perspectives argument for diversity is hackneyed by this point, it seems quite germane, especially when considering e.g. global health interventions, whose beneficiaries are culturally very foreign to us.
Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, they are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto people they’re trying to help. Even if EAs are quite confident that the utilitarian/reductionist/rationalist worldview is correct, the outside view is that really engaging with a greater diversity of opinions is very helpful.
Community problems
The discourse around effective altruism in e.g. the Facebook group used to be of fairly high quality. But as the movement grows, the traditional venues of discussion are getting inundated with new people who haven’t absorbed the norms of discussion or standards of proof yet. If this is not rectified quickly, the EA community will cease to be useful at all: there will be no venue in which a group truth-seeking process can operate. Yet nobody seems to be aware of the magnitude of this problem. There have been some half-hearted attempts to fix it, but nothing much has come of them.
Movement building issues
The whole point of having an effective altruism “movement” is that it’ll be bigger than the sum of its parts. Being organized as a movement should turn effective altruism into the kind of large, semi-monolithic actor that can actually get big stuff done, not just make marginal contributions.
But in practice, large movements and truth-seeking hardly ever go together. As movements grow, they get more “epistemic inertia”: it becomes much harder for them to update on evidence. This is because they have to rely on social methods to propagate their memes rather than truth-seeking behavior. But people who have been drawn to EA by social pressure rather than truth-seeking take much longer to change their beliefs, so once the movement reaches a critical mass of them, it will become difficult for it to update on new evidence. As described above, this is already happening to effective altruism with the ever-less-useful Facebook group.
Conclusion
I’ve presented several areas in which the effective altruism movement fails to converge on truth through a combination of the following effects:
- Effective altruists “stop thinking” too early and satisfice for “doesn’t obviously conflict with EA principles” rather than optimizing for “increases utility”. (For instance, they choose donations poorly due to this effect.)
- Effective altruism puts strong demands on its practitioners, and EA group norms do not appropriately guard against motivated cognition to avoid them. (For example, this often causes people to choose bad careers.)
- Effective altruists don’t notice important areas to look into, specifically issues related to “being a successful movement” rather than “correctly implementing utilitarianism”. (For instance, they ignore issues around group epistemology, historical precedents for the movement, movement diversity, etc.)
These problems are worrying on their own, but the lack of awareness of them is the real problem. The monoculture is worrying, but the lackadaisical attitude towards it is worse. The lack of rigor is unfortunate, but the fact that people haven’t noticed it is the real problem.
Either effective altruists don’t yet realize that they’re subject to the failure modes of any large movement, or they don’t feel motivation to do the boring legwork of e.g. engaging with viewpoints that your inside view says are annoying but that the outside view says are useful on expectation. Either way, this bespeaks worrying things about the movement’s staying power.
More importantly, it also indicates an epistemic failure on the part of effective altruists. The fact that no one else within EA has done a substantial critique yet is a huge red flag. If effective altruists aren’t aware of strong critiques of the EA movement, why aren’t they looking for them? This suggests that, contrary to the emphasis on rationality within the movement, many effective altruists’ beliefs are based on social, rather than truth-seeking, behavior.
If it doesn’t solve these problems, effective-altruism-the-movement won’t help me achieve any more good than I could individually. All it will do is add epistemic inertia, as it takes more effort to shift the EA consensus than to update my individual beliefs.
Are these problems solvable?
It seems to me that the third issue above (lack of self-awareness as a social movement) subsumes the other two: if effective altruism as a movement were sufficiently introspective, it could probably notice and solve the other two problems, as well as future ones that will undoubtedly crop up.
Hence, I propose an additional principle of effective altruism. In addition to being altruistic, maximizing, egalitarian, and consequentialist, we should be self-aware: we should think carefully about the issues associated with being a successful movement, in order to make sure that we can move beyond the obvious applications of EA principles and come up with non-trivially better ways to improve the world.
Acknowledgments
Thanks to Nick Bostrom for coining the idea of a hypothetical apostasy, and to Will Eden for mentioning it recently.
Thanks to Michael Vassar, Aaron Tucker and Andrew Rettek for inspiring various of these points.
Thanks to Aaron Tucker and John Sturm for reading an advance draft of this post and giving valuable feedback.
IIRC a lot of people liked this post at the time, but I don't think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems.
I'm glad I wrote this because it played a part in inspiring Jacob to write up his better version, and I think it was a useful exercise for me and an interesting historical artifact from the early days of EA, but I don't think the ideas in it ultimately mattered that much.
"They’re effectiveness-minded and with $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses."
What I've so far read in this essay is very good; however, I'd note the foundation has spent almost 30 billion, a large fraction of it on vaccines (I can't find how much with a simple search). The numbers suggest the cost per life saved is in the 1-2k range, or at least the high three digits, which is in the same range as the AMF estimates.
In 2010 they announced that they would "more than double" their vaccine spending to 10 billion total, by 2020 (e.g. http://www.nytimes.com/2010/01/30/health/30gates.html). That puts it in the mid-to-high three digits range, which is about three times better than AMF. I wouldn't call that "the same range" as the AMF estimates, especially since it's no longer so clear that those estimates even apply to marginal dollars given to AMF.
My comment, cross-posted on LessWrong (links are missing here): http://lesswrong.com/r/discussion/lw/j8v/in_praise_of_tribes_that_pretend_to_try/
Disclaimer: I endorse the EA movement and direct an EA/Transhumanist organization, www.IERFH.org
We have finally created the first "inside view" critique of EA.
The critique's main worry would please Hofstadter by being self-referential: being the first, and having taken too long to emerge, it indicates that EAs (effective altruists) are pretending to try instead of actually trying, or else they’d have self-criticized already.
Here I will try to clash head-on with what seems to be the most important point of that critique. This will be the only point I'll address, for the sake of brevity, mnemonics and force of argument. This is a meta-contrarian apostasy, in its purpose. I'm not sure it is a view I hold, any more than a view I think has to be out there in the open, being thought of and criticized. I am mostly indebted to this comment by Viliam_Bur, which was marinating in my mind while I read Ben Kuhn's apostasy.
Original Version Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Counterargument: Tribes have internal structure, and so should the EA movement.
This includes a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.
Feeling-oriented, and outcome-oriented communities
People probably need two kinds of communities -- let's call them "feelings-oriented community" and "outcome-oriented community". To many people this division has been "home" and "work" over the centuries, but that has some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles, to greater or lesser degrees. Indigenous tribes keep the realms separated: "work" has a time and a place, while rituals, late-afternoon discussions, chants, and so on fulfill the purpose of "church".
A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. The examples are a functional family, a church group, friends meeting in a pub, etc. One important property of feelings-oriented communities, which according to Dennett has not yet sunk in within the naturalist community, is that nothing is a precondition for belonging to the group that feels, or for the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, or listening to the tribal leaders and shamans talk without saying a word. There are no prerequisites to being your parents' son, or your sister's brother, every time you enter the house.
An "outcome-oriented community" is a community that has an explicit goal, whose people genuinely contribute to making that goal happen. The examples are a business company, an NGO, a Toastmasters meetup, an intentional household, etc. To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for its own sake, or in exchange for something valuable). There is some tolerance if you stop doing things well, whether through ignorance or, say, bad health. But the tolerance is finite, and the group can frown upon, punish, or even expel those who are not clearly helping the goal.
What are communities good for? What is good for communities?
The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)
As an evolutionary just-so story, we have a tribe composed of many different people, and within the tribe we have a hunters group, containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient in their jobs. But hunters don't become a separate tribe... they go hunting for a while, and then return back to their original tribe. The tribe membership is for life, or at least for a long time; it provides safety and fulfills the emotional needs. Each hunting expedition is a short-term event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter; but he still remains a member of his tribe. The hunter has now descended from the feeling-and-work status to only the feeling status. This is part of expected cycles: a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways. But throughout, they are not cast away from the reassuring arms of the feelings-oriented community.
A healthy double layered movement
Viliam and I think a healthy way of living should be modeled like this; on two layers. To have a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (organizers of the next meetup). Of course it could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project -- otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formally members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be motivated or punished socially.
This is the crux of Viliam's argument and of my disagreement with Ben's Critique: The Effective Altruist community has grown large enough that it can easily afford to have two kinds of communities inside it: The feelings-oriented EA's, whom Ben calls (unfairly in my opinion) pretending to try to be effective altruists, and the outcome-oriented EA’s, whom are Really trying to be effective altruists.
Now that is not how he put it in his critique. He used the fact that that critique had not been written, as sufficiently strong indication that the whole movement, a monolithic, single entity, had failed it’s task of being introspective enough about it’s failure modes. This is unfair on two accounts, someone had to be the first, and the movement seems young enough that that is not a problem, and it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community in different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.
Intentional Agents, communities or individuals, are not monolithic
Most importantly, if you consider the argument above that Effective Altruim can’t be criticized on accounts of being one single entity, because factually, it isn’t, then I wish you to bring this intuition pump one step further: Each one of us is also not one single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just like you can’t critize EA as a whole for something that it’s subsets haven’t done (the fancy philosophers word for this is mereological fallacy), likewise you can’t claim about a particular individual that he, as a whole, pretends to try, because you’ve seen him have one or two lazy days, or if he is still addicted to a particular video game. Don’t forget the demanding objection to utilitarianism, if you ask of a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking, and he just couldn't live with that much inconsistency in his self view. Likewise, if to be an utilitarian is infinitely demanding, you lose the utilitarians to “what the hell” effects.
The same goes for Effective Altruists. Ben’s post makes the case for really effective altruism too demanding. Not even inside we are truly and really a monolithic entity, or a utility function optimizer - regardless of how much we may wish we were. My favoured reading of the current state of the Effective Altruist people is not that they are pretending to really try, but that most people are finding, for themselves, which are the aspects of their personalities they are willing to bend for altruism, and which they are not. I don’t expect and don’t think anyone should expect that any single individual becomes a perfect altruist. There are parts of us that just won’t let go of some thing they crave for and praise. We don’t want to lose the entire community if one individual is not effective enough, and we don’t want to lose one individual if a part of him, or a time-slice, is not satisfying the canonical expectation of the outcome-oriented community.
Rationalists already accepted a layered structure
We need to accept, as EA’s, what Lesswrong as blog has accepted, there will always be a group that is passive, and feeling-oriented, and a group that is outcome-oriented. Even if the subject matter of Effective Altruism is outcome.
For a less sensitive example, consider an average job: you may think about your colleagues as your friends, but if you leave the job, how many of them will you keep regular contact with? In contrast with this, a regular church just asks you to come to sunday prayers, gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose the level of your participation, and you can change it during your life. For a non-religious example, in a dance group you could just go and dance, or chose to do the new year’s presentation, or choose to find new dancers, all the way up to being the dance organizer and coordinator.
The current rationalist community has solved this problem to some extent. Your level of participation can range from being a lurker at LW, all the way up, from meetup organizer to CFAR creator to writing the next HPMOR or it’s analogue.
Viliam ends his comment by saying: It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people.
The challenge, in my view from now on is not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at that moment is not doing the right things? How can we make EA’s a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources necessary to really go there and do the impossible.
Here are some examples of this layered system working in non-religious non-tribal settings: Lesswrong has a karma system to tell different functions within the community. It also has meetups, it also has a Study Hall, and it also has strong relations with CFAR and MIRI.
Leverage research, as community/house has active hard-core members, new hirees, people in training, and friends/relationships of people there, very different outcomes expected from each.
Transhumanists have people who only self-identify, people who attend events, people who write for H+ magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content in related topics.
The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness
The Effective Altruist community does not need to get introspectively even more focused on effectiveness - at least not right now - what it needs is a designed hierarchical structure which allows it to let everyone in, and let everyone transition smoothly between different levels of commitment.
Most people will transition upward, since understanding more makes you more interested, more effective, etc… in an upward spiral. But people also need to be able to slide down for a bit. To meet their relatives for thanksgiving, to play Go with their workfriends, to dance, to pretend they don’t care about animals. To do their thing. Their internal thing which has not converted to EA like the rest of them have. This is not only okay, it is not only tolerable, it is essential for the movement’s survival.
But then how can those who are at their very best, healthy, strong, smart, and at the edge of the movement push it forward?
Here is an obvious place not to do it: Open groups on Facebook.
Open Facebook is not the place to move it forward. Some people who are recognized as being in the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise and others should create an “advancing Effective Altruism” group on facebook, and there and then will be a place where no blood will be shed on the hands of neither the feeling-oriented, nor the outcome-oriented group by having to decrease the signal to noise ratio within either.
Now once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be at a feeling-oriented moment, or to have feeling-oriented experiences) we will also want to increase the chance that people will move up the hierarchical ladder. As many as possible, as soon as possible, after all, the higher up you are, by definition the more likely you are to be generating good outcome. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose, it helps altruists when they are feeling down, unproductive, sad, or anything actually, and we will hear you and embrace you even if you are not being particularly effective and altruistic when you get there. It is the legacy of our deceased friend, Jonatas, to all of us, because of him, we now have some understanding that people need love and companionship especially when they are down. Or we may lose all of their future good moments. The monolithic individual fallacy is a very pricy one to pay. Let us not learn the hard way by losing another member.
Conclusions
I have argued here that the main problem indicated in Ben’s writing, that effective altruists are pretending to really try, is not to be viewed in this light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for something that Viliam_Bur has called feeling-oriented community, without which many people would leave the movement, by experiencing it as too demanding during their bad times, or if it strongly conflicted a particular subset of themselves they consider important. Instead I advocate for hierarchically separate communities within the movement, allowing those who are at any particular level of commitment to grow stronger and win.
The first three initial measures I suggest for this re-design of the community are:
1) Making all effective altruists aware that the EA self-help group exists for anyone who, for any reason, wants help from the community, even for non EA related affairs.
2) Creating a Closed Facebook group with only those who are advancing the discussion at its best, for instance those who wrote long posts in their own blogs about it, or obvious major figures.
3) Creating a Study Hall equivalent for EA’s to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say, and just do a few pomodoros.
This is my first long writing on Effective Altruism, and my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I may have helped shed some light on the discussion, and that my critique can be taken by all, specially Ben, to be oriented envisioning the same large scale goal that is shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben’s and others, to build not only a movement that is stronger in it’s individuals emotions, as I have advocated here, but furthermore in being psychologically healthy and functional group, a whole that understands the role of its parts, and subdivides accordingly.
Cross-posted on Lesswrong discussion (links are missing here): http://lesswrong.com/r/discussion/lw/j8v/in_praise_of_tribes_that_pretend_to_try/
Disclaimer: I endorse the EA movement and direct an EA/Transhumanist organization, www.IERFH.org
We have finally created the first "inside view" critique of EA.
The critique's main worry would please Hofstadter by being self-referential: it is the first of its kind, and it took too long to emerge, which supposedly indicates that EAs (Effective Altruists) are pretending to try instead of actually trying; otherwise they would have self-criticized already.
Here I will try to clash head-on with what seems to be the most important point of that critique. For the sake of brevity, mnemonics, and force of argument, it is the only point I will address. This piece is a meta-contrarian apostasy in its purpose: I am not sure it is a view I hold, so much as a view I think has to be out there in the open, being thought about and criticized. I am mostly indebted to this comment by Viliam_Bur, which had been marinating in my mind while I read Ben Kuhn's apostasy.
Abstract of Ben's original critique
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Counterargument: Tribes have internal structure, and so should the EA movement.
What follows is a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.
Feeling-oriented and outcome-oriented communities
People probably need two kinds of communities; let's call them "feelings-oriented community" and "outcome-oriented community". For many people, over the centuries, this division has been "home" and "work", but that carries some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles to greater or lesser degrees. Indigenous tribes keep the realms separated: "work" has a time and a place, while rituals, late-afternoon discussions, chants, and so on fulfill the purpose of "church".
A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. Examples are a functional family, a church group, friends meeting in a pub, and so on. One important property of feeling-oriented communities, which according to Dennett has not yet sunk in among naturalists, is that nothing is a precondition for belonging to the group that feels, or for the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, or listening to the tribal leaders and shamans talk without saying a word. There are no prerequisites to being your parents' son, or your sister's brother, every time you enter the house.
An "outcome-oriented community" is a community with an explicit goal, where people genuinely contribute to making that goal happen. Examples are a business company, an NGO, a Toastmasters meetup, an intentional household, etc. To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for its own sake, or in exchange for something valuable). There is some tolerance if you stop doing things well, whether out of ignorance or, say, bad health. But the tolerance is finite, and the group can frown upon, punish, or even expel those who are not clearly helping the goal.
What are communities good for? What is good for communities?
The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)
As an evolutionary just-so story, we have a tribe composed of many different people, and within the tribe a hunters' group containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient at their jobs. But hunters don't become a separate tribe: they go hunting for a while, and then return to their original tribe. Tribe membership is for life, or at least for a long time; it provides safety and fulfills emotional needs. Each hunting expedition is a short-term event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter, but he still remains a member of his tribe. The hunter has descended from both the feeling and work statuses to the feeling status alone. This is part of expected cycles: a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways. But throughout, they are not cast away from the reassuring arms of the feelings-oriented community.
A healthy double-layered movement
Viliam and I think a healthy way of living should be modeled on two layers: a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (organizers of the next meetup). Of course there could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project. Otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formally members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be socially rewarded or punished.
This is the crux of Viliam's argument and of my disagreement with Ben's critique: the Effective Altruist community has grown large enough that it can easily afford to contain two kinds of communities: the feelings-oriented EAs, who Ben says (unfairly, in my opinion) are pretending to try to be effective altruists, and the outcome-oriented EAs, who are really trying to be effective altruists.
Now, that is not how he put it in his critique. He used the fact that such a critique had not been written as a sufficiently strong indication that the whole movement, as a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two counts. First, someone had to be first, and the movement seems young enough that this is not yet a problem. Second, it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community at different stages of life and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple of thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.
Intentional agents, whether communities or individuals, are not monolithic
Most importantly, if you accept the argument above that Effective Altruism can't be criticized as one single entity, because factually it isn't one, then I ask you to take this intuition pump one step further: each of us is also not one single monolithic agent. We have good and bad days, and we are made of lots of tiny agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can't criticize EA as a whole for something its subsets haven't done (the fancy philosopher's word for this is the mereological fallacy), you can't claim that a particular individual, as a whole, pretends to try because you've seen him have one or two lazy days, or because he is still addicted to a particular video game. Don't forget the demandingness objection to utilitarianism: if you ask a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking, and he couldn't live with that much inconsistency in his self-view. Likewise, if being a utilitarian is infinitely demanding, you lose the utilitarians to "what the hell" effects.
The same goes for Effective Altruists. Ben's post makes the bar for really effective altruism too demanding. Not even internally are we truly a monolithic entity or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of the Effective Altruist people is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism and which they are not. I don't expect, and don't think anyone should expect, that any single individual becomes a perfect altruist. There are parts of us that just won't let go of something they crave and praise. We don't want to lose the entire community if one individual is not effective enough, and we don't want to lose one individual if a part of him, or a time-slice of him, fails to satisfy the canonical expectations of the outcome-oriented community.
Rationalists have already accepted a layered structure
We need to accept, as EAs, what Less Wrong as a blog has accepted: there will always be a group that is passive and feeling-oriented, and a group that is outcome-oriented, even if the subject matter of Effective Altruism is outcomes.
For a less sensitive example, consider an average job: you may think of your colleagues as your friends, but if you leave the job, how many of them will you keep in regular contact with? In contrast, a regular church just asks you to come to Sunday prayers and gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose your level of participation, and you can change it over the course of your life. For a non-religious example, in a dance group you could just go and dance, or choose to do the New Year's presentation, or choose to recruit new dancers, all the way up to being the dance organizer and coordinator.
The current rationalist community has solved this problem to some extent. Your level of participation can range from lurking at LW all the way up through meetup organizer to CFAR creator to writing the next HPMOR or its analogue.
Viliam ends his comment by saying: "It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people."
The challenge, in my view, is from now on not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at that moment is not doing the right things. How can we make EA a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources to really go out there and do the impossible?
Here are some examples of this layered system working in non-religious, non-tribal settings. Less Wrong has a karma system to distinguish different functions within the community; it also has meetups, a Study Hall, and strong relations with CFAR and MIRI.
Leverage Research, as a community and house, has active hard-core members, new hires, people in training, and friends and partners of people there, with very different outcomes expected from each.
Transhumanists have people who merely self-identify, people who attend events, people who write for H+ Magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content in related topics.
The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness
The Effective Altruist community does not need to become even more introspectively focused on effectiveness, at least not right now. What it needs is a designed hierarchical structure that lets everyone in, and lets everyone transition smoothly between different levels of commitment.
Most people will transition upward, since understanding more makes you more interested, more effective, and so on, in an upward spiral. But people also need to be able to slide down for a bit: to meet their relatives for Thanksgiving, to play Go with their work friends, to dance, to pretend they don't care about animals. To do their thing, the internal thing which has not converted to EA like the rest of them has. This is not only okay, not only tolerable; it is essential for the movement's survival.
But then how can those who are at their very best (healthy, strong, smart, and at the edge of the movement) push it forward?
Here is an obvious place not to do it: Open groups on Facebook.
Open Facebook groups are not the place to move it forward. Some people recognized as being at the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise, and others, should create an "advancing Effective Altruism" group on Facebook. That would give the discussion a home where neither the feeling-oriented nor the outcome-oriented group has to degrade the other's signal-to-noise ratio.
Now, once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be at a feeling-oriented moment, or to have feeling-oriented experiences), we will also want to increase the chance that people move up the hierarchical ladder: as many as possible, as soon as possible, since the higher up you are, by definition, the more likely you are to be generating good outcomes. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose: it helps altruists when they are feeling down, unproductive, sad, or anything at all, and we will hear you and embrace you even if you are not being particularly effective or altruistic when you arrive. It is the legacy of our deceased friend Jonatas to all of us. Because of him, we now have some understanding that people need love and companionship especially when they are down, or we may lose all of their future good moments. The monolithic-individual fallacy is a very pricey one to pay for. Let us not learn the hard way by losing another member.
Conclusions
I have argued here that the main problem indicated in Ben's writing, that effective altruists are pretending to really try, should not be viewed in that light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for what Viliam_Bur has called the feelings-oriented community, without which many people would leave the movement, experiencing it as too demanding during their bad times, or when it strongly conflicts with a part of themselves they consider important. I advocate instead for hierarchically separate communities within the movement, allowing those at any particular level of commitment to grow stronger and win.
The three initial measures I suggest for this redesign of the community are:
1) Making all effective altruists aware that the EA Self-Help group exists for anyone who, for any reason, wants help from the community, even in non-EA-related affairs.
2) Creating a closed Facebook group containing only those who are advancing the discussion at its best, for instance those who have written long posts about it on their own blogs, or obvious major figures.
3) Creating a Study Hall equivalent for EAs, to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say and just do a few pomodoros.
This is my first long piece of writing on Effective Altruism, my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I have helped shed some light on the discussion, and that my critique can be taken by all, especially Ben, as oriented toward the same large-scale goal shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben's and others', to build a movement that is not only stronger in its individuals' emotions, as I have advocated here, but is also a psychologically healthy and functional group: a whole that understands the role of its parts, and subdivides accordingly.